FEAT Have model cards #43
@merveenoyan thanks for initiating the conversation. Your proposal sounds solid to me. Just for context, https://huggingface.co/chansung/segmentation-training-pipeline/ is how the current repository looks for a pushed model; the model is automatically pushed from a TFX pipeline (refer here and here). It would be really good to include model cards in the Hub model repository, since it's also possible to customize the model card w.r.t. the components mentioned by Merve (hyperparameters, metrics, dataset, etc.). A concrete example is here. I believe the model card utilities should be implemented as a part of
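For reference, a minimal sketch of what card generation could look like with the `huggingface_hub` card utilities (the template path, repo id, and the hyperparameter/metric values below are illustrative placeholders, not the actual pipeline outputs):

```python
from huggingface_hub import ModelCard, ModelCardData

# Sketch only: build a card from structured metadata plus a Markdown template.
# All concrete values (model id, metrics, template path) are placeholders.
card_data = ModelCardData(
    library_name="keras",
    tags=["image-segmentation", "tfx"],
)

card = ModelCard.from_template(
    card_data,
    template_path="model_card_template.md",  # hypothetical template shipped with the pusher
    model_id="chansung/segmentation-training-pipeline",
    hyperparameters={"learning_rate": 1e-3, "epochs": 10},
    eval_metrics={"mean_iou": 0.87},
)

card.save("README.md")  # later uploaded alongside the SavedModel
```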
@sayakpaul can you assign this to me? I'll create a tracker for the model card parts above so we can handle it one step at a time and come up with something very fast.
@deep-diver could you invite Merve to the repo? I will assign it right away after that.
I just invited her as a collaborator to this repository!
About this example of using TensorBoard (https://huggingface.co/keras-io/lowlight-enhance-mirnet/tensorboard), can we automatically generate the model card based on the TensorBoard logs?
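One way that could work is to read the scalars back out of the TensorBoard event files; a rough sketch, where the log directory and tag names are assumptions and the output would still need to be formatted into the card:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Sketch: pull scalar metrics back out of TensorBoard event files so they can be
# written into the model card. Log dir and tag names are placeholder assumptions.
def scalars_from_tensorboard(log_dir: str, tags=("epoch_loss", "epoch_accuracy")):
    acc = EventAccumulator(log_dir)
    acc.Reload()
    available = set(acc.Tags().get("scalars", []))
    return {
        tag: [(event.step, event.value) for event in acc.Scalars(tag)]
        for tag in tags
        if tag in available
    }

history = scalars_from_tensorboard("logs/train")
```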
Also, it is worth noting how TFX official
@deep-diver normally the TensorBoard logs need to be pushed separately. The logs in the model card don't depend on the TensorBoard logs but rather on the model history, so if we keep the history we can write the metrics to the card 🙂 It gets a bit exhausting though when there are too many epochs, so we should put it under a toggle IMO.
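A rough sketch of that idea; using a `<details>` block as the "toggle" is my assumption, and the history object is whatever `model.fit` returns:

```python
def history_to_markdown(history, max_rows=20):
    """Render a Keras History object as a collapsible Markdown table.

    Sketch only: wrapping the table in <details> is one way to implement the
    "toggle" idea; max_rows just truncates very long trainings.
    """
    metrics = list(history.history.keys())
    header = "| epoch | " + " | ".join(metrics) + " |"
    divider = "|" + "---|" * (len(metrics) + 1)
    rows = []
    for epoch in range(len(history.epoch[:max_rows])):
        values = " | ".join(f"{history.history[m][epoch]:.4f}" for m in metrics)
        rows.append(f"| {epoch} | {values} |")
    table = "\n".join([header, divider, *rows])
    return f"<details>\n<summary>Training metrics</summary>\n\n{table}\n\n</details>"
```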
We can respect the formats, i.e., if we're pushing to the Hugging Face Hub, then I guess it's best to follow what's typically suggested for Hub model repositories, as those conventions have already gone through rigorous validation. For the Hub model cards here, I think it's best to combine the core eval metrics and whatever we get from
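To make those core eval metrics machine-readable on the Hub, they could also go into the card's structured metadata; a sketch with `huggingface_hub`, where the task, dataset, and metric values are placeholders:

```python
from huggingface_hub import EvalResult, ModelCardData

# Sketch: core eval metrics expressed as structured card metadata (model-index).
# Task, dataset, and metric values below are illustrative placeholders.
card_data = ModelCardData(
    model_name="segmentation-training-pipeline",
    eval_results=[
        EvalResult(
            task_type="image-segmentation",
            dataset_type="pets",
            dataset_name="Oxford-IIIT Pets",
            metric_type="mean_iou",
            metric_value=0.87,
        )
    ],
)
print(card_data.to_yaml())  # YAML front matter for the README.md card
```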
There are some things to note about TensorBoard.
I am just sharing what I know, so we can discuss and decide!
I got an idea for an initial implementation.
I guess this is a good starting point, and we can research more about the TensorFlow official Model Card Toolkit and
@deep-diver there's a very lightweight dependency called tabulate which we can use to turn the TB logs into tables and host them in the model card. We can also keep them as a separate CSV file if you wish.
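A quick sketch of that idea, assuming the logs have already been collected into a list of per-epoch rows (the column names are placeholders):

```python
import csv

from tabulate import tabulate

# Sketch: per-epoch log rows rendered as a Markdown table for the card,
# and optionally also written out as a CSV file. Column names are placeholders.
rows = [
    {"epoch": 1, "loss": 0.42, "val_loss": 0.47},
    {"epoch": 2, "loss": 0.31, "val_loss": 0.38},
]

markdown_table = tabulate(rows, headers="keys", tablefmt="github")

with open("training_logs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```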
Thought it would be good to have model cards in HfPusher. We can first discuss here, implement, and later open a PR to TFA. I took a look at the training script and thought it would be good to include:
(to-do)
We can also push the training data to a dataset repository and link it to the model.
We can also push TensorBoard logs BTW, if you'd like to add that to the training script. You can see how we host them here: https://huggingface.co/keras-io/lowlight-enhance-mirnet/tensorboard
WDYT?
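For the TensorBoard part, a minimal sketch of how the logs could be pushed alongside the model (the repo id and log directory below are placeholders):

```python
from huggingface_hub import upload_folder

# Sketch: push the local TensorBoard run directory into a logs/ folder of the
# model repo so the Hub's TensorBoard tab can pick it up. Values are placeholders.
upload_folder(
    repo_id="chansung/segmentation-training-pipeline",
    folder_path="logs/train",
    path_in_repo="logs/train",
    commit_message="Add TensorBoard logs",
)
```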