Add metrics and test model tracking callbacks #3

Open · tmabraham opened this issue Sep 2, 2020 · 5 comments
Labels: enhancement (New feature or request)

Comments

@tmabraham (Owner)

I want to add support for metrics, and potentially include implementations of some common ones, such as FID, mi-FID, KID, and segmentation metrics (for paired translation).

Additionally, by monitoring the losses and metrics, I want to be able to use fastai's built-in callbacks for saving the best model, early stopping, and reducing the LR on plateau.

This shouldn't be too hard to include. A major part of this feature is finding good PyTorch/numpy implementations of some of these metrics and getting them to work.
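As a rough sketch of how the callback side could look (assuming a fastai `Learner` named `learn` that already reports a metric under the name 'fid'; the metric name, epoch count, and checkpoint name are placeholders, not settled API):

```python
import numpy as np
from fastai.callback.tracker import (
    SaveModelCallback, EarlyStoppingCallback, ReduceLROnPlateau)

# All three tracking callbacks can watch the same metric;
# comp=np.less because a lower FID is better.
learn.fit(10, cbs=[
    SaveModelCallback(monitor='fid', comp=np.less, fname='best_fid'),
    EarlyStoppingCallback(monitor='fid', comp=np.less, patience=3),
    ReduceLROnPlateau(monitor='fid', comp=np.less, patience=2),
])
```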

tmabraham added this to "To do" in Upcoming features on Sep 2, 2020
tmabraham added the enhancement label on Sep 2, 2020
@neomatrix369

Do you know of Weights and Biases? Their library is super cool for tracking metrics and visualisations; check this out:
https://github.com/neomatrix369/awesome-ai-ml-dl/blob/master/data/about-Weights-and-Biases.md
You can find many such examples at the above link.

You get more than metrics and visualisations from W&B.

Happy to help with this one, let me know.

@tmabraham (Owner, Author)

@neomatrix369 Thank you for the suggestion. I am aware of W&B, and in fact fastai has great support for W&B thanks to the work of Boris Dayma. I plan to look into using W&B for tracking my own experiments with image translation models. However, this enhancement issue isn't really focused on that, but rather on making it easy to use metrics with the models, as well as providing a few implementations of common metrics.

I already have some code for getting metrics to work well with these models, and I will add it soon. Since the outputs are normalized images, a bit of extra code is needed to transform them correctly and apply some sort of AverageMetric. Once I add this to the codebase, I will have to see which metrics to add to the library. If you're interested, I'll update this issue at that point, and I would be happy to take contributions for metrics.
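A minimal sketch of what that extra code might look like, assuming the generator ends in a tanh so outputs live in [-1, 1] (`denormalize` and `metric_on_images` are hypothetical names, not code from the repo):

```python
import torch
from fastai.learner import AvgMetric

def denormalize(t: torch.Tensor) -> torch.Tensor:
    "Map tanh-normalized output from [-1, 1] back to [0, 1]."
    return (t + 1) / 2

def metric_on_images(metric_fn):
    "Wrap a metric so it sees denormalized images rather than raw generator output."
    def _inner(pred, targ):
        return metric_fn(denormalize(pred), denormalize(targ))
    _inner.__name__ = metric_fn.__name__  # name shown in fastai's metrics table
    return AvgMetric(_inner)              # averages across batches, weighted by batch size
```

A wrapped metric could then be passed via `metrics=` when building the `Learner`.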

@neomatrix369

@tmabraham That's fine if you are already aware of the library; I forgot it was integrated with fastai and others, so it's all taken care of.

Happy to follow this issue out of curiosity and to learn, but it sounds like you have many angles covered. If anything opens up, do let me know.

@tmabraham (Owner, Author)

I have added FID (432f784) and tested it. Horse2Zebra FID reached ~91.7 with 10 epochs of training (here), which is close to the 89.7 reported here for a fully trained CycleGAN.
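For reference, FID reduces to the Fréchet distance between two Gaussians fitted to Inception activations of the real and generated images; a minimal sketch of that final step (the implementation in 432f784 may differ in details):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    "d^2 = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2*(sigma1 @ sigma2)^(1/2))"
    diff = mu1 - mu2
    # Matrix square root of the covariance product; numerical error can
    # introduce a tiny imaginary component, which is discarded.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean))
```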

@tmabraham (Owner, Author)

It seems that SaveModelCallback works fine; it's just that its default of monitoring valid_loss obviously wouldn't work when there isn't any validation loss, so the monitored value has to be one of the metrics instead.

Apart from FID, I plan to implement the following metrics before closing this issue:

  • KID
  • Inception Score
  • LPIPS
  • Segmentation metrics
  • Comparison to paired data (e.g., MSE or MAE between the output and the paired target of the input; see the sketch below)
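For the last item, the metric itself could be as small as the sketch below (reusing the hypothetical `metric_on_images` wrapper from the earlier comment; `paired_mae` is a placeholder name):

```python
import torch

def paired_mae(pred, targ):
    "Mean absolute error between the translated output and its paired target."
    return torch.abs(pred - targ).mean()

# Batch-averaged by fastai via the wrapper sketched earlier.
paired_mae_metric = metric_on_images(paired_mae)
```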
