
Variation in the benchmark #59

Open
mehdidc opened this issue Dec 26, 2022 · 1 comment
Labels
bug Something isn't working

Comments

@mehdidc
Collaborator

mehdidc commented Dec 26, 2022

There seems to be some variation in the numbers for a few datasets/models after re-running the benchmark; it might be due to AMP. It happens on a few datasets in zero-shot classification, with diabetic retinopathy being the worst affected. Retrieval is fine.

See #56, where it was first detected.

(Attached plots: delta, retrieval)
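
A minimal sketch of how one could check whether AMP alone explains the variation, by comparing image features computed with and without autocast on the same batch. This is not the benchmark's own code; `model`, `images`, and `model.encode_image` are placeholders for whatever the evaluation actually uses:

```python
# Hedged sketch: compare features with and without AMP autocast on one batch.
# `model` and `images` are placeholders, not CLIP_benchmark internals.
import torch

@torch.no_grad()
def amp_feature_delta(model, images, device="cuda"):
    model = model.to(device).eval()
    images = images.to(device)

    # Mixed-precision forward pass (what running under AMP would do).
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        feats_amp = model.encode_image(images)

    # Full-precision forward pass for reference.
    feats_fp32 = model.encode_image(images)

    # Largest absolute difference between the two passes.
    return (feats_amp.float() - feats_fp32).abs().max().item()
```

If the delta here is on the same order as the score differences above, AMP is a plausible culprit; if it is negligible, the variation more likely comes from non-determinism elsewhere in the pipeline.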

@rom1504 added the bug label on Feb 3, 2023
@jianweif

It could be that linear probing actually trains a classification head and then evaluates it. If the training runs on a GPU with PyTorch, it will be non-deterministic because of cuDNN, which is a well-known issue.
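
For reference, a minimal sketch of the standard PyTorch switches that remove this cuDNN/cuBLAS non-determinism during linear-probe training; this is not the benchmark's code, just the usual reproducibility setup (the helper name `seed_everything` is illustrative):

```python
# Hedged sketch of the usual PyTorch reproducibility setup; call it once,
# before any CUDA work, so cuBLAS picks up the workspace config.
import os
import random

import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    # Seed every RNG the training loop might touch.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

    # Force cuDNN to pick deterministic kernels and skip autotuning.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

    # Required by cuBLAS for deterministic matmuls on CUDA >= 10.2.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

    # Raise an error if any op lacks a deterministic implementation.
    torch.use_deterministic_algorithms(True)
```

Deterministic kernels can be noticeably slower, so this is mainly useful for confirming the source of the variation rather than for regular benchmark runs.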
