
Mismatch in reported testing accuracy and actual testing accuracy of the model on the sutd-traffic dataset #4

Open
bennycortese opened this issue Nov 24, 2023 · 1 comment

Comments

@bennycortese
In the paper associated with this model and the pre-trained checkpoint, the reported testing accuracy is 46%, but when I run the training loop from this repository with the default settings on the sutd-traffic dataset, the testing accuracy is 45.1%. I am not sure why this discrepancy occurs. Thank you!

@CHENGY12
Collaborator

Thanks for your question! The gap seems to be due to environmental randomness: I just re-trained on another, independent device and obtained 45.7% accuracy. We have released the checkpoint that was used in the paper here. In light of your suggestion, we will re-run the code 5 times and report the average performance and the corresponding standard deviation as clarification. Hope this helps.
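The summary described above could be computed with a minimal sketch like the following; this is not the repository's actual evaluation code, and the five accuracy values are hypothetical placeholders (only 45.1% and 45.7% appear in this thread), used purely to illustrate the mean-and-std reporting:

```python
import statistics

def summarize_runs(accuracies):
    """Return mean and sample standard deviation of per-run test accuracies."""
    return statistics.mean(accuracies), statistics.stdev(accuracies)

# Hypothetical test accuracies (%) from 5 independent training runs.
runs = [46.0, 45.1, 45.7, 45.4, 45.9]
mean, std = summarize_runs(runs)
print(f"test accuracy: {mean:.2f} ± {std:.2f} (n={len(runs)})")
```

Reporting mean ± std across several independently seeded runs makes it clear whether a single-run gap like 46% vs. 45.1% falls within normal run-to-run variance.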
