Low Accuracy for SVHN ---> MNIST #9
Comments
@deep0learning good observation. For SVHN ---> MNIST using LeNet without adaptation the accuracy is 60%, and with CORAL it should be 79%.
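For readers unfamiliar with the adaptation method being discussed: the CORAL loss aligns the second-order statistics (covariances) of source and target features. A minimal NumPy sketch of that loss follows; this is an illustrative reimplementation from the Deep CORAL paper's formula, not code from this repository:

```python
import numpy as np

def coral_loss(source, target):
    """Deep CORAL loss: squared Frobenius distance between the
    feature covariance matrices of a source and a target batch,
    scaled by 1 / (4 * d^2) as in the paper.

    source, target: (n, d) feature matrices (one row per sample).
    """
    d = source.shape[1]

    def cov(x):
        # Unbiased sample covariance of the batch features.
        n = x.shape[0]
        xm = x - x.mean(axis=0, keepdims=True)
        return xm.T @ xm / (n - 1)

    diff = cov(source) - cov(target)
    return float(np.sum(diff * diff) / (4 * d * d))
```

During training this term is added to the classification loss on the labelled source batch, weighted so that the two losses are of comparable magnitude; identical source and target distributions give a loss of zero.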
The AlexNet ("one weird trick") used in this repository is different from the AlexNet (original) used by CORAL. Also, the better accuracy in the paper is due to an additional mean loss function they used in the Caffe prototxt: layer {
How did you find this information? In D-CORAL they did not use any mean loss / EuclideanLoss.
Check the .prototxt file in https://github.com/VisionLearningGroup/CORAL/tree/master/code after unzipping.
Did not find any loss that you have mentioned. See the .prototxt file; the layers it defines are:

- labelled source data
- unlabelled target data
- silence layer to suppress output of labels
- target data for testing
- conv1, relu1, pool1, norm1
- conv2, relu2, pool2, norm2
- conv3, relu3
- conv4, relu4
- conv5, relu5, pool5
- fc6, relu6, drop6
- fc7, relu7, drop7
- fc8_office_t, with a silence layer to suppress its output
I got this accuracy for SVHN -----> MNIST:
```
Test Source: Epoch: 2460, avg_loss: 0.0003, Accuracy: 73255/73257 (100.00%)
Test Target: Epoch: 2460, avg_loss: 21.5493, Accuracy: 35242/60000 (58.74%)
```
That suggests data loading is not the cause of the low target accuracy, because for SVHN ----> MNIST I just used the default data processing, and source accuracy is still near 100%.
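One common source of SVHN → MNIST discrepancies is the image format mismatch (SVHN is 32×32 RGB, MNIST is 28×28 grayscale). A minimal NumPy sketch of one way to bring SVHN into MNIST's format is shown below; the exact conversion (resize vs. crop, normalization constants) varies between papers, so treat this as an illustrative assumption rather than what this repository does:

```python
import numpy as np

def svhn_to_mnist_format(batch):
    """Convert a batch of SVHN RGB images, shape (n, 32, 32, 3),
    to MNIST-style grayscale images, shape (n, 28, 28).

    Uses ITU-R BT.601 luminance weights for the grayscale
    conversion and a center crop from 32x32 down to 28x28.
    Pixel values are assumed to be in [0, 255] and are
    rescaled to [0, 1].
    """
    # Weighted sum over the channel axis -> (n, 32, 32)
    gray = batch @ np.array([0.299, 0.587, 0.114])
    # Center crop: drop a 2-pixel border on each side.
    cropped = gray[:, 2:30, 2:30]
    return cropped / 255.0
```

If the source and target pipelines apply different normalizations, the covariance gap the adaptation loss tries to close is partly an artifact of preprocessing, which is why checking this step first is worthwhile.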