
voc_eval APs are zero on VOC_test dataset #491

Open · shankar-agarwal opened this issue Nov 27, 2019 · 0 comments

@shankar-agarwal
I get zeros everywhere. Any idea?

Reading annotation for 4951/4952
Reading annotation for 4952/4952
Saving cached annotations to /home/sagarwal/tf-faster-rcnn/data/VOCdevkit2007/annotations_cache/test_annots.pkl
AP for aeroplane = 0.0000
AP for bicycle = 0.0000
AP for bird = 0.0000
AP for boat = 0.0000
AP for bottle = 0.0000
AP for bus = 0.0000
AP for car = 0.0000
AP for cat = 0.0000
AP for chair = 0.0000
AP for cow = 0.0000
AP for diningtable = 0.0000
AP for dog = 0.0000
AP for horse = 0.0000
AP for motorbike = 0.0000
AP for person = 0.0000
AP for pottedplant = 0.0000
AP for sheep = 0.0000
AP for sofa = 0.0000
AP for train = 0.0000
AP for tvmonitor = 0.0000
Mean AP = 0.0000

Results:
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000
0.000

Results computed with the unofficial Python eval code.
Results should be very close to the official MATLAB eval code.
Recompute with ./tools/reval.py --matlab ... for your paper.
-- Thanks, The Management
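
One guess I can check myself (just a hunch, not confirmed to be the cause): the log above shows the evaluation writing annotations_cache/test_annots.pkl. If a cache file from an earlier run (e.g. a different image set or Python version) is being reused, voc_eval would match detections against the wrong ground truth and every AP could come out zero. Below is a minimal sketch for inspecting and, if needed, clearing that cache; the path and expected count are taken from the log above, everything else is an assumption about how voc_eval stores the cache:

```python
import os
import pickle

# Path copied from the evaluation log above; adjust for your setup (assumption).
cache_file = ("/home/sagarwal/tf-faster-rcnn/data/VOCdevkit2007/"
              "annotations_cache/test_annots.pkl")

if os.path.exists(cache_file):
    with open(cache_file, "rb") as f:
        recs = pickle.load(f)  # assumed layout: {image_id: [object dicts]}
    print("cached images:", len(recs))  # the log reads 4952 annotations
    sample_key = next(iter(recs))
    print("sample entry:", sample_key, recs[sample_key][:1])

    # If the cache looks wrong (empty object lists, unexpected image ids),
    # delete it so the next evaluation run re-parses the XML annotations:
    # os.remove(cache_file)
else:
    print("no cache found; voc_eval will rebuild it from the XML annotations")
```

After clearing a suspect cache, the results can be recomputed with ./tools/reval.py as the log itself suggests.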
