Hey, first of all thank you very much for the repo :) I have a question about how you compute mAP.
From looking at other tutorials, code repos, and PyTorch classes (COCO eval), I understood that one precision-recall plot is for one object class at different IoU threshold values (i.e. one point on the precision-recall plot is the precision and recall over all the bbox predictions in the test set for one class). However, in this video each precision-recall point in the plot corresponds to the precision and recall of one specific prediction (as a cumulative sum).
Intuitively, the first approach seems right to me, since we want to estimate the precision and recall over all the predictions in the entire dataset. Moreover, different methods (e.g. the F1 score) are used to pick the best IoU threshold from the precision-recall plot. Could you clarify this for me, please? Thank you very much :)
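For reference, here is a minimal sketch of the cumulative-sum computation I mean (the second approach): detections for one class, at one fixed IoU threshold, are sorted by confidence, and each detection contributes one precision-recall point via running TP/FP counts. The function name and inputs are my own illustration, not code from the repo:

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """Sketch: AP for one class at a fixed IoU threshold.

    scores : confidence of each detection
    is_tp  : 1 if the detection matched a ground-truth box, else 0
            (matching at the chosen IoU threshold is assumed done already)
    num_gt : total number of ground-truth boxes for this class
    """
    order = np.argsort(-np.asarray(scores))       # sort detections by confidence, descending
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)                        # running true positives
    cum_fp = np.cumsum(1.0 - tp)                  # running false positives
    precision = cum_tp / (cum_tp + cum_fp)        # one P-R point per detection
    recall = cum_tp / num_gt

    # step-wise integration of the P-R curve
    ap = 0.0
    prev_recall = 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# toy example: 3 detections, 2 ground-truth boxes
print(average_precision([0.9, 0.8, 0.7], [1, 0, 1], num_gt=2))
```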