Add a new metric for computing both f-measure and mean AP for detection training #3531

Closed

Conversation

wonjuleee
Contributor

Summary

How to test

Checklist

  • I have added unit tests to cover my changes.
  • I have added integration tests to cover my changes.
  • I have run e2e tests and there are no issues.
  • I have added the description of my changes into CHANGELOG in my target branch (e.g., CHANGELOG in develop).
  • I have updated the documentation in my target branch accordingly (e.g., documentation in develop).
  • I have linked related issues.

License

  • I submit my code changes under the same Apache License that covers the project.
    Feel free to contact the maintainers if that's a concern.
  • I have updated the license header for each file (see an example below).
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

Comment on lines +236 to +239
# cls_scores = [item.float() for item in cls_scores]
# bbox_preds = [item.float() for item in bbox_preds]
# centernesses = [item.float() for item in centernesses]

Contributor

Can we remove this?

@@ -780,8 +786,141 @@ def classes(self) -> list[str]:
        return self.label_info.label_names


class MeanAveragePrecisionFMeasure(MeanAveragePrecision):
@vinnamkim vinnamkim (Contributor) May 22, 2024

Is there any reason to implement this from scratch? I think there is another way: exploit https://lightning.ai/docs/torchmetrics/stable/pages/overview.html#torchmetrics.MetricCollection. For example:

from torchmetrics import MetricCollection

class MeanAveragePrecisionFMeasure(MetricCollection):
    def __init__(self, **kwargs):
        super().__init__([MeanAveragePrecision(), FMeasure(**kwargs)])

    def compute(self, best_confidence_threshold: float | None = None) -> dict:
        ...
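
Building on this suggestion, a minimal self-contained sketch of the MetricCollection-based approach might look like the block below. It assumes FMeasure is importable from src/otx/core/metrics/fmeasure.py (the file touched by this PR), that both metrics accept the same update() arguments, and that each compute() returns a dict; the flat key merge is illustrative, not the final API.

from __future__ import annotations

from torchmetrics import MetricCollection
from torchmetrics.detection import MeanAveragePrecision

from otx.core.metrics.fmeasure import FMeasure  # OTX-internal F-measure metric


class MeanAveragePrecisionFMeasure(MetricCollection):
    """Report mAP and F-measure together by delegating to both metrics."""

    def __init__(self, **kwargs):
        # MetricCollection forwards update() to every registered metric,
        # so predictions/targets are fed once and both metrics accumulate.
        super().__init__([MeanAveragePrecision(), FMeasure(**kwargs)])

    def compute(self) -> dict:
        # Merge the per-metric result dicts into one flat dict
        # (assumes the two metrics emit disjoint keys).
        results: dict = {}
        for metric in self.values():
            results.update(metric.compute())
        return results

Whether best_confidence_threshold belongs in __init__ or in compute() depends on FMeasure's actual interface, so this sketch drops the argument rather than guess at it.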


codecov bot commented May 22, 2024

Codecov Report

Attention: Patch coverage is 35.21127%, with 46 lines in your changes missing coverage. Please review.

Project coverage is 82.93%. Comparing base (59014b8) to head (e46175d).

Files                              Patch %   Lines
src/otx/core/metrics/fmeasure.py   31.34%    46 Missing ⚠️
Additional details and impacted files
@@                Coverage Diff                 @@
##           releases/2.0.0    #3531      +/-   ##
==================================================
- Coverage           83.04%   82.93%   -0.12%     
==================================================
  Files                 254      254              
  Lines               25263    25330      +67     
==================================================
+ Hits                20980    21007      +27     
- Misses               4283     4323      +40     
Flag    Coverage Δ
py310   ?
py311   82.93% <35.21%> (+0.59%) ⬆️

Flags with carried forward coverage won't be shown.


@sungchul2 sungchul2 mentioned this pull request May 22, 2024
Labels
OTX 2.0 (For OTX v2.0), TEST (Any changes in tests)
3 participants