
Add documentation for custom metrics #5430

Merged
merged 6 commits into main from docs/custom-metrics on Jan 25, 2025

Conversation

Contributor

@manushreegangwar manushreegangwar commented Jan 24, 2025

What changes are proposed in this pull request?

This PR adds documentation for custom metric operators.

How is this patch tested? If it is not, please explain why.

Not applicable; this PR only updates documentation.

Release Notes

Is this a user-facing change that should be mentioned in the release notes?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release
    notes for FiftyOne users.

(Details in 1-2 sentences. You can just refer to another PR with a description
if this PR is part of a larger change.)

What areas of FiftyOne does this PR affect?

  • App: FiftyOne application changes
  • Build: Build and test infrastructure changes
  • Core: Core fiftyone Python library changes
  • Documentation: FiftyOne documentation changes
  • Other

Summary by CodeRabbit

  • Documentation
    • Added a new section on "Custom evaluation metrics" in the FiftyOne documentation.
    • Introduced a subsection on "Using custom metrics" to explain their application in evaluation runs.
    • Added a subsection on "Developing custom metrics" to guide users in creating custom metric operators.
    • Provided examples and code snippets for implementing custom metrics in evaluation runs.
    • Enhanced user understanding of extending evaluation capabilities through custom metrics in the SDK and App.
  • Bug Fixes
    • Resolved issues that prevented fetching very large media and downloading initial batches of cloud media in FiftyOne Teams 2.5.0.
    • Improved memory requirements and rendering optimizations for heatmap fields and label masks in FiftyOne 1.3.0.
    • Enhanced stability and reliability for various features, including dynamic groups and tagging menu.
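To make the summary concrete: a custom evaluation metric generally produces a per-sample value (stored on each sample) and a dataset-level aggregate (surfaced in the App). The snippet below is a hypothetical pure-Python sketch of that pattern, not FiftyOne code; the function name and inputs are illustrative.

```python
# Hypothetical stand-in, not the FiftyOne API: a custom metric
# typically yields a per-sample value plus a dataset-level aggregate.

def absolute_error_metric(predictions, ground_truths):
    """Return per-sample absolute errors and their mean."""
    per_sample = [abs(p - g) for p, g in zip(predictions, ground_truths)]
    aggregate = sum(per_sample) / len(per_sample)
    return per_sample, aggregate

per_sample, mean_err = absolute_error_metric([0.9, 0.4, 0.7], [1.0, 0.5, 0.5])
print(round(mean_err, 3))  # 0.133
```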

Contributor

coderabbitai bot commented Jan 24, 2025

Walkthrough

The pull request introduces a new section titled "Custom evaluation metrics" in the FiftyOne documentation. This section explains how users can add custom metrics to their evaluation runs, detailing their support across all evaluation methods. It includes examples for computing custom metrics via the SDK or directly from the App, as well as a new subsection on "Developing custom metrics," which describes how to implement custom metric operators by subclassing the EvaluationMetric interface.

Changes

File Change Summary
docs/source/user_guide/evaluation.rst Added new section "Custom evaluation metrics" and subsections "Using custom metrics" and "Developing custom metrics" with detailed explanations and code examples.
docs/source/release-notes.rst Updated release notes for versions 2.5.0 and 1.3.0, highlighting bug fixes, enhancements, and added support for defining custom evaluation metrics.

Possibly related PRs

  • Teams 1.7.1/OSS 0.24.1 release notes #4456: This PR updates the release notes to include support for defining custom evaluation metrics, which is directly related to the new section on custom evaluation metrics introduced in the main PR.
  • Release notes v1.0.0 #4839: This PR includes enhancements and fixes in the release notes, mentioning custom evaluation metrics, which connects to the main PR's focus on custom metrics.
  • Merge release/v1.0.1 to develop #4911: This PR merges updates that include references to custom evaluation metrics in the release notes, aligning with the main PR's content.

Suggested labels

documentation

Suggested reviewers

  • findtopher

Poem

🐰 Metrics, metrics, custom and bright
Evaluation's new flexible might
Code snippets dance, flexibility sings
In FiftyOne's evaluation wings
A rabbit's leap of metric delight! 🔍



@manushreegangwar manushreegangwar marked this pull request as ready for review January 24, 2025 07:50
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2ac97ad and f08e7bc.

⛔ Files ignored due to path filters (1)
  • docs/source/images/app/model-evaluation-custom-metric.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • docs/source/user_guide/evaluation.rst (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: build
🔇 Additional comments (3)
docs/source/user_guide/evaluation.rst (3)

1996-2004: LGTM! Clear introduction to custom metrics.

The introduction effectively explains what custom metrics are and points users to example implementations in the fiftyone-plugins repository.


2005-2063: LGTM! Well-structured SDK usage example.

The example clearly demonstrates:

  • How to use custom metrics via SDK and App
  • The syntax for passing metric operators with and without kwargs
  • Where to find the results in both SDK and App
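The "with and without kwargs" point can be sketched in plain Python. The helper below normalizes the two spec shapes the review describes (a list of bare operator URIs, or a dict mapping URIs to kwargs) into one mapping. The helper itself is hypothetical and not part of FiftyOne; the URI and the `value` kwarg are borrowed from the metric-examples plugin linked in this thread, purely for illustration.

```python
# Hypothetical helper, not part of FiftyOne: normalize custom-metric
# specs given either as a list of operator URIs (no kwargs) or as a
# dict mapping URIs to kwargs.

def normalize_custom_metrics(metrics):
    if isinstance(metrics, dict):
        return {uri: dict(kwargs or {}) for uri, kwargs in metrics.items()}
    return {uri: {} for uri in metrics}

# Without kwargs
print(normalize_custom_metrics(["@voxel51/metric-examples/example_metric"]))
# {'@voxel51/metric-examples/example_metric': {}}

# With kwargs
print(normalize_custom_metrics(
    {"@voxel51/metric-examples/example_metric": {"value": "spam"}}
))
# {'@voxel51/metric-examples/example_metric': {'value': 'spam'}}
```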

2064-2141: LGTM! Comprehensive development guide with excellent example.

The development guide:

  • Properly introduces the base class
  • Provides a well-documented example implementation
  • Explains each component (config, parameters, compute, fields)
  • Includes helpful comments explaining each method's purpose
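The components the review lists (a config with parameters, a compute method, and the fields the metric populates) can be mirrored in a plain-Python sketch. The class below only imitates the shape of the documented EvaluationMetric interface; the real base class lives in FiftyOne's operator framework, and every name here is illustrative.

```python
# Simplified stand-in for the EvaluationMetric operator interface;
# this is not FiftyOne code, just the structure the guide describes.

class ExampleMetricSketch:
    @property
    def config(self):
        # Declares the metric's name, label, and input parameters
        return {
            "name": "example_metric",
            "label": "Example metric",
            "parameters": {"value": "spam"},  # illustrative kwarg + default
        }

    def compute(self, samples, eval_key, value="spam"):
        # Store a per-sample value and return a dataset-level aggregate
        field = f"{eval_key}_{self.config['name']}"
        for sample in samples:
            sample[field] = value
        return value

    def get_fields(self, samples, eval_key):
        # Lists the sample fields this metric populates so they can be
        # cleaned up when the evaluation run is deleted
        return [f"{eval_key}_{self.config['name']}"]

metric = ExampleMetricSketch()
samples = [{}, {}]
aggregate = metric.compute(samples, "eval", value="spam")
print(aggregate)                           # spam
print(metric.get_fields(samples, "eval"))  # ['eval_example_metric']
```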

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f08e7bc and bf9fcf5.

📒 Files selected for processing (1)
  • docs/source/user_guide/evaluation.rst (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: build
🔇 Additional comments (2)
docs/source/user_guide/evaluation.rst (2)

2045-2047: Update the image alt text to match the content.

The alt text should accurately reflect that this image shows custom metrics functionality.

-    :alt: model-evaluation-custom-metric
+    :alt: model-evaluation-custom-metrics

2064-2141: LGTM! Well-documented custom metric development guide.

The custom metric development section provides a comprehensive guide with clear explanations and a well-documented example implementation.

Base automatically changed from release/v1.3.0 to main on January 24, 2025 20:00
@brimoor brimoor force-pushed the docs/custom-metrics branch from 266eadc to 68ba964 on January 25, 2025 01:05
Contributor

@brimoor brimoor left a comment


LGTM. I updated the screenshot ✅

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
docs/source/release-notes.rst (1)

88-90: LGTM! Consider adding an example.

The addition of custom evaluation metrics support is well documented with a reference link. To make it even more helpful for users, consider adding a brief inline example showing the basic usage.

- Added support for defining :ref:`custom evaluation metrics <custom-evaluation-metrics>` and applying
  them when evaluating models. For example::

    import fiftyone as fo
    
    # Define custom metric
    def my_metric(results):
        # Custom evaluation logic
        return score

    # Apply custom metric
    results = dataset.evaluate_detections(
        "predictions", 
        "ground_truth",
        eval_key="eval",
        method="custom",
        metric_fn=my_metric
    )
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 68ba964 and 37642ac.

⛔ Files ignored due to path filters (1)
  • docs/source/images/evaluation/custom-evaluation-metric.png is excluded by !**/*.png
📒 Files selected for processing (2)
  • docs/source/release-notes.rst (1 hunks)
  • docs/source/user_guide/evaluation.rst (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: build
🔇 Additional comments (3)
docs/source/user_guide/evaluation.rst (3)

2004-2015: LGTM! Clear and concise introduction.

The introduction effectively explains the scope and capabilities of custom metrics in FiftyOne.


2019-2021: Fix the RST link formatting.

The link to metric-examples is not properly formatted according to RST syntax.

-The example below shows how to compute a custom metric from the
-`metric-examples <https://github.com/voxel51/fiftyone-plugins/tree/main/plugins/metric-examples>`_
-plugin when evaluating object detections:
+The example below shows how to compute a custom metric from the
+`metric-examples <https://github.com/voxel51/fiftyone-plugins/tree/main/plugins/metric-examples>`__
+plugin when evaluating object detections:

2080-2082: Update the image alt text.

The alt text should accurately reflect the image content about custom evaluation metrics.

-    :alt: custom-evaluation-metric
+    :alt: model-evaluation-custom-metric

@brimoor brimoor merged commit 9cded34 into main Jan 25, 2025
9 checks passed
@brimoor brimoor deleted the docs/custom-metrics branch January 25, 2025 14:32