
Add support for remote zoo model parameters #5439

Open · wants to merge 2 commits into develop
Conversation

@brimoor (Contributor) commented on Jan 27, 2025

Change log

  • Adds resolve_input() and parse_parameters() methods to the remote zoo model interface. These methods allow remote zoo models to inject custom parameters into an operator's input form.

See voxel51/fiftyone-plugins#201 for example usage.
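As a rough illustration of what the new hooks enable (the `(ctx, params)` signature and the `classes` parameter below are assumptions for illustration, not the actual FiftyOne interface), a remote model source could normalize user-provided parameters before the model is loaded:

```python
# Hypothetical sketch of a remote zoo model's parse_parameters() hook.
# In FiftyOne this would live in the model source's __init__.py; the
# signature and the "classes" parameter are illustrative assumptions.
def parse_parameters(ctx, params):
    """Normalizes user-supplied parameters in-place before model load."""
    classes = params.get("classes")
    if isinstance(classes, str):
        # Allow zero-shot classes to be provided as a comma-separated string
        params["classes"] = [c.strip() for c in classes.split(",") if c.strip()]


params = {"classes": "cat, dog, bird"}
parse_parameters(None, params)
print(params["classes"])  # ['cat', 'dog', 'bird']
```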

TODO

  • Can we add support for built-in zoo models to declare custom parameters too?

@coderabbitai (bot, Contributor) commented on Jan 27, 2025

Walkthrough

This pull request enhances the FiftyOne model zoo documentation and utilities, focusing on zero-shot prediction capabilities and remote model handling. The changes span multiple files: script modifications for model documentation generation, documentation updates for remote datasets and models, and updates to utility classes and methods that handle model parameters and loading.

Changes

  • docs/scripts/make_model_zoo_docs.py: Added conditional blocks for handling "zero-shot" and "yolo" model tags; modified existing logic for session refresh.
  • docs/source/dataset_zoo/remote.rst: Added links to GitHub repositories for remote datasets; removed previous note about voxel51/coco-2017.
  • docs/source/model_zoo/remote.rst: Added new methods for parameter handling and repository links; updated notes on user input collection.
  • fiftyone/utils/ultralytics.py: Updated docstring for the FiftyOneYOLOModelConfig class to clarify the classes argument's role in zero-shot prediction.
  • fiftyone/zoo/models/__init__.py: Added new methods for remote model path retrieval, loading, and parameter management; restructured model loading logic.
  • docs/source/user_guide/evaluation.rst: Added new sections on model evaluation, expanded examples, and clarified evaluation processes and custom metrics.
  • fiftyone/operators/evaluation_metric.py: Replaced the get_parameters method with resolve_input, simplifying input handling in the EvaluationMetric class.

Suggested labels

enhancement, documentation

Suggested reviewers

  • manushreegangwar


@coderabbitai (bot, Contributor) left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
fiftyone/zoo/models/__init__.py (3)

305-305: Ensure proper fallback when loading remote models
This newly introduced logic calls model._load_model(**kwargs) when no config is provided for RemoteZooModel. If _load_model fails for any reason, there's currently no fallback. Consider wrapping the call in a try-except block or logging an informational message to aid in debugging.

Also applies to: 307-307
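A hedged sketch of the fallback this comment suggests, with `RemoteZooModel` and `_load_model()` as stand-ins for the real classes (the actual call site and error handling in `fiftyone/zoo/models/__init__.py` may differ):

```python
# Sketch of wrapping model._load_model(**kwargs) so a failure surfaces
# a useful log message instead of an opaque traceback. The classes and
# names here are illustrative stand-ins, not FiftyOne's real API.
import logging

logger = logging.getLogger(__name__)


class RemoteZooModel:
    def _load_model(self, **kwargs):
        raise RuntimeError("download failed")  # simulate a loading error


def load_remote_model(model, **kwargs):
    try:
        return model._load_model(**kwargs)
    except Exception as e:
        logger.warning("Failed to load remote model: %s", e)
        return None


result = load_remote_model(RemoteZooModel())
print(result)  # None
```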


505-519: Add docstrings for new helper methods
These new methods (_get_model_path, _load_model, _get_parameters, _parse_parameters) are crucial entry points for remote model handling, but they lack docstrings describing expected behavior and return values. Adding docstrings will help clarify usage and maintainability.


554-569: Consider logging when get_parameters()/parse_parameters() are missing
Here, _get_remote_model_parameters and _parse_remote_model_parameters gracefully do nothing when the module doesn't define the relevant methods. Logging a brief message in such cases could improve discoverability and debugging.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e12fce4 and 3c9af15.

📒 Files selected for processing (5)
  • docs/scripts/make_model_zoo_docs.py (1 hunks)
  • docs/source/dataset_zoo/remote.rst (1 hunks)
  • docs/source/model_zoo/remote.rst (3 hunks)
  • fiftyone/utils/ultralytics.py (1 hunks)
  • fiftyone/zoo/models/__init__.py (3 hunks)
✅ Files skipped from review due to trivial changes (2)
  • fiftyone/utils/ultralytics.py
  • docs/source/dataset_zoo/remote.rst
⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: test / test-python (ubuntu-latest-m, 3.10)
  • GitHub Check: test / test-app
  • GitHub Check: lint / eslint
  • GitHub Check: e2e / test-e2e
  • GitHub Check: build / build
  • GitHub Check: build
🔇 Additional comments (10)
docs/scripts/make_model_zoo_docs.py (2)

204-206: Nicely structured callback
Calling session.refresh() after applying the model ensures that the session accurately reflects the newly generated predictions. This looks good.


207-215: Clear demonstration of zero-shot usage
This example snippet for zero-shot predictions with YOLO is straightforward and helpful for users.

docs/source/model_zoo/remote.rst (8)

17-23: Useful references to external repos
Providing these repository examples is helpful for developers exploring remote model sources.


219-224: Well-defined function signatures
Declaring get_parameters() and parse_parameters() here clarifies the methods available for remote models.


431-432: Handy anchor for referencing parameter collection
This anchor makes documentation navigation more convenient for readers.


433-466: Comprehensive documentation for collecting model parameters
The new section provides a clear explanation of how to gather user-defined classes for zero-shot tasks.


469-471: Consistent cross-reference to operator inputs
Linking to operator-inputs ensures that readers know where to find more detailed input-collection references.


472-473: Concise anchor for parse-parameters
Continues the pattern of well-organized documentation anchors.


474-499: Useful example of parameter formatting
Demonstrates how remote models can adapt user-provided classes at execution time.


501-502: Consistent note referencing operator usage
This note helps users integrate custom parameter functionality into their existing operator workflows.

@coderabbitai (bot, Contributor) left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
fiftyone/operators/evaluation_metric.py (1)

48-58: LGTM: Method signature change improves API clarity.

The replacement of get_parameters with resolve_input simplifies the interface while maintaining functionality. The new method has a clearer signature and explicit return type documentation.

Consider adding examples in the docstring to demonstrate common usage patterns of the resolve_input method.
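A sketch of the `resolve_input(ctx)` pattern this comment describes; the `ExampleMetric` class, its `beta` parameter, and the dict-based return value are illustrative stand-ins, not the real `fiftyone.operators.evaluation_metric` interface:

```python
# Hypothetical custom metric illustrating the new resolve_input(ctx)
# hook that replaced get_parameters(); all names here are assumptions.
class ExampleMetric:
    def resolve_input(self, ctx):
        """Returns a description of this metric's input parameters.

        Args:
            ctx: the execution context (unused in this sketch)

        Returns:
            a dict describing the input property, or None for no inputs
        """
        return {"beta": {"type": "float", "default": 1.0}}


prop = ExampleMetric().resolve_input(ctx=None)
print(prop["beta"]["default"])  # 1.0
```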

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3c9af15 and 3b7de6a.

📒 Files selected for processing (4)
  • docs/source/model_zoo/remote.rst (3 hunks)
  • docs/source/user_guide/evaluation.rst (2 hunks)
  • fiftyone/operators/evaluation_metric.py (1 hunks)
  • fiftyone/zoo/models/__init__.py (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: test / test-python (ubuntu-latest-m, 3.10)
  • GitHub Check: test / test-app
  • GitHub Check: lint / eslint
  • GitHub Check: build / build
  • GitHub Check: e2e / test-e2e
  • GitHub Check: build
🔇 Additional comments (9)
docs/source/model_zoo/remote.rst (4)

17-22: LGTM: Clear example references added.

The addition of example repository references (voxel51/openai-clip and voxel51/ultralytics-models) provides valuable guidance for users.


219-224: LGTM: Well-structured documentation for new methods.

The introduction of resolve_input and parse_parameters methods is clearly documented with proper indentation and formatting.


431-472: LGTM: Comprehensive documentation for resolve_input.

The section provides clear explanation of the method's purpose, signature, and usage. The code example effectively demonstrates parameter collection.


478-508: LGTM: Clear documentation for parse_parameters.

The section effectively explains the purpose and implementation of parameter parsing functionality.

fiftyone/zoo/models/__init__.py (4)

505-511: LGTM: Well-structured model path handling.

The addition of _get_model_path and load_model methods improves code organization and reusability.


512-519: LGTM: Clean implementation of parameter handling methods.

The resolve_input and parse_parameters methods are well-integrated with the existing codebase and follow consistent error handling patterns.


554-572: LGTM: Robust helper function implementation.

The helper functions _resolve_remote_input and _parse_remote_model_parameters provide proper error handling and module validation.


305-305: Verify model loading path handling.

The model loading logic has been updated to use instance methods. Ensure that model paths are correctly resolved in all scenarios.

Also applies to: 307-307

✅ Verification successful

Model path handling implementation verified

The model loading sequence properly resolves paths using get_path_in_dir before loading the model, consistent with the established pattern throughout the codebase. The path resolution is correctly integrated with the model loading workflow.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify model path resolution
rg -p "model_path" . --type py | grep -C 2 "get_path_in_dir"

Length of output: 797


Script:

#!/bin/bash
# Check get_path_in_dir implementation and model loading sequence
ast-grep --pattern 'def get_path_in_dir($_) {
  $$$
}'

Length of output: 56


Script:

#!/bin/bash
# Check get_path_in_dir implementation with Python syntax
ast-grep --pattern 'def get_path_in_dir($$$):
    $$$'

# Also search with ripgrep for broader context
rg -p "class.*get_path_in_dir" -A 5 --type py

Length of output: 104

docs/source/user_guide/evaluation.rst (1)

Line range hint 2114-2160: LGTM: Clear example of custom metric implementation.

The example effectively demonstrates how to implement a custom evaluation metric with proper configuration and input handling.
