Draft: Enable new model id's #886
base: main
Conversation
@grzegorz-roboflow The new getWeights endpoint is up on staging and prod, and I posted an example in the PR description with an example staging project. You'll need my staging API key, though. It no longer requires passing a workspace ID, so I also removed the get_workspace call that I had added to this PR earlier.
The inferencer endpoints also assume the dataset_id/version_id format: https://github.com/roboflow/inference/blob/main/inference/core/interfaces/http/http_api.py#L2170C14-L2170C57 — this probably needs to change, or we need a new endpoint that can handle new model_ids.
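One way to make those endpoints tolerant of both id shapes is to detect the legacy `dataset_id/version_id` pattern up front. This is a minimal sketch, not code from the repo; the helper name and regex are assumptions for illustration:

```python
import re

# Hypothetical helper (not in the codebase): legacy ids look like
# "dataset-slug/3", while new-style ids are a single token.
LEGACY_MODEL_ID = re.compile(r"^[\w\-]+/\d+$")

def is_legacy_model_id(model_id: str) -> bool:
    """Return True for 'dataset_id/version_id'-shaped ids."""
    return bool(LEGACY_MODEL_ID.match(model_id))

print(is_legacy_model_id("my-dataset/3"))   # legacy format -> True
print(is_legacy_model_id("manual-owlvit"))  # new-style id  -> False
```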
FYI, I tested this code in this notebook, and it works for me :-)
I can confirm I have this working on my end too. For OWLv2 specifically, we already have an HTTP endpoint, and the only addition we need for box prompting (AI labeling in the annotation editor) to work properly is a separate endpoint that lets the user train customized versions of the model in addition to the auto-generated dataset revisions. @balthazur We can walk through the next steps when you're back, but this is starting to look good!
@@ -116,7 +117,13 @@ def __init__(
        self.metrics = {"num_inferences": 0, "avg_inference_time": 0.0}
        self.api_key = api_key if api_key else API_KEY
        model_id = resolve_roboflow_model_alias(model_id=model_id)
        self.dataset_id, self.version_id = model_id.split("/")
        # TODO:
        # Is this really all we had to do here? I think we don't even need it?
To answer this question, it seems like the version ID is primarily used in models like SAM where there are multiple versions of the foundation model that can be called. Everywhere else, it's mainly used for providing clear debug info.
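If the version id is only needed for debug info (and for multi-variant models like SAM), the split could fall back gracefully for new-style ids. This is a hypothetical sketch of that idea, not the actual change in the diff:

```python
def split_model_id(model_id: str):
    # Hypothetical sketch: keep the old dataset/version split where it
    # applies, and return (model_id, None) for new-style single-token
    # ids, since version_id is mostly used for debug output anyway.
    if "/" in model_id:
        dataset_id, version_id = model_id.split("/", 1)
        return dataset_id, version_id
    return model_id, None

print(split_model_id("my-dataset/3"))    # ('my-dataset', '3')
print(split_model_id("manual-owlvit"))   # ('manual-owlvit', None)
```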
        params=params,
    )
    print("api_url", api_url)
print("api_url", api_url)
Can delete this before merge.
Description
The goal of this PR is to make new model IDs from Roboflow work in Inference.
Note:
As new model URLs will be unique only within a workspace (same as workflow URLs), we need a way to build a unique cache key in Inference. We considered hashing the API key + model ID, since the workspaceId will not be available without an additional API call. For public models that doesn't work, however, because the API key used for a public model does not identify the owner. For the first iteration this is fine, because we'll only be serving models uploaded by the user themselves and Roboflow Instant (OWLv2) models with new model IDs.
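The cache-key idea discussed above can be sketched as follows. This is a minimal illustration under the stated assumption (hash of API key + model ID); the function name and digest length are not from the codebase:

```python
import hashlib

def model_cache_key(api_key: str, model_id: str) -> str:
    # Hypothetical sketch: derive a workspace-unique cache key by
    # hashing api key + model id, avoiding an extra API call to
    # resolve the workspace id.
    # Caveat from the discussion above: for public models the api key
    # does not identify the owner, so this only works for the user's
    # own uploads in the first iteration.
    digest = hashlib.sha256(f"{api_key}:{model_id}".encode()).hexdigest()
    return digest[:16]
```

Two users with different API keys referencing the same model id would get distinct cache entries, which is the desired behavior for user-owned models but not for shared public ones.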
Type of change
Please delete options that are not relevant.
How has this change been tested, please provide a testcase or example of how you tested the change?
This code queries a new model from the models collection
#889 enables using the model ID "manual-owlvit" to run a fine-prompted OWLv2 model that's saved in the models collection on staging.
Any specific deployment considerations
For example, documentation changes, usability, usage/costs, secrets, etc.
Docs