[Bug]: v1.56.10 breaks get_model_info for custom providers #7575

Closed
lsorber opened this issue Jan 5, 2025 · 3 comments · Fixed by #7597
Labels: bug, mlops, user request

lsorber commented Jan 5, 2025

What happened?

With v1.56.9, `litellm.get_model_info` works fine for custom providers.

With yesterday's v1.56.10 release, `litellm.get_model_info` is broken for custom providers. I suspect this is caused by the changes introduced in #7538.

Minimal reproducible example, adapted from the official docs on custom providers:

  1. ✅ Works when running in `uvx --python 3.10 --with "litellm==1.56.9" ipython`
  2. 💥 Broken when running in `uvx --python 3.10 --with "litellm==1.56.10" ipython`
```python
# Custom provider example copied from https://docs.litellm.ai/docs/providers/custom_llm_server:
import litellm
from litellm import CustomLLM, completion, get_llm_provider


class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Hello world"}],
            mock_response="Hi!",
        )  # type: ignore


my_custom_llm = MyCustomLLM()

litellm.custom_provider_map = [  # 👈 KEY STEP - REGISTER HANDLER
    {"provider": "my-custom-llm", "custom_handler": my_custom_llm}
]

resp = completion(
    model="my-custom-llm/my-fake-model",
    messages=[{"role": "user", "content": "Hello world!"}],
)

assert resp.choices[0].message.content == "Hi!"

# Register model info
model_info = {"my-custom-llm/my-fake-model": {"max_tokens": 2048}}
litellm.register_model(model_info)

# Get registered model info
from litellm import get_model_info

get_model_info(model="my-custom-llm/my-fake-model")  # 💥 "Exception: This model isn't mapped yet." in v1.56.10
```
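
As a temporary workaround, the registered entry itself is still reachable without going through `get_model_info`. A minimal sketch, assuming `litellm.register_model` writes entries into the `litellm.model_cost` dict (an assumption based on how registration appears to behave in these versions):

```python
# Possible workaround sketch, not an official API recommendation.
# Assumption: litellm.register_model() stores the entry in litellm.model_cost.
import litellm

registered = litellm.model_cost.get("my-custom-llm/my-fake-model")
print(registered)  # expected to contain {"max_tokens": 2048, ...} if the assumption holds
```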

Relevant log output

```
/Users/laurent/.cache/uv/archive-v0/C-TWgZG30xXPd4KJuphE0/lib/python3.10/site-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
* 'fields' has been removed
  warnings.warn(message, UserWarning)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File ~/.cache/uv/archive-v0/C-TWgZG30xXPd4KJuphE0/lib/python3.10/site-packages/litellm/utils.py:4268, in _get_model_info_helper(model, custom_llm_provider)
   4266 if custom_llm_provider:
   4267     provider_config = ProviderConfigManager.get_provider_model_info(
-> 4268         model=model, provider=LlmProviders(custom_llm_provider)
   4269     )
   4271 if _model_info is None and provider_config is not None:

File ~/.local/share/uv/python/cpython-3.10.16-macos-aarch64-none/lib/python3.10/enum.py:385, in EnumMeta.__call__(cls, value, names, module, qualname, type, start)
    384 if names is None:  # simple value lookup
--> 385     return cls.__new__(cls, value)
    386 # otherwise, functional API: we're creating a new Enum type

File ~/.local/share/uv/python/cpython-3.10.16-macos-aarch64-none/lib/python3.10/enum.py:710, in Enum.__new__(cls, value)
    709 if result is None and exc is None:
--> 710     raise ve_exc
    711 elif exc is None:

ValueError: 'my-custom-llm' is not a valid LlmProviders

During handling of the above exception, another exception occurred:

Exception                                 Traceback (most recent call last)
Cell In[1], line 50
     44 # from litellm.utils import custom_llm_setup
     45
     46 # custom_llm_setup()
     48 from litellm import get_model_info
---> 50 get_model_info(model="my-custom-llm/my-fake-model")

File ~/.cache/uv/archive-v0/C-TWgZG30xXPd4KJuphE0/lib/python3.10/site-packages/litellm/utils.py:4465, in get_model_info(model, custom_llm_provider)
   4395 """
   4396 Get a dict for the maximum tokens (context window), input_cost_per_token, output_cost_per_token  for a given model.
   4397
   (...)
   4459     }
   4460 """
   4461 supported_openai_params = litellm.get_supported_openai_params(
   4462     model=model, custom_llm_provider=custom_llm_provider
   4463 )
-> 4465 _model_info = _get_model_info_helper(
   4466     model=model,
   4467     custom_llm_provider=custom_llm_provider,
   4468 )
   4470 returned_model_info = ModelInfo(
   4471     **_model_info, supported_openai_params=supported_openai_params
   4472 )
   4474 return returned_model_info

File ~/.cache/uv/archive-v0/C-TWgZG30xXPd4KJuphE0/lib/python3.10/site-packages/litellm/utils.py:4387, in _get_model_info_helper(model, custom_llm_provider)
   4385 if "OllamaError" in str(e):
   4386     raise e
-> 4387 raise Exception(
   4388     "This model isn't mapped yet. model={}, custom_llm_provider={}. Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json.".format(
   4389         model, custom_llm_provider
   4390     )
   4391 )

Exception: This model isn't mapped yet. model=my-custom-llm/my-fake-model, custom_llm_provider=my-custom-llm. Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json.
```
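
The traceback shows the root cause: `_get_model_info_helper` builds `LlmProviders(custom_llm_provider)` directly, and enum construction raises `ValueError` for any provider string that is not an enum member, such as a user-registered custom provider. A self-contained sketch of that failure mode and a membership-style guard (the tiny stand-in enum below is hypothetical; the actual fix shipped in #7597 may differ):

```python
from enum import Enum


class ProviderEnum(str, Enum):
    # Hypothetical stand-in for litellm's LlmProviders enum, which only
    # lists built-in providers, not user-registered custom ones.
    OPENAI = "openai"
    ANTHROPIC = "anthropic"


provider = "my-custom-llm"

# Direct enum construction raises ValueError, matching the traceback above:
try:
    ProviderEnum(provider)
except ValueError as exc:
    print(exc)  # 'my-custom-llm' is not a valid ProviderEnum

# A membership check avoids the exception, so the caller can fall back to the
# user-registered model map instead of failing:
provider_config = (
    ProviderEnum(provider) if provider in ProviderEnum._value2member_map_ else None
)
print(provider_config)  # None for a custom provider
```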

Are you an ML Ops Team?

Yes

What LiteLLM version are you on?

v1.56.10

Twitter / LinkedIn details

@laurentsorber

@krrishdholakia (Contributor) commented:

Thanks for the issue. Will work on this, and add your example to our CI/CD to prevent future issues.

@krrishdholakia (Contributor) commented:

Able to repro.

krrishdholakia added a commit that referenced this issue Jan 7, 2025:
* test(test_amazing_vertex_completion.py): fix test

* test: initial working code gecko test

* fix(vertex_ai_non_gemini.py): support vertex ai code gecko fake streaming

Fixes #7360

* test(test_get_model_info.py): add test for getting custom provider model info

Covers #7575

* fix(utils.py): fix get_provider_model_info check

Handle custom llm provider scenario

Fixes https://github.com/BerriAI/litellm/issues/7575
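
One of the commits above adds a test in `test_get_model_info.py` covering this case; a minimal sketch of what such a regression test could look like (the function name and assertion below are assumptions, not the repository's actual test):

```python
# Hypothetical regression-test sketch for the custom-provider path of
# get_model_info(); the actual test added in test_get_model_info.py may differ.
import litellm
from litellm import CustomLLM, get_model_info


class MyCustomLLM(CustomLLM):
    pass


def test_get_model_info_custom_llm_provider():
    litellm.custom_provider_map = [
        {"provider": "my-custom-llm", "custom_handler": MyCustomLLM()}
    ]
    litellm.register_model({"my-custom-llm/my-fake-model": {"max_tokens": 2048}})

    info = get_model_info(model="my-custom-llm/my-fake-model")

    # On fixed versions this returns the registered info instead of raising
    # "This model isn't mapped yet."
    assert info["max_tokens"] == 2048
```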
@krrishdholakia (Contributor) commented:

Closing as this is now fixed in v1.57.0+.

rajatvig pushed a commit to rajatvig/litellm that referenced this issue Jan 16, 2025 (same commit message as above).