
[Bug]: Inconsistent response_format handling between Fireworks AI models #7533

Open

jtlicardo opened this issue Jan 3, 2025 · 0 comments
Labels: bug (Something isn't working)

What happened?

When using litellm's completion() function with Fireworks AI models, I discovered inconsistent behavior with the response_format parameter:

What works:

  • llama-v3p3-70b-instruct accepts response_format={"type": "text"}
  • deepseek-v3 only works with response_format=None

What breaks:

  • deepseek-v3 throws an error with response_format={"type": "text"} (minimal repro sketch below):
    Error: litellm.BadRequestError: Fireworks_aiException - Error code: 400 - {'error': {'object': 'error', 'type': 'invalid_request_error', 'message': "Extra inputs are not permitted, field: 'response_format.schema_field', value: None"}}
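
A minimal reproduction along these lines triggers the error. The model paths follow LiteLLM's fireworks_ai/ prefix convention and are assumptions (adjust to the exact IDs you use); the message content is a placeholder and FIREWORKS_AI_API_KEY is assumed to be set in the environment:

```python
import litellm

# Minimal repro sketch. Model paths are assumptions based on LiteLLM's
# fireworks_ai/ naming convention; the prompt is a placeholder.
messages = [{"role": "user", "content": "Say hello"}]

# Accepted: llama-v3p3-70b-instruct tolerates response_format={"type": "text"}
litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/llama-v3p3-70b-instruct",
    messages=messages,
    response_format={"type": "text"},
)

# Rejected with the 400 shown in the log output below:
# "Extra inputs are not permitted, field: 'response_format.schema_field'"
litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/deepseek-v3",
    messages=messages,
    response_format={"type": "text"},
)
```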

Expected behavior:
The response_format parameter should work consistently across all Fireworks AI models. Either both models should accept {"type": "text"}, or litellm should handle the model-specific differences transparently.
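
Until the model-specific difference is handled inside LiteLLM, one possible caller-side workaround is to drop response_format before calling completion(). The sketch below is illustrative only: the wrapper name and the model set are made up for this report, not part of LiteLLM's API.

```python
from litellm import completion

# Illustrative only: models observed above to reject response_format={"type": "text"}.
_REJECTS_TEXT_RESPONSE_FORMAT = {
    "fireworks_ai/accounts/fireworks/models/deepseek-v3",
}

def fireworks_completion(model, messages, response_format=None, **kwargs):
    """Hypothetical wrapper: drop response_format for models that 400 on it."""
    if response_format == {"type": "text"} and model in _REJECTS_TEXT_RESPONSE_FORMAT:
        response_format = None
    return completion(
        model=model,
        messages=messages,
        response_format=response_format,
        **kwargs,
    )
```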

Relevant log output

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.


Provider List: https://docs.litellm.ai/docs/providers

Traceback (most recent call last):
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\llms\openai\openai.py", line 657, in completion
    raise e
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\llms\openai\openai.py", line 583, in completion
    self.make_sync_openai_chat_completion_request(
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\llms\openai\openai.py", line 395, in make_sync_openai_chat_completion_request
    raise e
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\llms\openai\openai.py", line 377, in make_sync_openai_chat_completion_request
    raw_response = openai_client.chat.completions.with_raw_response.create(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\_legacy_response.py", line 356, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\resources\chat\completions.py", line 829, in create
    return self._post(
           ^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\_base_client.py", line 1280, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\_base_client.py", line 957, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\_base_client.py", line 1061, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'object': 'error', 'type': 'invalid_request_error', 'message': "Extra inputs are not permitted, field: 'response_format.schema_field', value: None"}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\main.py", line 1619, in completion
    raise e
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\main.py", line 1592, in completion
    response = openai_chat_completions.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\llms\openai\openai.py", line 667, in completion
    raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Error code: 400 - {'error': {'object': 'error', 'type': 'invalid_request_error', 'message': "Extra inputs are not permitted, field: 'response_format.schema_field', value: None"}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\scratch.py", line 30, in <module>
    response = completion(
               ^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\utils.py", line 994, in wrapper
    raise e
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\utils.py", line 875, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\main.py", line 2974, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 2190, in exception_type
    raise e
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 325, in exception_type
    raise BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: Fireworks_aiException - Error code: 400 - {'error': {'object': 'error', 'type': 'invalid_request_error', 'message': "Extra inputs are not permitted, field: 'response_format.schema_field', value: None"}}

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.56.5

Twitter / LinkedIn details

No response
