What happened?

When using litellm's completion() function with Fireworks AI models, I discovered inconsistent behavior with the response_format parameter:

What works: the same request succeeds when sent to another Fireworks AI model.

What breaks: deepseek-v3 throws an error with response_format={"type": "text"}:
Error: litellm.BadRequestError: Fireworks_aiException - Error code: 400 - {'error': {'object': 'error', 'type': 'invalid_request_error', 'message': "Extra inputs are not permitted, field: 'response_format.schema_field', value: None"}}
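For reference, a minimal reproduction sketch. The original scratch.py is not shown, so the model identifier and the FIREWORKS_AI_API_KEY environment variable below follow LiteLLM's usual Fireworks AI conventions and are assumptions:

```python
import os
from litellm import completion

# Assumption: LiteLLM Fireworks AI models are usually addressed as
# "fireworks_ai/accounts/fireworks/models/<name>"; the exact string used
# in the original scratch.py is not shown in the report.
os.environ["FIREWORKS_AI_API_KEY"] = "fw-..."  # placeholder key

response = completion(
    model="fireworks_ai/accounts/fireworks/models/deepseek-v3",
    messages=[{"role": "user", "content": "Say hello."}],
    response_format={"type": "text"},  # triggers the 400 on deepseek-v3
)
print(response.choices[0].message.content)
```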
Expected behavior:
The response_format parameter should work consistently across all Fireworks AI models. Either both models should accept {"type": "text"}, or litellm should handle the model-specific differences transparently.
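Until litellm handles this transparently, a caller-side workaround is possible: {"type": "text"} is the default response format anyway, so not sending it at all sidesteps the malformed payload. A minimal sketch (the wrapper below is illustrative, not part of litellm):

```python
from litellm import completion

def fireworks_completion(model, messages, response_format=None, **kwargs):
    """Call litellm.completion, dropping a redundant text response_format.

    {"type": "text"} is the default behavior, so omitting it avoids the
    response_format payload that Fireworks AI rejects for deepseek-v3.
    This is a workaround sketch only; it does not fix the underlying bug.
    """
    if response_format is not None and response_format.get("type") != "text":
        kwargs["response_format"] = response_format
    return completion(model=model, messages=messages, **kwargs)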
Relevant log output
```
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Provider List: https://docs.litellm.ai/docs/providers

Traceback (most recent call last):
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\llms\openai\openai.py", line 657, in completion
    raise e
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\llms\openai\openai.py", line 583, in completion
    self.make_sync_openai_chat_completion_request(
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\llms\openai\openai.py", line 395, in make_sync_openai_chat_completion_request
    raise e
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\llms\openai\openai.py", line 377, in make_sync_openai_chat_completion_request
    raw_response = openai_client.chat.completions.with_raw_response.create(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\_legacy_response.py", line 356, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\resources\chat\completions.py", line 829, in create
    return self._post(
           ^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\_base_client.py", line 1280, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\_base_client.py", line 957, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\openai\_base_client.py", line 1061, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'object': 'error', 'type': 'invalid_request_error', 'message': "Extra inputs are not permitted, field: 'response_format.schema_field', value: None"}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\main.py", line 1619, in completion
    raise e
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\main.py", line 1592, in completion
    response = openai_chat_completions.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\llms\openai\openai.py", line 667, in completion
    raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Error code: 400 - {'error': {'object': 'error', 'type': 'invalid_request_error', 'message': "Extra inputs are not permitted, field: 'response_format.schema_field', value: None"}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\scratch.py", line 30, in <module>
    response = completion(
               ^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\utils.py", line 994, in wrapper
    raise e
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\utils.py", line 875, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\main.py", line 2974, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 2190, in exception_type
    raise e
  File "C:\Users\josip\Documents\GitHub\bpmn-assistant\.venv\Lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 325, in exception_type
    raise BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: Fireworks_aiException - Error code: 400 - {'error': {'object': 'error', 'type': 'invalid_request_error', 'message': "Extra inputs are not permitted, field: 'response_format.schema_field', value: None"}}
```
Are you a ML Ops Team?
No
What LiteLLM version are you on ?
v1.56.5
Twitter / LinkedIn details
No response