Agent / Tool usage using Anthropic and Gemini models on custom endpoints #6119
Replies: 3 comments 1 reply
-
I think the issue might be that custom endpoints use the OpenAI client, and some models expect a different format or different parameters for tool calling. When I manually craft requests with cURL, I'm able to get tool calling to work.
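For context, this is roughly the OpenAI-compatible tool-calling shape that a custom endpoint would send. It is a minimal sketch, not taken from the thread: the model id, tool name, and schema are all placeholders.

```python
import json

# Sketch of an OpenAI-style chat completion request with a tool definition.
# Model id and tool are placeholders; a provider that expects a different
# tool-calling format (e.g. Anthropic-native) would reject or ignore this.
payload = {
    "model": "claude-3-5-sonnet",  # placeholder model id
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload)  # this JSON string is what goes over the wire
```

Providers that only speak their native tool format need the gateway (or the client) to translate this structure, which is consistent with tool calls working when hand-crafted but failing through a generic OpenAI-compatible path.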
-
Without seeing the actual usage, it's hard to tell. Bedrock/VertexAI work well without needing a proxy:
-
Thanks @danny-avila, let me explain our setup in more detail.

In our company's enterprise setup, we are not allowed to hit Bedrock, Vertex, or OpenAI directly. We must go through our company's PortKey proxy, which adds features such as per-application API keys, cost tracking, rate limiting, and observability, all of which matter in an enterprise setting. LibreChat is just one of many applications at our company that make calls through PortKey.

In my LibreChat config, I have it set up as a custom endpoint like this, with a premade modelSpec for each model. That's what our prod config looks like. Right now we have agents disabled, but locally I have enabled agents and am experimenting with them. It looks like when using a custom endpoint, tool calling won't work for Anthropic models served via Bedrock through that custom endpoint. Is there a way to configure a custom endpoint but still use the Bedrock client?
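A custom-endpoint block of this kind in `librechat.yaml` typically looks something like the sketch below. This is a generic illustration, not the author's actual config: the endpoint name, gateway URL, environment variable, and model ids are all placeholders.

```yaml
# Hypothetical custom endpoint pointing LibreChat at an OpenAI-compatible
# gateway. All values below are placeholders for illustration only.
endpoints:
  custom:
    - name: "PortKey"
      apiKey: "${PORTKEY_API_KEY}"
      baseURL: "https://portkey-gateway.example.com/v1"
      models:
        default: ["claude-3-5-sonnet"]
        fetch: false
      titleConvo: true
      titleModel: "current_model"
```

Because entries under `endpoints.custom` are handled by the OpenAI-compatible client, a setup like this cannot opt into the Bedrock-specific client per endpoint, which matches the behavior described above.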
-
We use AWS Bedrock and Vertex via a custom endpoint through PortKey. We're not able to get tool usage or agent features like MCP working via these custom endpoints.
For example, our Claude models hit PortKey which then hits AWS Bedrock.
Any idea on what might be going wrong?
When trying to make a function call, we get this error: