Configurations for DeepSeek with Azure AI Foundry #5924
Replies: 5 comments 8 replies
-
I'm also looking for a working example of using the Azure DeepSeek R1 model on LibreChat. Let me know if you figure it out.
-
I think the issue is in this parameter:
I tried different variations, but got the same error each time. If anyone knows the correct way, please post it here.
-
This works as a custom endpoint. I know it can be done with the Azure serverless config too, but I don't have that handy:

```yaml
endpoints:
  # other configs...
  custom:
    # custom endpoints...
    - name: "Azure (DeepSeek)"
      apiKey: "${AZURE_DEEPSEEK_API_KEY}"
      baseURL: "https://DeepSeek-R1-YOUR-ENDPOINT-FROM-AZURE.models.ai.azure.com/v1/"
      models:
        default: ["DeepSeek-R1"]
        fetch: false
      titleConvo: true
      titleModel: "current_model"
      dropParams: ["stop", "user", "frequency_penalty", "presence_penalty"]
      modelDisplayLabel: "DeepSeek-R1"
```
-
Thank you very much, Danny! I just tested the serverless configuration, and it worked too!
-
Here is a working serverless endpoint from Azure AI Foundry:

```yaml
- group: "DeepSeek-R1"
  apiKey: "${AZURE_DEEPSEEK_API_KEY}"
  baseURL: "https://xxxxxxxxxxxxxxx.services.ai.azure.com/models/"
  version: "2024-05-01-preview"
  serverless: true
  models:
    DeepSeek-R1: true
```
-
I get a 401 status code error (no body) for a DeepSeek model that is set up on Azure with Azure AI Foundry.
Here is my librechat.yaml:
I made a change that did not affect the behavior or the error, but looks cleaner:
I also tried the serverless option, but it did not work at all. I'm not sure how serverless configurations are supposed to be combined with other Azure OpenAI endpoints.
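For what it's worth, my understanding (a sketch based on the serverless snippet posted earlier in this thread, not a config I have verified against a live deployment) is that a serverless group can be listed alongside a regular deployment group under the same `azureOpenAI` endpoint. The group names, instance name, and API versions below are illustrative placeholders:

```yaml
# Sketch only: a serverless DeepSeek group next to a standard Azure OpenAI
# deployment group. Replace placeholder names/URLs with your own values.
endpoints:
  azureOpenAI:
    groups:
      # a regular Azure OpenAI deployment group (hypothetical example)
      - group: "gpt-4o-group"
        apiKey: "${AZURE_OPENAI_API_KEY}"
        instanceName: "your-instance"
        version: "2024-02-15-preview"
        models:
          gpt-4o:
            deploymentName: "gpt-4o"
      # the serverless DeepSeek group from the reply earlier in this thread
      - group: "DeepSeek-R1"
        apiKey: "${AZURE_DEEPSEEK_API_KEY}"
        baseURL: "https://xxxxxxxxxxxxxxx.services.ai.azure.com/models/"
        version: "2024-05-01-preview"
        serverless: true
        models:
          DeepSeek-R1: true
```

If this layout still returns a 401, it may be worth double-checking that the key in `AZURE_DEEPSEEK_API_KEY` belongs to the Foundry serverless deployment rather than to an Azure OpenAI resource, since the two use different keys.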