
[Bug]: Azure DeepSeek Model: 500 Error due to Extra /openai/ in URL despite Correct API Base Endpoint #8200

Closed
rmssantos opened this issue Feb 3, 2025 · 2 comments
Labels
bug Something isn't working

rmssantos commented Feb 3, 2025

What happened?

Environment:

LiteLLM version: v1.60.0.dev2 (latest as of February 2025)
Running in a Docker container
Provider: Azure AI Foundry (DeepSeek model)

Description:
When the Azure DeepSeek model is configured via model_list, LiteLLM appends an extra /openai/ segment to the request URL even though api_base is set to the correct Foundry endpoint. This causes the Azure backend to return a 500 InternalServerError with the message:

{'error': {'code': 'InternalServerError', 'message': 'Backend returned unexpected response. Please contact Microsoft for help.'}}

model_list:
  - model_name: deepseek-r1
    litellm_params:
      model: "azure/deepseek-r1"
      deployment: "DeepSeek-R1"
      api_base: "os.environ/AZURE_API_BASE_DEEPSEEKR1"
      api_key: "os.environ/AZURE_API_KEY_DEEPSEEKR1"
      api_version: "2024-05-01-preview"

With this configuration, every request to the deployment fails with the 500 error above.

Steps to Reproduce:

  1. Configure LiteLLM with the above settings and start the proxy in Docker (with detailed debugging enabled).
  2. Send a streaming chat completion request (a sketch of such a request follows this list).
  3. Observe in the logs that the final URL used is
     https://.services.ai.azure.com/models/chat/completions/openai/?api-version=2024-05-01-preview
     instead of the expected URL
     https://.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview
  4. The extra “/openai/” in the URL leads the Azure backend to return a 500 error with:
     {'error': {'code': 'InternalServerError', 'message': 'Backend returned unexpected response. Please contact Microsoft for help.'}}
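
For reference, a minimal sketch of the streaming request from step 2, sent through the LiteLLM proxy with the OpenAI Python client. The proxy address http://localhost:4000 and the key sk-1234 are placeholder assumptions, not values from the original report.

# Hypothetical reproduction script: stream a chat completion through the
# LiteLLM proxy so it routes the call to the azure/deepseek-r1 deployment.
from openai import OpenAI

# Assumed local proxy address and master key -- adjust to your own setup.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

stream = client.chat.completions.create(
    model="deepseek-r1",  # model_name from the model_list above
    messages=[{"role": "user", "content": "ola amigo"}],
    stream=True,
)

# Print streamed chunks as they arrive; the 500 surfaced during streaming.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)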

Additional Notes:
I attempted to configure a pass-through endpoint as well, but the router does not support pass-through endpoints for models using the “azure” provider.
Other Azure models (e.g., gpt-4o) work as expected, which suggests this behavior is specific to the DeepSeek model integration.

Is there a recommended configuration or a patch that ensures that when use_in_pass_through: true is set (with a deployment override) for the Azure DeepSeek model, LiteLLM does not append “/openai/” to the API base URL?
Could the Azure provider code be modified to conditionally skip appending “/openai/” when using a Foundry endpoint with a specified deployment?

Any guidance or debugging tips on this issue would be greatly appreciated.
Thank you for your time and assistance!

Relevant log output

[DEBUG] Initializing Azure OpenAI Client for azure/deepseek-r1, Api Base: https://<YOUR-AZURE-ENDPOINT>/models/chat/completions, Api Key: <CENSORED>
...
[DEBUG] Final returned optional params: {'stream': True, 'extra_body': {'deployment': 'DeepSeek-R1'}}
[DEBUG] RAW RESPONSE: <coroutine object AzureChatCompletion.async_streaming at 0x...>
...
POST Request Sent from LiteLLM:
curl -X POST \
https://<YOUR-AZURE-ENDPOINT>/models/chat/completions/openai/ \
-H 'api_key: <CENSORED>' \
-d '{"model": "deepseek-r1", "messages": [{"role": "user", "content": "ola amigo"}], "stream": True, "extra_body": {"deployment": "DeepSeek-R1"}}'
...
[ERROR] litellm.APIError: AzureException APIError - Error code: 500 - {'error': {'code': 'InternalServerError', 'message': 'Backend returned unexpected response. Please contact Microsoft for help.'}}

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.60.0.dev2 (latest as of February 2025)

Twitter / LinkedIn details

No response

rmssantos (Author) commented

Fixed with the recent release: “Azure AI Foundry - Deepseek R1” by @elabbarw in #8188.


jruokola commented Feb 6, 2025

I ran into the same bug; the correct config is:

model_list:
  - model_name: deepseek-r1
    litellm_params:
      model: "azure_ai/DeepSeek-R1"
      deployment: "DeepSeek-R1"
      api_base: "os.environ/AZURE_API_BASE_DEEPSEEKR1"
      api_key: "os.environ/AZURE_API_KEY_DEEPSEEKR1"
      api_version: "2024-05-01-preview"
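
For completeness, the same azure_ai/ route can be exercised directly with the LiteLLM Python SDK, which makes the key change from the original config visible: the provider prefix is azure_ai/ rather than azure/. This is a minimal sketch under the assumption that the same environment variables referenced in the config hold the Foundry endpoint and key.

import os

import litellm

# "azure_ai/..." selects LiteLLM's Azure AI (Foundry) route rather than the
# Azure OpenAI route that was appending "/openai/" to the URL above.
response = litellm.completion(
    model="azure_ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "ola amigo"}],
    api_base=os.environ["AZURE_API_BASE_DEEPSEEKR1"],
    api_key=os.environ["AZURE_API_KEY_DEEPSEEKR1"],
)

print(response.choices[0].message.content)

If the request goes through the proxy instead, the model_list entry above is the only change needed.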
