
Update factory.py to fix issue when calling from write-the -> langchain -> litellm served ollama #1054

Merged
merged 1 commit into BerriAI:main on Dec 11, 2023

Conversation

James4Ever0
Contributor

Fixing issue when calling from write-the -> langchain -> litellm served ollama

Issue:


LiteLLM completion() model= openhermes2.5-mistral; provider = ollama

LiteLLM: Params passed to completion() {'functions': [], 'function_call': '', 'temperature': 0.0, 'top_p': 1, 'stream': None, 'max_tokens': 3905, 'presence_penalty': 0, 'frequency_penalty': 0, 'logit_bias': {}, 'user': None, 'response_format': None, 'seed': None, 'tools': None, 'tool_choice': None, 'max_retries': None, 'custom_llm_provider': 'ollama', 'model': 'openhermes2.5-mistral', 'n': 1, 'stop': None}

LiteLLM: Non-Default params passed to completion() {'temperature': 0.0, 'top_p': 1, 'max_tokens': 3905, 'presence_penalty': 0, 'frequency_penalty': 0, 'logit_bias': {}, 'n': 1}
self.optional_params: {'num_predict': 3905, 'temperature': 0.0, 'top_p': 1, 'repeat_penalty': 0}
[MESSAGES] [{'role': 'system', 'content': ['\nProvide Google style docstrings for the given code. \nInclude description, parameter types, exceptions, side effects, notes, and examples. \nReturn only the docstrings, with function/class names as keys. \nUse the Class.method format for methods.\n\nExample:\ndef add(a, b):\n  return a + b\nFormatted docstrings for add:\nadd:\n  Sums 2 numbers.\n\n  Args:\n    a (int): The first number to add.\n    b (int): The second number to add.\n\n  Returns:\n    int: The sum of a and b.\n\n  Examples:\n    >>> add(1, 2)\n    3\n\nCode:\n# generate doc for me!\n\ndef hello_world():\n    print("hello world")\nFormatted docstrings for [\'hello_world\']:\n']}]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/litellm/main.py", line 1269, in completion
    prompt = prompt_factory(model=model, messages=messages, custom_llm_provider=custom_llm_provider)
  File "/usr/local/lib/python3.9/dist-packages/litellm/llms/prompt_templates/factory.py", line 328, in prompt_factory
    return ollama_pt(model=model, messages=messages)
  File "/usr/local/lib/python3.9/dist-packages/litellm/llms/prompt_templates/factory.py", line 78, in ollama_pt
    prompt = "".join(m["content"] for m in messages)
TypeError: sequence item 0: expected str instance, list found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/litellm/utils.py", line 4678, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: sequence item 0: expected str instance, list found
Logging Details: logger_fn - None | callable(logger_fn) - False
Logging Details LiteLLM-Failure Call
self.failure_callback: []
An error occurred: sequence item 0: expected str instance, list found

 Debug this by setting `--debug`, e.g. `litellm --model gpt-3.5-turbo --debug`
INFO:     127.0.0.1:36778 - "POST /completions HTTP/1.1" 500 Internal Server Error
(The request is retried twice more and the identical failure log repeats each time.)
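For context, the root cause is visible in the last frame of the traceback: str.join requires every item to be a str, while langchain delivers the system message content as a list of strings. A hypothetical minimal repro, outside litellm (names here are illustrative, not from the PR):

```python
# Hypothetical minimal repro of the failure above. langchain passes
# "content" as a list of strings, but the join in ollama_pt assumes
# every content value is already a str.
messages = [{"role": "system", "content": ["chunk one", "chunk two"]}]

try:
    prompt = "".join(m["content"] for m in messages)  # line 78 of factory.py
except TypeError as e:
    print(e)  # sequence item 0: expected str instance, list found
```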

Fixing issue when calling from write-the -> langchain -> litellm served ollama


@James4Ever0 James4Ever0 changed the title Update factory.py Update factory.py to fix issue when calling from write-the -> langchain -> litellm served ollama Dec 7, 2023
@krrishdholakia
Contributor

the fix looks like it does "".join(m["content"]) both times?

@James4Ever0 do you have a script i can repro this with?

@James4Ever0
Contributor Author

James4Ever0 commented Dec 8, 2023

> the fix looks like it does "".join(m["content"]) both times?
>
> @James4Ever0 do you have a script i can repro this with?

This change handles the case where m["content"] is a list[str] rather than a str: the inner join flattens the list into a single string before the outer join over all messages, which is why the join appears both times.
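A minimal sketch of that change, assuming the same join logic as in the traceback (the merged diff to factory.py may differ in detail):

```python
def ollama_pt(model, messages):
    # m["content"] is normally a str, but callers such as langchain can
    # pass a list of str chunks; flatten those with an inner join so the
    # outer join only ever sees strings.
    prompt = "".join(
        m["content"] if isinstance(m["content"], str) else "".join(m["content"])
        for m in messages
    )
    return prompt
```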

You also need a minor fix to the write-the library (pip-installable): replace all imports from openai.error with a stub such as <arbitrary_error> = Exception.
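For illustration, the stub amounts to replacing each such import with a bare alias (the exception name below is an assumption; substitute whatever write-the actually imports from openai.error):

```python
# Hypothetical shim for write-the. Its code imports exception classes
# from openai.error, a module removed in openai>=1.0; aliasing the
# imported names to Exception keeps the existing except clauses working.
RateLimitError = Exception  # illustrative name, not necessarily the one used
```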

First run:

litellm --model ollama/openhermes2.5-mistral --drop_params --debug

Then run:

export OPENAI_API_BASE='http://0.0.0.0:8000'
export OPENAI_API_KEY='any'
write-the docs <arbitrary_python_file_path_with_at_least_one_function>

@krrishdholakia krrishdholakia merged commit 4ffe6a4 into BerriAI:main Dec 11, 2023
@krrishdholakia
Contributor

lgtm!
