
[Bug]: recognizing openai finetune model as huggingface model when calling #617

Closed
Undertone0809 opened this issue Oct 16, 2023 · 3 comments
Labels
bug Something isn't working

Comments

@Undertone0809
Contributor

Undertone0809 commented Oct 16, 2023

What happened?

A bug happened!

I would like to use an OpenAI fine-tuned model, but the call fails: LiteLLM recognizes the fine-tuned model as a Hugging Face model.

from dotenv import load_dotenv
from litellm import completion

load_dotenv()  # loads the OpenAI API key from .env


response = completion(
  model="ft:gpt-3.5-turbo-0613:xxx::xxx",  # OpenAI fine-tuned model ID
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(response.choices[0].message)

docs: https://docs.litellm.ai/docs/tutorials/finetuned_chat_gpt

Relevant log output

Provider List: https://docs.litellm.ai/docs/providers


Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

Traceback (most recent call last):
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\site-packages\litellm\main.py", line 260, in completion
    model, custom_llm_provider, dynamic_api_key = get_llm_provider(model=model, custom_llm_provider=custom_llm_provider, api_base=api_base)
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\site-packages\litellm\utils.py", line 1485, in get_llm_provider
    raise e
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\site-packages\litellm\utils.py", line 1482, in get_llm_provider
    raise ValueError(f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/{model}',..)` Learn more: https://docs.litellm.ai/docs/providers")
ValueError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/ft:gpt-3.5-turbo-0613:xxx::xxx',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\project_hub\babel\babel_gpt35_finetune\babel_finetune\alpha\demo3.py", line 13, in <module>
    response = completion(
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\site-packages\litellm\utils.py", line 791, in wrapper
    raise e
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\site-packages\litellm\utils.py", line 750, in wrapper
    result = original_function(*args, **kwargs)
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\site-packages\litellm\timeout.py", line 53, in wrapper
    result = future.result(timeout=local_timeout_duration)
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\concurrent\futures\_base.py", line 458, in result
    return self.__get_result()
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\site-packages\litellm\timeout.py", line 42, in async_func
    return func(*args, **kwargs)
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\site-packages\litellm\main.py", line 1188, in completion
    raise exception_type(
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\site-packages\litellm\utils.py", line 3000, in exception_type
    raise e
  File "E:\Programming\anaconda\envs\babel_gpt35_finetune\lib\site-packages\litellm\utils.py", line 2982, in exception_type
    raise APIError(status_code=500, message=str(original_exception), llm_provider=custom_llm_provider, model=model)
litellm.exceptions.APIError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/ft:gpt-3.5-turbo-0613:xxx::xxx',..)` Learn more: https://docs.litellm.ai/docs/providers
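
Note: both exceptions are raised from get_llm_provider() in litellm/utils.py, and the huggingface/ prefix in the message is just the generic example baked into that error string; the error is asking for an explicit provider on the model name. A minimal sketch of that call form, assuming the openai/ prefix is also accepted for fine-tuned models (the ft: ID is a placeholder):

from litellm import completion

# Sketch of the provider-prefix form the error message points to:
# prefixing the model with its provider means get_llm_provider()
# does not have to infer the provider from the model name alone.
response = completion(
  model="openai/ft:gpt-3.5-turbo-0613:xxx::xxx",
  messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message)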


@Undertone0809 Undertone0809 added the bug Something isn't working label Oct 16, 2023
@Undertone0809 Undertone0809 changed the title [Bug]: recognizes openai finetune model as huggingface model when calling [Bug]: recognizing openai finetune model as huggingface model when calling Oct 16, 2023
@Undertone0809
Contributor Author

I have solved this problem by adding the parameter custom_llm_provider="openai". But I don't know the exact reason why I need to add this provider. It seems an OpenAI fine-tuned model cannot be resolved to an llm_provider when get_llm_provider() is called.
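
For reference, a minimal sketch of that workaround (the model ID is a placeholder, same as in the original snippet):

from dotenv import load_dotenv
from litellm import completion

load_dotenv()  # expects OPENAI_API_KEY in .env

# Passing custom_llm_provider explicitly skips the failing provider lookup.
response = completion(
  model="ft:gpt-3.5-turbo-0613:xxx::xxx",
  custom_llm_provider="openai",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
  ],
)

print(response.choices[0].message)

With this, the ft: model should be routed to the OpenAI chat completions endpoint instead of falling through to the provider-detection error.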

@krrishdholakia
Contributor

@Undertone0809 I see the PR you made has been merged; I'll update this ticket once it's in prod (we need to fix some linting issues on our end).

@ishaan-jaff
Contributor

This is done.
