Allow setting can_stream in extra-openai-models.yaml to allow for o1 over proxy #599
cmungall added a commit to cmungall/llm that referenced this issue on Oct 31, 2024:
Fixes simonw#599. A longer-term fix would be to use something like Pydantic so we don't repeat ourselves, but that would be a bit of a refactor.
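As a rough illustration of that Pydantic idea (this is not the project's actual code; the class name, types, and defaults are assumptions), a single schema describing the entries in `extra-openai-models.yaml` could be declared once and reused wherever the same field list is currently repeated:

```python
# Hypothetical sketch only: llm does not currently define this class.
# Field names mirror the extra-openai-models.yaml entry used in this issue;
# the types and defaults are assumptions.
from typing import Optional
from pydantic import BaseModel


class ExtraOpenAIModel(BaseModel):
    model_id: str
    model_name: Optional[str] = None
    api_base: Optional[str] = None
    api_key_name: Optional[str] = None
    can_stream: bool = True  # default to streaming; let the YAML turn it off


# Validating one YAML entry (already parsed into a dict) would then be:
entry = {
    "model_id": "o1-via-proxy",
    "model_name": "o1-preview",
    "api_base": "http://localhost:8040/v1",
    "api_key_name": "openai",
    "can_stream": False,
}
config = ExtraOpenAIModel(**entry)
print(config.can_stream)  # False
```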
Here's how I tested this: I added this to my `extra-openai-models.yaml`:

```yaml
- model_id: o1-via-proxy
  model_name: o1-preview
  api_base: "http://localhost:8040/v1"
  api_key_name: openai
  can_stream: false
```

Then I ran a proxy on port 8040 like this:

```bash
uv run --with asgi-proxy-lib==0.2a0 \
  python -m asgi_proxy \
  https://api.openai.com -p 8040 -v
```

And tested it like this:

```bash
llm -m o1-via-proxy 'just say hi'
```

Output:
While my proxy server showed:
I had to fix this issue first though:
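As a side note (not part of the original comment), one way to sanity-check that the flag is actually picked up is via llm's Python API; the `can_stream` attribute here is an assumption based on how llm plugin models declare streaming support:

```python
# Assumes llm is installed and extra-openai-models.yaml contains the
# o1-via-proxy entry shown above. The can_stream attribute is an
# assumption based on llm's plugin model conventions.
import llm

model = llm.get_model("o1-via-proxy")
print(model.can_stream)  # expected: False once the YAML flag is honoured

# Equivalent of: llm -m o1-via-proxy 'just say hi'
response = model.prompt("just say hi")
print(response.text())
```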
o1 support was added in response to an earlier issue. However, this hardwires streamability to the named o1 models. I am accessing o1-preview via a (litellm) proxy, so I get:

```
'message': 'litellm.BadRequestError: AzureException BadRequestError - Error code: 400 - {\'error\': {\'message\': "Unsupported value: \'stream\' does not support true with this model. Only the default (false) value is supported.", \'type\': \'invalid_request_error\'
```

I believe I need to be able to set `can_stream: false` for this model in `extra-openai-models.yaml`; however, `can_stream` is currently ignored.
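For illustration only (this is a hypothetical helper, not llm's real source), the change being asked for amounts to letting an explicit `can_stream` in the YAML entry override the name-based default:

```python
# Hypothetical helper, not llm's actual code: decide whether a configured
# model may stream. An explicit can_stream in the YAML entry wins; otherwise
# fall back to a name-based default like the current hardwired behaviour.
def resolve_can_stream(entry: dict) -> bool:
    if "can_stream" in entry:
        return bool(entry["can_stream"])
    # name-based default: o1-style models don't support streaming
    model_name = entry.get("model_name", entry.get("model_id", ""))
    return not model_name.startswith("o1")


# With the o1-via-proxy entry from this issue:
entry = {
    "model_id": "o1-via-proxy",
    "model_name": "o1-preview",
    "api_base": "http://localhost:8040/v1",
    "api_key_name": "openai",
    "can_stream": False,
}
assert resolve_can_stream(entry) is False
```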