[Feature Request] Support other LLM providers #62
Comments
Sounds great, thanks for raising this. This has been on our roadmap for a while and we would appreciate any external contributions on this. LiteLLM seems like a good choice. Our current LLM setup is a bit of a mess anyway.
If you can take this on, please consider the above.
Awesome, thanks for the go-ahead! I'll build that LLM class with LiteLLM and make sure it handles the JSON schema and temperature differences between models.
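To make the "JSON schema and temperature differences" point concrete, here is a minimal sketch of what such a normalization layer might look like. Everything here is illustrative: the `MODEL_CAPS` table, the capability flags, and `build_request` are assumptions for this sketch, not superglue's actual implementation; only `litellm.completion`'s OpenAI-style kwargs (`model`, `messages`, `temperature`, `response_format`) are taken from LiteLLM itself.

```python
from typing import Optional

# Hypothetical capability table: which models (in this sketch) accept
# OpenAI-style JSON-schema output and a temperature parameter.
# Real capabilities vary by provider and change over time.
MODEL_CAPS = {
    "gpt-4o": {"json_schema": True, "temperature": True},
    "gemini/gemini-1.5-pro": {"json_schema": False, "temperature": True},
    "anthropic/claude-3-5-sonnet-20240620": {"json_schema": False, "temperature": True},
}

def build_request(model: str, messages: list,
                  schema: Optional[dict] = None,
                  temperature: Optional[float] = 0.0) -> dict:
    """Build LiteLLM completion kwargs, dropping options a model lacks."""
    caps = MODEL_CAPS.get(model, {"json_schema": False, "temperature": True})
    kwargs = {"model": model, "messages": list(messages)}
    if temperature is not None and caps["temperature"]:
        kwargs["temperature"] = temperature
    if schema is not None:
        if caps["json_schema"]:
            # Native structured output for models that support it.
            kwargs["response_format"] = {
                "type": "json_schema",
                "json_schema": {"name": "result", "schema": schema},
            }
        else:
            # Fallback: ask for schema-conforming JSON in the prompt.
            kwargs["messages"].append({
                "role": "system",
                "content": f"Respond only with JSON matching this schema: {schema}",
            })
    return kwargs
```

The resulting dict would be passed straight to `litellm.completion(**kwargs)`, keeping the per-model quirks in one place.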
@sfaist I was thinking of using LiteLLM's JavaScript library (LiteLLM is a Python project), but the community-maintained library is poorly maintained. So there are two options:
I am thinking of using the LiteLLM proxy server. We can route requests from superglue to it, and it will route each request to the respective LLM. superglue already has a Docker Compose setup, so deployment can be done easily with Docker. Should I go ahead with it?
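For reference, the proxy-server option might look roughly like this in Docker Compose. This is a hypothetical sketch: the service name, port, config path, and environment variables are assumptions for illustration, not superglue's actual compose file.

```yaml
# Sketch: add a LiteLLM proxy service alongside the existing superglue services.
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    ports:
      - "4000:4000"
    volumes:
      # Model routing (which provider serves which model name) lives in
      # a LiteLLM config file mounted into the container.
      - ./litellm_config.yaml:/app/config.yaml
    command: ["--config", "/app/config.yaml"]
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GEMINI_API_KEY=${GEMINI_API_KEY}
```

superglue would then point its OpenAI-compatible client at `http://litellm:4000` instead of the provider directly.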
Since we are already a proxy, adding another proxy seems like a bit much, particularly since we want to keep the setup lightweight. I have double-checked with some very experienced folks, and they recommend the Vercel AI SDK. What do you think about that?
I get the concern about adding another proxy layer, but I think LiteLLM offers some real advantages that outweigh that. What do you think about sticking with the LiteLLM implementation I've built? I believe it will make our code cleaner and easier to maintain as we add more LLM providers down the road.
Hey @sfaist
I'd like to add multi-LLM support to Superglue using LiteLLM. Some users might prefer Gemini, Claude, or other models instead of OpenAI, and this would let them easily switch.
LiteLLM is a good fit for this since it's a drop-in replacement for the OpenAI SDK - minimal code changes needed. Users would just change an env var to switch providers.
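The env-var switch described above could be sketched like this. The variable name `LLM_MODEL`, the default model, and the `complete` helper are assumptions for this sketch; what is taken from LiteLLM is that `litellm.completion` mirrors the OpenAI chat-completion call and routes by the provider prefix in the model string (e.g. `gemini/...`, `anthropic/...`).

```python
import os

def current_model(default: str = "gpt-4o") -> str:
    """Read the active model from the environment, with a fallback default."""
    # Hypothetical env var name; switching providers is just changing this value.
    return os.environ.get("LLM_MODEL", default)

def complete(messages: list):
    """Call whichever provider LLM_MODEL selects, via LiteLLM."""
    # litellm.completion accepts the same shape as openai's chat completions,
    # so existing OpenAI call sites only need the model string changed.
    from litellm import completion  # requires `pip install litellm` and provider API keys
    return completion(model=current_model(), messages=messages)
```

Usage: setting `LLM_MODEL=gemini/gemini-1.5-pro` in the environment would route all calls to Gemini with no code changes.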
This would be my implementation plan:
If this repository is open to external contributions, I'd like to submit a PR.
Thanks!