Custom inline completion providers #18490
Comments
We have recently extended the … I believe it could be an interesting option to explore in the scope of this issue. Feel free to ping me if you have any questions.
I think Zed AI should also provide its own code completion functionality.
Does Zed allow using Codestral cloud for FIM?
https://arstechnica.com/ai/2025/01/nvidias-first-desktop-pc-can-run-local-ai-models-for-3000/ is relevant here. FIM models aren't all that big; one of these could probably handle an entire office's worth of requests. I'd really like to be able to use a llama.cpp model for FIM.
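For context, llama.cpp's bundled server already exposes a fill-in-the-middle endpoint, so the plumbing a custom provider would need is small. Here is a minimal sketch, assuming a local `llama-server` on its default port 8080 serving a FIM-capable GGUF model; the host, port, and snippet contents are illustrative:

```python
import requests

# Ask llama.cpp's /infill endpoint to generate the code between a
# prefix and a suffix, which is exactly what editor FIM completion needs.
resp = requests.post(
    "http://localhost:8080/infill",  # assumed default llama-server address
    json={
        "input_prefix": "def fib(n):\n    a, b = 0, 1\n    ",
        "input_suffix": "\n    return a\n",
        "n_predict": 64,  # cap the length of the generated middle span
    },
)
print(resp.json()["content"])  # the completion to splice between prefix and suffix
```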
Configurable FIM support should be a priority. While the improvements to the Assistant panel over time have been great, the lack of customizable AI autocomplete in Zed is becoming a significant drawback compared to the code assistance experiences offered by others, such as …
Summary: Custom inline completion providers for local models or other platforms
--
After going through https://zed.dev/docs/completions:
Zed currently supports inline completions only via external services such as GitHub Copilot and Supermaven, which is restrictive. Many users, for privacy or performance reasons, would prefer alternatives such as Gemini Flash or local models served through Ollama.
There are several advanced LLMs that support the Fill-in-the-Middle (FIM) objective, such as CodeGemma. Additionally, platforms like Continue.dev already allow code completion via local models, with StarCoder via Ollama as the default.
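To illustrate how little a custom provider would need, here is a minimal sketch of FIM prompting through Ollama's `/api/generate` endpoint. The model name and sentinel tokens are assumptions: StarCoder-family models use `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>`, while other models (e.g. CodeGemma) use different sentinels, so a real provider would template this per model:

```python
import requests

# Drive a FIM-capable model through Ollama by sending the model's own
# fill-in-the-middle sentinel tokens as a raw prompt (no chat template).
prefix = "def add(a, b):\n    "
suffix = "\n"
resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local address
    json={
        "model": "starcoder2",  # assumed: any locally pulled FIM-capable model
        "prompt": f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>",
        "raw": True,     # bypass Ollama's prompt templating
        "stream": False, # return one JSON object instead of a stream
        "options": {"num_predict": 64},
    },
)
print(resp.json()["response"])  # the generated middle, e.g. "return a + b"
```

An editor-side provider would do essentially this on every keystroke pause: send the text before and after the cursor, then surface the returned middle span as a ghost-text suggestion.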
Expanding LLM support to include more flexible, local, or privacy-focused options would greatly enhance Zed's appeal and utility for a wider range of developers.