offline chat generates garbage output #516
Ah, that's unfortunate that you're seeing a regression in behavior. Can you share details of your machine specs, specifically the RAM, processor, and GPU?

Details: We'd started using an upgraded default model for offline chat (Mistral), but Vulkan support in our upstream dependency (GPT4All) still needs some ironing out. Until then, I've exposed a CLI flag that lets users disable using the GPU for offline chat. To use this fix:

@mtoniott: Let me know if this mitigates the issue with offline chat generating gibberish output.
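The exact invocation is cut off in the comment above. As a hedged sketch only, the flag might be passed when starting the server like this (the flag name `--disable-chat-on-gpu` is an assumption, not confirmed by this thread; check `khoj --help` for the actual name):

```shell
# Hypothetical: start khoj with GPU acceleration disabled for offline chat.
# Flag name is an assumption; verify against `khoj --help`.
khoj --disable-chat-on-gpu
```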
Hello,
Thank you for this software getting better everyday.
I recently updated khoj, and the offline chat that was giving me normal answers before is now outputting garbage containing "Mississippi" each time, for some reason.

I tried reinstalling khoj in a new venv. Did not work.
I tried turning off offline chat, removing the model from my .cache directory, and then redownloading it. Same result.
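For reference, the reinstall and cache-reset steps I tried look roughly like this (paths and the package name are assumptions; the model cache location may differ on your system):

```shell
# Recreate a clean virtual environment (paths are hypothetical)
python -m venv ~/.venvs/khoj
source ~/.venvs/khoj/bin/activate
pip install --upgrade khoj-assistant  # package name assumed

# Remove the cached offline chat model so it is re-downloaded on next start.
# The exact cache directory is an assumption; locate yours under ~/.cache.
rm -rf ~/.cache/gpt4all
```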
I guess it is linked to the fact that it wants to use my Intel integrated graphics to accelerate the queries, but I did not find a way to turn that off. I get the following line in the terminal:

```
llama.cpp: using Vulkan on Intel(R) Iris(R) Plus Graphics 655 (CFL GT3)
```
Any idea on how to fix this?