llama_generate_text: error: unable to load model #14
I point the file to the .gguf but only receive this output when attempting to generate text.

Comments
Hi, can you run Godot from a terminal and paste the terminal output here? There should be a bit more information.
I meant to make this an issue under the addon's GitHub, but here is the console output. It actually works fine with the CPU build of the addon, but the Vulkan build fails to load the model.

```
Vulkan API 1.3.277 - Forward Mobile - Using Vulkan Device #0: NVIDIA - NVIDIA GeForce RTX 4080 Laptop GPU
test1 test1
```
The GPU has 12 GB of VRAM, so it shouldn't be out of memory. I also tried the 5 GB model; same issue.
@TechnicalParadox I have transferred the issue to this addon's repo. This is very likely an upstream bug (two Vulkan devices reported for the same GPU). Can you try this new build, godot_windows_release.zip, with split mode set to none? Be aware that the …
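For context, here is a minimal sketch of how the two settings discussed here map onto the upstream llama.cpp C API that the addon wraps. This is not the addon's actual code; the model path and the printed error text are illustrative assumptions, and the field names come from llama.cpp's `llama_model_params`.

```cpp
// Minimal sketch (not the addon's code): the "split mode" and "main gpu"
// settings correspond to llama.cpp's split_mode and main_gpu fields.
#include <cstdio>
#include "llama.h"

int main() {
    llama_backend_init();

    llama_model_params params = llama_model_default_params();
    params.n_gpu_layers = 99;                    // offload all layers to the GPU
    params.split_mode   = LLAMA_SPLIT_MODE_NONE; // single device, no layer/row split
    params.main_gpu     = 0;                     // device index 0 works here; 1 does not

    // "model.gguf" is a placeholder path for illustration.
    llama_model *model = llama_load_model_from_file("model.gguf", params);
    if (model == nullptr) {
        // Mirrors the error reported in this issue's title.
        fprintf(stderr, "error: unable to load model\n");
        return 1;
    }

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```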
The new build works with split mode set to none and the main GPU at the default of 0. Setting the main GPU to 1 fails to load the model. Both Vulkan devices still show in the command prompt. Thank you! Much faster than CPU generation once it loads onto the GPU.
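For anyone debugging the duplicate-device listing, a standalone sketch like the following enumerates the Vulkan physical devices the same way a backend builds its device list; the index printed is what a `main_gpu`-style setting selects into. This uses only standard Vulkan API calls and assumes the Vulkan SDK is installed; nothing here is addon-specific.

```cpp
// Minimal sketch: list Vulkan physical devices to see why one GPU
// can appear twice (e.g. exposed through more than one driver/ICD).
#include <cstdio>
#include <vector>
#include <vulkan/vulkan.h>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_3;

    VkInstanceCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) {
        fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    // A laptop with one discrete GPU can still report two entries here;
    // a main-gpu setting of 0 or 1 indexes into this list.
    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("Vulkan Device #%u: %s\n", i, props.deviceName);
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```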