Misc. bug: Vulkan is not optional at runtime #11493
- daym added 6 commits to daym/llama.cpp that referenced this issue (Jan 29, 2025)
- daym added 2 commits to daym/llama.cpp that referenced this issue (Feb 7, 2025): "Co-authored-by: Jeff Bolz <jbolz@nvidia.com>"
- 0cc4m pushed a commit that referenced this issue (Feb 10, 2025)
- tinglou pushed a commit to tinglou/llama.cpp that referenced this issue (Feb 13, 2025): "…1494) Co-authored-by: Jeff Bolz <jbolz@nvidia.com>"
- orca-zhang pushed a commit to orca-zhang/llama.cpp that referenced this issue (Feb 26, 2025): "…1494) Co-authored-by: Jeff Bolz <jbolz@nvidia.com>"
- arthw pushed a commit to arthw/llama.cpp that referenced this issue (Feb 26, 2025): "…1494) Co-authored-by: Jeff Bolz <jbolz@nvidia.com>"
- ubergarm pushed a commit to ubergarm/llama.cpp that referenced this issue (Mar 1, 2025): "…1494) Co-authored-by: Jeff Bolz <jbolz@nvidia.com>"
Name and Version
With llama.cpp built from git tag b4549 with Vulkan support compiled in, running in an environment where Vulkan is not supported (for example, a Linux distribution ships llama.cpp with the Vulkan backend enabled, but the user has no Vulkan-capable GPU) fails with an exception.
It would be better to detect this case at runtime, disable Vulkan, and fall back to running on the CPU (see the probe sketch at the end of this report).
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
libllama (core library)
Command line
Problem description & steps to reproduce
First Bad Commit
No response
Relevant log output
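For illustration, here is a minimal sketch of the kind of runtime probe this report asks for; it is an assumed approach, not the fix that actually landed in llama.cpp. Before selecting the Vulkan backend, check whether the Vulkan loader can create an instance and enumerate at least one physical device, and fall back to the CPU backend otherwise. The helper name `vulkan_is_usable` is hypothetical; the probe uses only the standard Vulkan C API and links with `-lvulkan`.

```cpp
#include <cstdio>
#include <vulkan/vulkan.h>

// Hypothetical probe: returns true only if the Vulkan loader can create an
// instance and at least one physical device is present. On a machine with no
// installed ICD, vkCreateInstance typically fails with
// VK_ERROR_INCOMPATIBLE_DRIVER instead of crashing, so this fails cleanly.
static bool vulkan_is_usable() {
    VkApplicationInfo app = {};
    app.sType            = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "vulkan-probe";
    app.apiVersion       = VK_API_VERSION_1_0;

    VkInstanceCreateInfo ci = {};
    ci.sType            = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance inst = VK_NULL_HANDLE;
    if (vkCreateInstance(&ci, nullptr, &inst) != VK_SUCCESS) {
        return false;
    }

    // A working loader with zero devices (e.g. software ICD removed) should
    // also fall back to CPU, so require at least one physical device.
    uint32_t dev_count = 0;
    VkResult res = vkEnumeratePhysicalDevices(inst, &dev_count, nullptr);
    vkDestroyInstance(inst, nullptr);
    return res == VK_SUCCESS && dev_count > 0;
}

int main() {
    if (vulkan_is_usable()) {
        std::printf("Vulkan usable: select the Vulkan backend\n");
    } else {
        std::printf("No usable Vulkan driver: fall back to the CPU backend\n");
    }
    return 0;
}
```

The point of probing before backend selection is that the failure is reported as an ordinary boolean rather than as an exception escaping from backend initialization, which is what makes Vulkan effectively mandatory at runtime in the build described above.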