
Misc. bug: Vulkan is not optional at runtime #11493

Open
daym opened this issue Jan 29, 2025 · 0 comments
Comments

daym (Contributor) commented Jan 29, 2025

Name and Version

With llama.cpp at git tag b4549, if Vulkan support is compiled in and the binary is then run in an environment where Vulkan is not available (for example, a Linux distribution ships llama.cpp with Vulkan enabled but the user has no Vulkan-capable GPU), it fails with the following exception:

terminate called after throwing an instance of 'vk::IncompatibleDriverError'
  what():  vk::createInstance: ErrorIncompatibleDriver

It would be better to disable Vulkan in this case and fall back to running on the CPU.
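One possible direction (a minimal sketch only, assuming the Vulkan-Hpp bindings; `create_instance_or_null` is a hypothetical helper, not llama.cpp's actual API): catch the exception that Vulkan-Hpp throws from `vk::createInstance` when no compatible driver is installed, and treat it as "zero Vulkan devices" so the CPU backend is used instead:

```cpp
// Minimal sketch (hypothetical helper, not llama.cpp's actual code):
// catch the Vulkan-Hpp exception thrown when no compatible driver exists
// and signal "Vulkan unavailable" instead of letting the process abort.
#include <vulkan/vulkan.hpp>
#include <cstdio>

static vk::Instance create_instance_or_null() {
    vk::ApplicationInfo app_info("llama.cpp", 1, "ggml", 1, VK_API_VERSION_1_2);
    vk::InstanceCreateInfo create_info({}, &app_info);
    try {
        // Throws vk::IncompatibleDriverError (a vk::SystemError) when the
        // loader finds no usable ICD, e.g. on a machine without a Vulkan GPU.
        return vk::createInstance(create_info);
    } catch (const vk::SystemError &e) {
        std::fprintf(stderr, "Vulkan unavailable (%s); falling back to CPU\n", e.what());
        return nullptr; // caller treats a null instance as "0 Vulkan devices"
    }
}
```

With a guard along these lines, a distribution could enable Vulkan at build time and the resulting binary would still run on machines that have no Vulkan driver.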

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

libllama (core library)

Command line

Any llama-cli command (yes, even `llama-cli -dev none`).

Problem description & steps to reproduce

  1. Compile with GGML_VULKAN=ON
  2. Run without GPU
  3. It crashes with the exception shown above

First Bad Commit

No response

Relevant log output

daym added commits to daym/llama.cpp that referenced this issue (Jan 29 to Feb 7, 2025; the Feb 7 commits are co-authored-by Jeff Bolz <jbolz@nvidia.com>)
0cc4m pushed a commit that referenced this issue Feb 10, 2025 (co-authored-by Jeff Bolz <jbolz@nvidia.com>)
tinglou, orca-zhang, arthw, and ubergarm pushed commits to their forks that referenced this issue (Feb 13 to Mar 1, 2025)
@github-actions github-actions bot added the stale label Mar 1, 2025