
[SYCL] Bug: check_allow_gpu_index error: device_index:0 is out of range: [0--2] #7718

Status: Closed
nickp27 opened this issue on Jun 3, 2024 · 2 comments
Labels: bug-unconfirmed, medium severity (used to report medium severity bugs in llama.cpp, e.g. malfunctioning features that are still usable), stale

Comments

@nickp27 (Contributor) commented on Jun 3, 2024:

What happened?

SYCL bug when trying to use the SYCL backend as an RPC backend.

Seems to be the same bug as present in whisper.cpp - ggerganov/whisper.cpp#2033

Will keep digging around to find the cause but posting in the meantime.

Name and Version

[b3071](https://github.com/ggerganov/llama.cpp/releases/tag/b3071)

What operating system are you seeing the problem on?

Linux

Relevant log output

create_backend: using SYCL backend
[SYCL] call ggml_init_sycl
ggml_init_sycl: GGML_SYCL_DEBUG: 0
ggml_init_sycl: GGML_SYCL_F16: yes
found 4 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]|                  Intel HD Graphics 630|    1.3|     24|     256|   32| 62191M|            1.3.27642|
| 1|     [opencl:gpu:0]|                  Intel HD Graphics 630|    3.0|     24|     256|   32| 62191M|       23.43.27642.40|
| 2|     [opencl:cpu:0]|       Intel Core i5-7500 CPU @ 3.40GHz|    3.0|      4|    8192|   64| 66662M|2024.17.3.0.08_160000|
| 3|     [opencl:acc:0]|            Intel FPGA Emulation Device|    1.2|      4|67108864|   64| 66662M|2024.17.3.0.08_160000|
check_allow_gpu_index error: device_index:0 is out of range: [0--2]
check_allow_gpu_index error: device_index:0 is out of range: [0--2]
8:02PM INF Starting llama-cpp-rpc-server on '127.0.0.1:33521'
nickp27 added the bug-unconfirmed and medium severity labels on Jun 3, 2024.
@rgerganov (Collaborator) commented:

As @slaren pointed out in #7808, the SYCL backend uses init_tensor to set tensor extra properties, and this is not currently supported by the RPC backend.

The github-actions bot added the stale label on Jul 8, 2024.
This issue was closed because it has been inactive for 14 days since being marked as stale.
