
Can't run in SYCL mode, check_allow_gpu_index error: device_index:0 is out of range: [0--2] #2033

Closed
barolo opened this issue Apr 7, 2024 · 12 comments

Comments

@barolo

barolo commented Apr 7, 2024

GGML_SYCL_DEVICE=0 ./build/bin/main -m models/ggml-base.bin -f samples/jfk.wav
whisper_init_from_file_with_params_no_state: loading model from 'models/ggml-base.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51865
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 512
whisper_model_load: n_audio_head  = 8
whisper_model_load: n_audio_layer = 6
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 512
whisper_model_load: n_text_head   = 8
whisper_model_load: n_text_layer  = 6
whisper_model_load: n_mels        = 80
whisper_model_load: ftype         = 1
whisper_model_load: qntvr         = 0
whisper_model_load: type          = 2 (base)
whisper_model_load: adding 1608 extra tokens
whisper_model_load: n_langs       = 99
whisper_backend_init: using SYCL backend
[SYCL] call ggml_init_sycl
ggml_init_sycl: GGML_SYCL_DEBUG: 1
ggml_init_sycl: GGML_SYCL_F16: yes
[SYCL] call ggml_backend_sycl_print_sycl_devices
found 4 SYCL devices:
|  |                  |                                             |Compute   |Max compute|Max work|Max sub|               |
|ID|       Device Type|                                         Name|capability|units      |group   |group  |Global mem size|
|--|------------------|---------------------------------------------|----------|-----------|--------|-------|---------------|
| 0|[level_zero:gpu:0]|                 Intel(R) Iris(R) Xe Graphics|       1.3|         80|     512|     32|    14919602176|
| 1|    [opencl:gpu:0]|                 Intel(R) Iris(R) Xe Graphics|       3.0|         80|     512|     32|    14919602176|
| 2|    [opencl:cpu:0]|          12th Gen Intel(R) Core(TM) i5-1240P|       3.0|         16|    8192|     64|    16373895168|
| 3|    [opencl:acc:0]|               Intel(R) FPGA Emulation Device|       1.2|         16|67108864|     64|    16373895168|
check_allow_gpu_index error: device_index:0 is out of range: [0--2]
check_allow_gpu_index error: device_index:0 is out of range: [0--2]
Segmentation fault
@Mortezanavidi

Any ideas on this? I'm facing the same problem.

@ggerganov
Owner

cc @abhilash1910

@abhilash1910
Contributor

@barolo is it using the latest master? Will take a look.

@barolo
Author

barolo commented May 21, 2024

> @barolo is it using latest master? Will take a look.

Yes, but it was a month ago. I'll recheck tonight

@tannisroot

> @barolo is it using latest master? Will take a look.

This is still present on latest master.

@simonlui

simonlui commented Jun 2, 2024

I tried it recently and it still seems to be happening. If I comment out the referenced function, check_allow_gpu_index, which does the device_id checking, and stick with device 0 (which points to my GPU's Level Zero instance), the end result is still a segmentation fault.

@tannisroot

@abhilash1910 hello, were you able to reproduce this? 😃

@abhilash1910
Contributor

Yes, I will start on this shortly; appreciate the ping. Thanks. It is mainly due to how the backend init is called, which needs an update.

@tannisroot

> Yes will start on this shortly, appreciate the ping. Thanks. Mainly due to calling of init backend, which needs update.

And I appreciate you working on this, thank you!

@simonlui

It's been a while, but I can report that this was solved by the SYCL backend updates that went into llama.cpp and were synced to whisper.cpp. I no longer hit this particular error when using SYCL.

@tannisroot

Same here, this can be closed.

@barolo
Author

barolo commented Aug 30, 2024

Can confirm, closing it.

@barolo barolo closed this as completed Aug 30, 2024
6 participants