
Compile bug: I'm trying to compile llama.cpp with SYCL support on Arch Linux with the glibc 2.41+r2+g0a7c7a3e283a-1 #11713

Open · dpriedel opened this issue Feb 6, 2025 · 4 comments

dpriedel commented Feb 6, 2025

Git commit

HEAD as of the time I post this

9ab42dc

Operating systems

Linux

GGML backends

SYCL

Problem description & steps to reproduce

I'm trying to compile llama.cpp with SYCL support on Arch Linux with the glibc 2.41+r2+g0a7c7a3e283a-1 and gcc 14.2.1+r730+gc061ad5a36ba-1 packages. Prior to installing these updates, I had no problems with the compile.

I am also using the Intel oneAPI Base Toolkit, version 2025.0.1.

First Bad Commit

N/A

Compile command

I do `rm -rf build` and then

cmake -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release

followed by:

cmake --build build --config Release -j 10

Relevant log output

There are multiple instances of the following type of output:

In file included from /vol_KUtil2/LLM_Models/llama.cpp/ggml/src/ggml-sycl/conv.cpp:13:
In file included from /vol_KUtil2/LLM_Models/llama.cpp/ggml/src/ggml-sycl/conv.hpp:16:
In file included from /vol_KUtil2/LLM_Models/llama.cpp/ggml/src/ggml-sycl/common.hpp:16:
In file included from /usr/lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/fstream:40:
In file included from /usr/lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/istream:40:
In file included from /usr/lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/ios:40:
In file included from /usr/lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/iosfwd:42:
In file included from /usr/lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/bits/postypes.h:40:
In file included from /usr/lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/cwchar:44:
In file included from /usr/include/wchar.h:30:
/usr/include/bits/floatn.h:79:52: error: unsupported machine mode '__TC__'
   79 | typedef _Complex float __cfloat128 __attribute__ ((__mode__ (__TC__)));
      |                                                    ^
Rbiessy (Collaborator) commented Feb 6, 2025

Hello @dpriedel, I'm not sure the issue is coming from llama.cpp. The error seems to come from including <fstream> in your particular environment. Have you tried compiling a simpler C++ application that also includes <fstream>, with the same compiler, environment, and compilation flags used to build ggml/src/ggml-sycl/conv.cpp?
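
For instance, a minimal probe along these lines (a sketch; the file name and the exact icpx invocation are assumptions, not taken from this thread) would show whether the header chain alone fails under icpx:

```cpp
// fstream_probe.cpp -- hypothetical minimal test case, not from this thread.
// Build with roughly the flags CMake passes to the SYCL sources, e.g.:
//   icpx -fsycl fstream_probe.cpp -o fstream_probe
// If this alone fails with "unsupported machine mode '__TC__'", the problem
// lies in the compiler/glibc combination rather than in llama.cpp.
#include <fstream>

int main() {
    std::ofstream out("probe.txt"); // <fstream> eventually pulls in bits/floatn.h
    out << "fstream compiled and ran fine\n";
    return 0;
}
```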


dpriedel commented Feb 6, 2025

I removed the include for fstream since it's not needed, but the same error recurs from the iostream header, which triggers the same sequence of includes and errors out on the complex-float typedef.

The code in the floatn.h header indicates it is for 'older' compilers. I wonder if this is an Intel oneAPI compiler problem?
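
For reference, the failing typedef sits behind a compiler-version guard roughly like the following (a paraphrased sketch, not a verbatim copy; the exact condition varies across glibc versions, so check the installed /usr/include/bits/floatn.h):

```cpp
/* Paraphrased sketch of the guard in /usr/include/bits/floatn.h; the
   exact preprocessor condition differs between glibc releases. */
#if !__GNUC_PREREQ (7, 0) || defined __cplusplus
/* "Older" compilers (and some C++ front ends) take this branch, which
   icpx rejects with "unsupported machine mode '__TC__'". */
typedef _Complex float __cfloat128 __attribute__ ((__mode__ (__TC__)));
#else
typedef _Complex _Float128 __cfloat128;
#endif
```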

Thanks.

PS. If I build for the Vulkan back-end, llama.cpp compiles.

@JohannesGaessler changed the title from "Compile bug:" to "Compile bug: I'm trying to compile llama.cpp with SYCL support on Arch Linux with the glibc 2.41+r2+g0a7c7a3e283a-1" on Feb 6, 2025
qnixsynapse (Contributor) commented Feb 7, 2025

This is the SYCL compiler's fault, not llama.cpp's. I also encountered it while working on this PR (comment).

I think they ship a dated version of the compiler in intel-oneapi-base-toolkit. Please report it on their repository (I think it's intel/llvm).

For now, downgrade glibc to make SYCL work again.

Also, there is a confirmed bug in the Intel compute runtime.


dpriedel commented Feb 8, 2025

I'm trying to make a small test case that reproduces the problem...
