
Compile bug: iOS Swift Xcode build error when upgrading llama.cpp: use cmake for swift build #10747

Open
jiabochao opened this issue Dec 10, 2024 · 20 comments


@jiabochao

Git commit

$ git rev-parse HEAD
43ed389

Operating systems

Mac

GGML backends

Metal

Problem description & steps to reproduce

iOS Swift Xcode build error after upgrading llama.cpp.

Before the upgrade, the code compiled successfully. After the upgrade, it throws a compilation error: "Cannot find type 'xxx' in scope."


First Bad Commit

43ed389

Relevant log output

/ios/llama.cpp.swift/LibLlama.swift:8:39 Cannot find type 'llama_batch' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:37 Cannot find type 'llama_batch' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:56 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:76 Cannot find type 'llama_pos' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:99 Cannot find type 'llama_seq_id' in scope
/ios/llama.cpp.swift/LibLlama.swift:27:48 Cannot find type 'llama_sampler' in scope
/ios/llama.cpp.swift/LibLlama.swift:28:24 Cannot find type 'llama_batch' in scope
/ios/llama.cpp.swift/LibLlama.swift:29:31 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:44:22 Cannot find 'llama_batch_init' in scope
/ios/llama.cpp.swift/LibLlama.swift:46:23 Cannot find 'llama_sampler_chain_default_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:47:25 Cannot find 'llama_sampler_chain_init' in scope
/ios/llama.cpp.swift/LibLlama.swift:48:9 Cannot find 'llama_sampler_chain_add' in scope
/ios/llama.cpp.swift/LibLlama.swift:48:48 Cannot find 'llama_sampler_init_temp' in scope
/ios/llama.cpp.swift/LibLlama.swift:49:9 Cannot find 'llama_sampler_chain_add' in scope
/ios/llama.cpp.swift/LibLlama.swift:49:48 Cannot find 'llama_sampler_init_dist' in scope
/ios/llama.cpp.swift/LibLlama.swift:53:9 Cannot find 'llama_sampler_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:54:9 Cannot find 'llama_batch_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:55:9 Cannot find 'llama_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:56:9 Cannot find 'llama_free_model' in scope
/ios/llama.cpp.swift/LibLlama.swift:57:9 Cannot find 'llama_backend_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:61:9 Cannot find 'llama_backend_init' in scope
/ios/llama.cpp.swift/LibLlama.swift:62:28 Cannot find 'llama_model_default_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:68:21 Cannot find 'llama_load_model_from_file' in scope
/ios/llama.cpp.swift/LibLlama.swift:77:26 Cannot find 'llama_context_default_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:82:23 Cannot find 'llama_new_context_with_model' in scope
/ios/llama.cpp.swift/LibLlama.swift:100:22 Cannot find 'llama_model_desc' in scope
/ios/llama.cpp.swift/LibLlama.swift:121:21 Cannot find 'llama_n_ctx' in scope
/ios/llama.cpp.swift/LibLlama.swift:142:12 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:150:27 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:152:24 Cannot find 'llama_sampler_sample' in scope
/ios/llama.cpp.swift/LibLlama.swift:154:12 Cannot find 'llama_token_is_eog' in scope
/ios/llama.cpp.swift/LibLlama.swift:185:12 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:211:13 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:213:30 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:215:16 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:218:13 Cannot find 'llama_synchronize' in scope
/ios/llama.cpp.swift/LibLlama.swift:220:28 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:224:13 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:226:30 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:235:20 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:238:17 Cannot find 'llama_synchronize' in scope
/ios/llama.cpp.swift/LibLlama.swift:241:28 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:243:13 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:245:24 No exact matches in call to initializer
/ios/llama.cpp.swift/LibLlama.swift:246:24 No exact matches in call to initializer
/ios/llama.cpp.swift/LibLlama.swift:254:32 Cannot convert value of type 'Duration' to expected argument type 'Double'
/ios/llama.cpp.swift/LibLlama.swift:255:32 Cannot convert value of type 'Duration' to expected argument type 'Double'
/ios/llama.cpp.swift/LibLlama.swift:272:64 Cannot find 'llama_model_size' in scope
/ios/llama.cpp.swift/LibLlama.swift:273:62 Cannot find 'llama_model_n_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:293:9 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:296:60 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:299:43 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:300:26 Cannot find 'llama_tokenize' in scope
/ios/llama.cpp.swift/LibLlama.swift:302:27 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:313:40 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:319:23 Cannot find 'llama_token_to_piece' in scope
/ios/llama.cpp.swift/LibLlama.swift:327:30 Cannot find 'llama_token_to_piece' in scope
/ios/llama.cpp.swift/LibLlama.swift:328:33 Generic parameter 'Element' could not be inferred

~/Library/Developer/Xcode/DerivedData/Runner-efnwjojzxwrmmpfdjskgbtmftvem/SourcePackages/checkouts/llama.cpp/Sources/llama/llama.h:3:10 'llama.h' file not found with <angled> include; use "quotes" instead
@nvoter

nvoter commented Dec 11, 2024

same issue

@pgorzelany

Can confirm, same issue

@slaren
Collaborator

slaren commented Dec 11, 2024

The way it works now is that you need to build llama.cpp with CMake, and then install it using cmake --install. This should allow Swift to find the llama.cpp library. See the way the CI builds the Swift example:

- name: Build llama.cpp with CMake
  id: cmake_build
  run: |
    sysctl -a
    mkdir build
    cd build
    cmake -G Xcode .. \
      -DGGML_METAL_USE_BF16=ON \
      -DGGML_METAL_EMBED_LIBRARY=ON \
      -DLLAMA_BUILD_EXAMPLES=OFF \
      -DLLAMA_BUILD_TESTS=OFF \
      -DLLAMA_BUILD_SERVER=OFF \
      -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64"
    cmake --build . --config Release -j $(sysctl -n hw.logicalcpu)
    sudo cmake --install . --config Release

- name: xcodebuild for swift package
  id: xcodebuild
  run: |
    xcodebuild -scheme llama-Package -destination "${{ matrix.destination }}"

@pgorzelany

First of all, thank you for providing some clarification. I don't usually use CMake so I am not familiar with the build process, but the project still exposes a Package.swift file which does not seem to work currently (even the example SwiftUI projects are broken).

Previously, when developing for iOS and macOS, we could point Xcode to the llama.cpp Swift package and it would "just work", which was pretty nice. If there are additional steps to be done now, can we have some additional documentation around the process?

@ggerganov
Owner

ggerganov commented Dec 12, 2024

@pgorzelany Doing what the CI workflows do (see slaren's comment) should work.

The CI workflows install the llama.cpp binaries into the default system paths so your Swift project will automatically find them. However, you might not always want to do that. Instead, you can build different variants of the binaries (e.g. for iOS, tvOS, macOS, etc.) and install them into custom paths using CMAKE_INSTALL_PREFIX. After that you can point your project to that install location by updating the Build Settings in Xcode. Here is how I configured the llama.swiftui example on my machine:

[screenshot: Build Settings of the llama.swiftui example, with the search paths pointing at the custom install location]

The process is a bit more involved than before, but it is more flexible and much easier to maintain. It would be useful to have step-by-step instructions added to the example, but I don't have much experience working with Xcode (there is stuff like code signing, development teams, etc.), so I am hoping that people who are familiar will contribute and explain how to build a project correctly.

So atm, if you are looking for a point-and-click solution - there isn't one yet. You will need to understand how CMake works and start using it.
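
For illustration, a minimal sketch of that flow, assuming a custom install prefix of ~/llama-install (the prefix and the exact Build Settings entries are illustrative, not the project's documented procedure):

# configure and build llama.cpp, installing into a custom prefix (illustrative path)
cmake -B build -G Xcode \
    -DGGML_METAL_EMBED_LIBRARY=ON \
    -DLLAMA_BUILD_EXAMPLES=OFF \
    -DLLAMA_BUILD_TESTS=OFF \
    -DLLAMA_BUILD_SERVER=OFF \
    -DCMAKE_INSTALL_PREFIX="$HOME/llama-install"
cmake --build build --config Release -j "$(sysctl -n hw.logicalcpu)"
cmake --install build --config Release

# then, in the Xcode target's Build Settings, point the search paths at that prefix:
#   Header Search Paths  -> $HOME/llama-install/include
#   Library Search Paths -> $HOME/llama-install/lib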

@pgorzelany

Thank you. Once I understand how to properly set it up, I will try to contribute some documentation around it. This project is used in multiple iOS and macOS apps and it was very convenient to use it via the Package.swift file; maybe there is a way to modify the Package.swift so it works again.

@Animaxx

Animaxx commented Dec 20, 2024

Hi @ggerganov, after following the steps in CI, doing the CMake build and install, and updating the search paths to /usr/local/include and /usr/local/lib, I am still getting errors about undefined symbols. Do you have any suggestions?


@Animaxx

Animaxx commented Dec 20, 2024

After running xcodebuild -scheme llama-Package -destination "generic/platform=macOS" and using the Swift package in the project, the app is able to build, but I get a "Library not loaded" error at runtime.

dyld[61825]: Library not loaded: @rpath/libggml.dylib
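
A general way to investigate a dyld error like that (a diagnostic sketch only; the app name, bundle layout, and paths are illustrative placeholders, not a verified fix for this setup):

# check which dylibs the app binary links against and which rpaths it searches
otool -L MyApp.app/Contents/MacOS/MyApp
otool -l MyApp.app/Contents/MacOS/MyApp | grep -A2 LC_RPATH

# one possible workaround: copy the dylib into the bundle and add a matching rpath
mkdir -p MyApp.app/Contents/Frameworks
cp /usr/local/lib/libggml.dylib MyApp.app/Contents/Frameworks/
install_name_tool -add_rpath "@executable_path/../Frameworks" MyApp.app/Contents/MacOS/MyApp
codesign --force --sign - MyApp.app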

@Animaxx

Animaxx commented Dec 20, 2024

For the iOS build, I get an error like "Building for 'iOS-simulator', but linking in dylib (/usr/local/lib/libggml.dylib) built for 'macOS'" during the build.

Here is the script I am using to update and build:

if [ ! -d "llama.cpp" ]; then
    git clone https://github.com/ggerganov/llama.cpp
    cd ./llama.cpp
else
    cd ./llama.cpp
    git pull
fi

rm -rf build
mkdir build
cd build
cmake -G Xcode .. \
    -DGGML_METAL_USE_BF16=ON \
    -DGGML_METAL_EMBED_LIBRARY=ON \
    -DLLAMA_BUILD_EXAMPLES=OFF \
    -DLLAMA_BUILD_TESTS=OFF \
    -DLLAMA_BUILD_SERVER=OFF \
    -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64"
cmake --build . --config Release -j $(sysctl -n hw.logicalcpu)
sudo cmake --install . --config Release

# build for swift package
cd ..
xcodebuild -scheme llama-Package \
    -destination "generic/platform=macOS" \
    -destination "generic/platform=iOS" \
    clean build

Then I use the package in the Xcode project. Please let me know if I missed anything.
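
One likely factor, though this is my assumption rather than something confirmed above: the script only configures a macOS build, so the dylibs installed under /usr/local/lib are macOS binaries and cannot be linked into an iOS or simulator target. A separate iOS-targeted configure would look roughly like this (flags, deployment target, and install prefix are illustrative):

# sketch of an iOS-targeted build installed into its own prefix
cmake -B build-ios -G Xcode \
    -DCMAKE_SYSTEM_NAME=iOS \
    -DCMAKE_OSX_DEPLOYMENT_TARGET=14.0 \
    -DCMAKE_OSX_ARCHITECTURES=arm64 \
    -DGGML_METAL_EMBED_LIBRARY=ON \
    -DLLAMA_BUILD_EXAMPLES=OFF \
    -DLLAMA_BUILD_TESTS=OFF \
    -DLLAMA_BUILD_SERVER=OFF \
    -DCMAKE_INSTALL_PREFIX="$PWD/install-ios"
cmake --build build-ios --config Release
cmake --install build-ios --config Release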

@edisonzf2020

me too

@pgorzelany

Was anyone able to solve this issue? I have an iOS project using llama and can't update to the latest version because of this. Any help appreciated!

@pgorzelany

The way it works now is that you need to build llama.cpp with cmake, and then install it using cmake --install. This should allow swift to find the llama.cpp library. See the way the CI builds the swift example:

llama.cpp/.github/workflows/build.yml

Lines 573 to 592 in 235f6e1

   - name: Build llama.cpp with CMake 
     id: cmake_build 
     run: | 
       sysctl -a 
       mkdir build 
       cd build 
       cmake -G Xcode .. \ 
         -DGGML_METAL_USE_BF16=ON \ 
         -DGGML_METAL_EMBED_LIBRARY=ON \ 
         -DLLAMA_BUILD_EXAMPLES=OFF \ 
         -DLLAMA_BUILD_TESTS=OFF \ 
         -DLLAMA_BUILD_SERVER=OFF \ 
         -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64" 
       cmake --build . --config Release -j $(sysctl -n hw.logicalcpu) 
       sudo cmake --install . --config Release 

   - name: xcodebuild for swift package 
     id: xcodebuild 
     run: | 
       xcodebuild -scheme llama-Package -destination "${{ matrix.destination }}"

Hi @slaren, thank you for the response. The issue is that we are not able to use it as a system library on iOS since we cannot install llama on iOS. On Mac it may work but we need to somehow embed llama into the iOS app bundle if we would like to ship it. Could you advise some possible ways around it?

@ggerganov
Owner

The issue is that we are not able to use it as a system library on iOS since we cannot install llama on iOS. On Mac it may work but we need to somehow embed llama into the iOS app bundle if we would like to ship it.

Could you provide a reference that shipping libraries on iOS is not possible? Seems hard to imagine that there is such a limitation. There must be some workaround. I just don't have a lot of experience and cannot suggest anything atm, but I doubt there is no way to fix this.

@MrMage

MrMage commented Jan 31, 2025

The issue is that we are not able to use it as a system library on iOS since we cannot install llama on iOS. On Mac it may work but we need to somehow embed llama into the iOS app bundle if we would like to ship it.

Could you provide a reference that shipping libraries on iOS is not possible? Seems hard to imagine that there is such a limitation. There must be some workaround. I just don't have a lot of experience and cannot suggest anything atm, but I doubt there is no way to fix this.

Sorry for the drive-by, but my understanding is as follows; maybe it can provide some additional insights to avoid misunderstandings here:

The blocking issue with the build system changes is that, while iOS does support linking against dynamic libraries, it only supports that through the use of Frameworks. I.e., you are not allowed to ship a .dylib file as part of your app bundle on iOS, but you are allowed to essentially ship the same, also dynamically linked, code in a .framework bundle. On macOS, this limitation does not exist; it allows you to ship .dylib files as part of your app bundle.

So I think what needs to be done would be to add options to llama.cpp's CMake build system to have it generate .framework bundles instead of .dylib files. I have no experience with CMake, so I can't provide more details on how to accomplish this, but I hope this at least provides a starting point for investigation.

ChatGPT conversation with extra information (the information about frameworks sounds correct to me, but as always take this with a grain of salt, and I have no idea whether these CMake instructions would work): https://chatgpt.com/share/679ca58b-c498-8008-b9f7-a7527b5fd030

@pgorzelany

Thanks for the investigation. Maybe it's possible to build llama as a static library and embed that in the iOS app?
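
For what it's worth, a static build can be sketched like this, assuming llama.cpp's CMake honors the standard BUILD_SHARED_LIBS switch (untested here); the resulting .a archives would then be linked directly into the app target:

# sketch of a static build installed into its own prefix (paths are illustrative)
cmake -B build-static -G Xcode \
    -DBUILD_SHARED_LIBS=OFF \
    -DGGML_METAL_EMBED_LIBRARY=ON \
    -DCMAKE_INSTALL_PREFIX="$PWD/install-static"
cmake --build build-static --config Release
cmake --install build-static --config Release
# install-static/lib should then contain static archives such as libllama.a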

@ggerganov
Owner

ggerganov commented Jan 31, 2025

@MrMage To add to that, from an earlier discussion, CMake should support building frameworks through CMAKE_FRAMEWORK. So definitely look into this as well.
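
For reference, a minimal sketch of such an invocation; CMAKE_FRAMEWORK is a standard CMake variable that marks shared library targets as frameworks, but this exact combination is untested for llama.cpp (and a comment further down reports it did not work):

# sketch: ask CMake to produce framework bundles instead of plain dylibs
cmake -B build-framework -G Xcode \
    -DBUILD_SHARED_LIBS=ON \
    -DCMAKE_FRAMEWORK=TRUE \
    -DCMAKE_OSX_ARCHITECTURES=arm64
cmake --build build-framework --config Release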

@pgorzelany

pgorzelany commented Jan 31, 2025

The issue is that we are not able to use it as a system library on iOS since we cannot install llama on iOS. On Mac it may work but we need to somehow embed llama into the iOS app bundle if we would like to ship it.

Could you provide a reference that shipping libraries on iOS is not possible? Seems hard to imagine that there is such a limitation. There must be some workaround. I just don't have a lot of experience and cannot suggest anything atm, but I doubt there is no way to fix this.

Here are Apple docs describing the use of a systemLibrary target in a Swift package.

The way I understand it, if you expose llama.cpp as a systemLibrary it has to be actually installed on the system. This is possible on macOS but you can't just install system libraries on iOS. So I am just saying the current Package.swift setup feels incorrect.

You can ship dynamic libraries as Frameworks as discussed above but I am not sure if there is a way to expose them as a Swift package.

One possible route forward would be to build llama using CMake for each Apple platform separately, bundle the results into an XCFramework, and then wrap that in a Swift package as a binaryTarget.
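
For illustration, assembling such an XCFramework from per-platform builds might look roughly like this; the per-platform install paths are assumptions, and this is a sketch of the general xcodebuild mechanism rather than a verified recipe for llama.cpp:

# assuming separate CMake builds were installed into per-platform prefixes (illustrative paths)
xcodebuild -create-xcframework \
    -library install-macos/lib/libllama.dylib -headers install-macos/include \
    -library install-ios/lib/libllama.dylib -headers install-ios/include \
    -output llama.xcframework

The resulting llama.xcframework could then be referenced from Package.swift as a binaryTarget.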

@MrMage

MrMage commented Jan 31, 2025

To add to that, from an earlier discussion (#11113 (comment)), CMake should support building frameworks through CMAKE_FRAMEWORK. So definitely look into this as well.

For what it's worth, I am only using llama.cpp for my macOS app, where dylibs are allowed. So I am personally not affected by this issue, but I figured I'd provide some insight. That being said, I remember trying the CMAKE_FRAMEWORK option when building llama.cpp, and it did not work for me (and I've seen other reports here of the same).

The way I understand it, if you expose llama.cpp as a systemLibrary it has to be actually installed on the system. This is possible on MacOS but you can't just install system libraries on iOS. So I am just saying the current Package.swift setup feels incorrect.

Now, I can't expect my Mac app's users to install llama.cpp on their own, but I found a way around this: I am still adding the llama.cpp Swift package to my macOS app, but I am also adding references to the built dylibs to my app's target, telling Xcode to copy these dylibs into the app's frameworks folder upon build, plus the build settings from above that ensure that Xcode also looks for the same copies of these libraries when e.g. referencing header files.

That way, I can have SwiftPM believe that it's using the system libraries, but in reality ship those libraries as part of the app bundle.

I could imagine that a similar approach might work for iOS — but only if one can get llama.cpp to emit frameworks instead of dylibs. That being said, I could also imagine that building static versions of these libraries might work, but I think the integration with SwiftPM would be more complex then. Similarly, the approach of building an XCFramework would probably be ideal — if one can get llama.cpp's build process to emit one, and if Xcode can be convinced to use that without issues. I guess at that point, one would simply drop the reference to the llama.cpp "system library" package altogether.

@pgorzelany

I tried to make the XCFramework approach work for several hours but wasn’t able to. In the end I forked the project, reverted the Package.swift changes and adjusted it for the latest project structure. It now builds for iOS as before so it’s a potential workaround.

@Animaxx

Animaxx commented Feb 1, 2025

Good to see more discussion on this topic!
