termux build issues #389
Comments
I resolved the ninja installation with termux-chroot. Also added the following to |
Looks to be related to scikit-build/cmake-python-distributions#223. The longer-term solution seems to be migrating the project to scikit-build-core (something I'm in the process of doing). However, someone in that thread mentioned they were able to get their example to work by building cmake and ninja from source on the Android device. |
Also tried the following: Here's the output: Is the install successful? Python:
|
while trying to make it work on termux i ended up writing a small [llama.cpp wrapper](https://github.com/SubhranshuSharma/The_Elite_Order_of_Pure_Thought/tree/cb063f5ac0d8c8a26bc75f4f0de18cc8cee062ef/discord/termux) myself, instructions to use which are [here](https://github.com/SubhranshuSharma/The_Elite_Order_of_Pure_Thought/tree/cb063f5ac0d8c8a26bc75f4f0de18cc8cee062ef#linuxtermux). At the time llama.cpp cache files could be read but not generated/edited in termux, but that problem is sorted now, so put cache_is_supported_trust_me_bro=True in discord/termux/settings.py to use it. [These highlighted lines](https://github.com/SubhranshuSharma/The_Elite_Order_of_Pure_Thought/blob/cb063f5ac0d8c8a26bc75f4f0de18cc8cee062ef/discord/termux/the_elite_bot_termux.py#L40C1-L51C71) are all you need to make a wrapper of your own. |
Is it compatible with babyagi and other stuff that needs the llama python wrapper?
|
nope, my wrapper is tightly integrated with my use case and isn't a separate installable package. I was thinking of doing so, but it would be too much of a headache |
@SubhranshuSharma so for your wrapper you just built libllama.so separately, correct? Did you just build llama.cpp with the Makefile? Maybe one solution is to avoid building llama.cpp on install by setting an environment variable / path to a pre-built library. |
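For reference, a minimal sketch of the Makefile route being asked about here; this is not from the thread, and it assumes llama.cpp's Makefile exposes a libllama.so target:

```sh
# Sketch only: build llama.cpp's shared library without the example binaries.
# Assumes the Makefile provides a libllama.so target.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make libllama.so
```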
@abetlen Yeah, I run I suggest: try building llama.cpp but don't crash if you can't, just check if the |
Any progress on this? |
|
@SubhranshuSharma @Freed-Wu implementing this in #499, but I still have some issues with macOS |
unrelated question: is there any way of storing cache files on disk for quick reboot in the API?
I would still suggest treating this repo and llama.cpp as different things and not letting failure in one stop the other (for as long as that's possible): make the compilation a try/except/pass, and if the compile fails, force the user to set a system variable pointing to llama.cpp. I would also suggest giving the system variable first priority, as in: if the system variable is set, it is used before anything else. That way the project will be more robust, letting people find workarounds to issues that originate in llama.cpp. |
@SubhranshuSharma sorry for this very late reply, but I finally merged in #499. You can now set |
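The name of the setting is cut off above. As a sketch of how such an override could be used at runtime, assuming the variable is called LLAMA_CPP_LIB and that it points at an already-built shared library (both the name and the path below are assumptions, not taken from the thread):

```sh
# Assumed variable name and path, for illustration only: tell the bindings to
# load a pre-built libllama.so instead of the one compiled during pip install.
export LLAMA_CPP_LIB="$HOME/llama.cpp/libllama.so"
python -c "from llama_cpp import Llama"   # should pick up the library set above
```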
So can we close this issue now? |
@Freed-Wu, @abetlen, @SubhranshuSharma
But when I run simply by For pyinstaller, do I need to generate the shared library separately and then use But I am using termux, plain terminal. |
This is the error I still get in termux when running. Am I missing something? |
@SubhranshuSharma if you want to build the cmake module, see termux/termux-packages#10065, or build without it
repeat steps 1-2 |
Is the python library working for anyone? @Freed-Wu is this related to adding the original llama.cpp to the termux package manager? If yes: llama.cpp was working out of the box on termux anyway, that's why I could make my own use-case-specific python wrapper around it. To quote myself:
@romanovj this solution of yours did install cmake without errors, now and your second solution worked, and |
@SubhranshuSharma reinstall cmake. Also, you can copy the compiled cmake wheel to the ~/wheels folder and install modules with You need to build shared libs for llama.cpp like this (a sketch is reconstructed below), then export the path to libllama.so
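The commands referred to by "like this" did not survive the formatting; a reconstruction of what a shared-library build plus export could look like (the CMake flags, output path, and LLAMA_CPP_LIB name are assumptions, not the original snippet):

```sh
# Reconstructed sketch, not the original snippet.
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=ON     # build llama.cpp as a shared library
cmake --build build --config Release
export LLAMA_CPP_LIB="$PWD/build/libllama.so"   # output path may differ per version
```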
ok |
Hey guys, I was finally able to put some time into this, the following worked for me:
pkg install python-pip python-cryptography cmake ninja autoconf automake libandroid-execinfo patchelf
MATHLIB=m CFLAGS=--target=aarch64-unknown-linux-android33 LDFLAGS=-lpython3.11 pip install numpy --force-reinstall --no-cache-dir
pip install llama-cpp-python --verbose |
@abetlen it works inconsistently; on a clean termux install with python installed I usually also have to install This works more consistently for me, keep selecting the default answers to prompts:
pkg install libexpat openssl python-pip python-cryptography cmake ninja autoconf automake libandroid-execinfo patchelf
pkg update && pkg upgrade
pip install llama-cpp-python
then run |
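The command after "then run" is missing from the capture; a minimal smoke test one could run at that point, purely as an illustrative assumption:

```sh
# Illustrative only: check that the freshly installed bindings import cleanly.
python -c "import llama_cpp; print(llama_cpp.__version__)"
```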
Does the llama-cpp-server work in termux? |
@pinyaras the original llama.cpp has its own server, which is stable
pkg install rust
pip install llama-cpp-python[server] |
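For completeness, a sketch of starting the bundled Python server once that install succeeds, using the module entry point llama-cpp-python documents for its server extra; the model path is a placeholder:

```sh
# Start the OpenAI-compatible server shipped with llama-cpp-python[server];
# replace the model path with a local GGML/GGUF file of your own.
python -m llama_cpp.server --model ~/models/7B/ggml-model.gguf
```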
pip install llama-cpp-python doesn't work on termux and gives this error; even if ninja is installed it gives the same error, might be a hardcoded absolute path problem
I have tried to use a portable venv setup on my linux machine, but running it on termux gave a dependency-not-found error, so maybe some paths in the source code were still absolute (even after the correction attempts in the blog post)
I tried using pyinstaller, but it doesn't support this library yet; same missing-dependency issue
another option is to use docker on termux, but that requires root privileges and a custom kernel
I have tried to look into the source code of this repo but don't know where to start, any hint on where to start?
the original llama.cpp library works fine on termux but doesn't have a server built in, and doesn't work well unless using bash
should I make a pull request editing the readme, linking to the docker workaround for rooted phones?