
Error when running make start #1

Closed
tuscland opened this issue Dec 14, 2023 · 6 comments
Comments

@tuscland (Member)

Hi Guillaume,

I successfully installed the requirements with Python 3.11 running in Docker (Apple Silicon).
When I run make start in the app directory, I get the following error:

ModuleNotFoundError: No module named 'llama_cpp'
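As an aside, a small guard like the following (my own sketch, not code from the repository) would turn the bare ModuleNotFoundError into an actionable message:

```python
# Sketch (my own, not from the repo): fail with an install hint when the
# llama-cpp-python package is missing instead of a bare ModuleNotFoundError.
def require_llama_cpp():
    """Import llama_cpp or fail with an install hint."""
    try:
        import llama_cpp  # shipped by the llama-cpp-python package
        return llama_cpp
    except ModuleNotFoundError as exc:
        raise RuntimeError(
            "llama_cpp is not installed; run `pip install llama-cpp-python` "
            "(or add it to the environment) and retry."
        ) from exc
```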

After installing llama_cpp with pip install llama-cpp-python, I ran into another issue:

FileNotFoundError: [Errno 2] No such file or directory: '../models/api_semantic_retrieval.joblib'

I am not sure where I can get this file.
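For what it's worth, a small existence check (my own sketch; the path is taken from the traceback above) would make this failure self-explanatory:

```python
from pathlib import Path

# Sketch (my own, not from the repo): check for the trained artifact before
# loading it, so the failure says what to do instead of a bare FileNotFoundError.
def load_retriever(path="../models/api_semantic_retrieval.joblib"):
    model_path = Path(path)
    if not model_path.exists():
        raise FileNotFoundError(
            f"{model_path} not found; train the retriever first "
            "(the repo's training step produces this file)."
        )
    import joblib  # imported lazily so the check above runs even without joblib
    return joblib.load(model_path)
```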

Cam

@glemaitre (Member)

I have not written the documentation yet :)

I intend to add a documentation page describing how to set up the repo and train the retriever before launching the app.

The install guide is still in progress. While this is quite feasible on CPU, it is difficult to provide something reliable for GPU. At least it should be feasible with MPS on macOS. I'll check if I can get something easy working.

@tuscland (Member, author)

Looking forward to trying your LLM! :)

@glemaitre (Member) commented Dec 17, 2023

You can have a look at the following: https://glemaitre.github.io/sklearn-ragger-duck/install.html

Let's see if pixi makes things reproducible, at least on macOS M1/M2.

@tuscland (Member, author)

Almost there!
I am running a Docker container under the linux-aarch64 architecture.

I added linux-aarch64 to the platforms directive in pixi.toml.
Unfortunately, no llama-cpp-python package is available for this architecture through pixi.
So I installed it with pip instead, which seemed to work fine (I added llama-cpp-python to requirements.txt).
I also had to comment out the line in pixi.toml that references llama-cpp-python.
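For context, the pixi.toml changes described above look roughly like this (a sketch; the exact platform list and dependency spec in the repo may differ):

```toml
[project]
# linux-aarch64 added alongside the platforms already declared in the repo
platforms = ["osx-arm64", "linux-64", "linux-aarch64"]

[dependencies]
# no linux-aarch64 build is available through pixi, so this line is
# commented out and the package is installed via pip (requirements.txt)
# llama-cpp-python = "*"
```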

Then I stumbled on prefix-dev/pixi#234.
So I copied the files into a volume (instead of a bind mount from the macOS filesystem).

pixi run train-retrievers ran fine until this happened:
RuntimeError: PyTorch is not linked with support for mps devices

Am I looking for trouble trying that in Docker? Yes :)

MPS (Metal Performance Shaders) is a macOS-specific feature and as such is not available on linux-aarch64 (even when running on Apple Silicon).
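That mismatch can be detected up front; a minimal sketch, assuming PyTorch is installed (and degrading to CPU when it is not):

```python
def pick_device():
    """Return "mps" when PyTorch reports Metal support, else "cpu".

    On Linux (including linux-aarch64 under Docker on Apple Silicon),
    torch.backends.mps.is_available() is False because MPS needs macOS.
    """
    try:
        import torch
    except ModuleNotFoundError:
        return "cpu"  # without PyTorch there is nothing to probe
    mps = getattr(torch.backends, "mps", None)  # absent on old PyTorch builds
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"
```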

I'll get back to you when I have tried on bare metal.

@glemaitre (Member)

Am I looking for trouble trying that in Docker?

Normally that is the point of using pixi: we don't need to bother with Docker and we still get a contained environment :). On Linux, you then need to set DEVICE="cpu" in the config files since MPS is not supported.
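For illustration, here is a sketch of how such a DEVICE setting could be validated (resolve_device is my own helper name, not code from the repo):

```python
def resolve_device(configured="cpu"):
    """Validate a DEVICE value like the one in the config files.

    "cpu" always works; "mps" is accepted only when PyTorch reports the
    Metal backend as available (macOS with a Metal-enabled build).
    """
    if configured == "cpu":
        return "cpu"
    if configured == "mps":
        try:
            import torch
        except ModuleNotFoundError:
            raise ValueError("DEVICE='mps' needs PyTorch; set DEVICE='cpu'.")
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"
        raise ValueError(
            "DEVICE='mps' requested but MPS is unavailable here "
            "(e.g. on Linux); set DEVICE='cpu'."
        )
    raise ValueError(f"Unknown DEVICE {configured!r}; use 'cpu' or 'mps'.")
```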

I could try to declare llama-cpp-python as a PyPI dependency and build it from source, but we need to pass extra environment variables and I don't think pixi supports that right now.

@glemaitre (Member)

The documentation is outdated. I'm closing this issue and I'll update the documentation.
