
Use params when loading models in llava-cli #3976

Merged (1 commit) on Nov 7, 2023

Conversation

@tejom (Contributor) commented Nov 7, 2023

llava-cli was loading models with default params, ignoring the settings passed on the CLI. This switches to a generic function that loads the params from the CLI options.

@tejom marked this pull request as ready for review on November 7, 2023, 07:29
@tejom (Contributor, Author) commented Nov 7, 2023

Hey, small PR here. I wrote a quick fix when I noticed that the model wasn't using my GPU for offloading layers even though I had the setting on the CLI.

@monatis (Collaborator) commented Nov 7, 2023

Thanks, a regression in #3613

@monatis monatis merged commit 54b4df8 into ggerganov:master Nov 7, 2023
@tejom (Contributor, Author) commented Nov 7, 2023

Np, appreciate the quick turnaround time!

olexiyb pushed a commit to Sanctum-AI/llama.cpp that referenced this pull request Nov 23, 2023