Replies: 1 comment
@BWShor Contributions that enhance Jupyter AI's use of models like …
I'm new to GitHub and don't know where to post this, but I was able to get GPT4All to use the GPU, as well as the "Reasoner v1" model, by making the following changes to jupyter-ai-magics/providers.py:
Add a setdefault("device", ...) call right after the line that sets the n_threads argument:
kwargs["n_threads"] = max(int(n_threads), 1)
kwargs.setdefault("device", "kompute")
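For context, here is a minimal, self-contained sketch of what that kwargs-building step looks like with the patch applied. The function name and signature are hypothetical (a simplified reconstruction, not the actual providers.py code); the point is that setdefault only fills in "kompute" when the caller hasn't already chosen a device:

```python
def build_gpt4all_kwargs(n_threads=None, **kwargs):
    """Sketch of the patched kwargs setup for the GPT4All provider."""
    if n_threads is not None:
        # Mirrors the original line: clamp the thread count to at least 1.
        kwargs["n_threads"] = max(int(n_threads), 1)
    # The patch: default to the Kompute (Vulkan) GPU backend, without
    # overriding an explicit choice such as device="cpu".
    kwargs.setdefault("device", "kompute")
    return kwargs
```

Because setdefault is used rather than plain assignment, anyone who explicitly passes device="cpu" (or another backend) keeps their setting.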
Add the qwen2.5-coder-7b-instruct-q4_0 line at the end of the models list:
"qwen2.5-coder-7b-instruct-q4_0",
# "qwen2-1_5b-instruct-q4_0" - still tinkering with the 1_5b-instruct model; I haven't actually tested it yet
If this is useful to anyone, please let me know!