Feature Request: RPC Cuda Build to link with cudart dlls #8912
Labels: enhancement (New feature or request)
Comments
jkfnc changed the title from "Feature Request: Req for RPC Cuda Build to link with cudart dlls" to "Feature Request: RPC Cuda Build to link with cudart dlls" on Aug 7, 2024
I think it would be ok to enable the RPC backend on all the builds, and remove the RPC-specific build. It should be a simple change in the …

I will submit a patch for this in the next few days.
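In the meantime, a minimal sketch of building from source with both backends enabled, assuming the CMake option names GGML_CUDA and GGML_RPC (these names have changed across versions, so check the repository docs for your tree):

```sh
# Build llama.cpp with both the CUDA and RPC backends enabled.
# GGML_CUDA / GGML_RPC are the option names at the time of writing;
# older trees used LLAMA_-prefixed options instead.
cmake -B build -DGGML_CUDA=ON -DGGML_RPC=ON
cmake --build build --config Release
```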
rgerganov added a commit to rgerganov/llama.cpp that referenced this issue on Aug 12, 2024
rgerganov added a commit that referenced this issue on Aug 12, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this issue on Nov 15, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this issue on Nov 18, 2024
Prerequisites
Feature Description
Is it possible to get RPC builds that work with CUDA, so we don't have to compile from scratch?
Motivation
It would be much easier to get started with RPC for distributed inference if a ready-made build were available. The current prebuilt RPC binaries only work with the CPU backend.
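For context, distributed inference with the RPC backend roughly works as sketched below, based on the documented workflow of the rpc-server example; the host addresses, port, and model path are placeholders:

```sh
# On each worker machine: start an RPC server exposing the local backend
bin/rpc-server -H 0.0.0.0 -p 50052

# On the main machine: offload layers across the workers over RPC
bin/llama-cli -m model.gguf -ngl 99 --rpc 192.168.1.10:50052,192.168.1.11:50052
```

With a CUDA-enabled rpc-server binary, each worker would serve its GPU instead of the CPU, which is what this request asks to be available without a local compile.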
Possible Implementation
No response