llama : offload to RPC in addition to other backends #7640
Conversation
The way this is supposed to work is that backends need to check the buffer type of the tensors to determine if they can perform the copy, and return `false` if they cannot.
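A minimal sketch of that pattern, assuming a hypothetical backend whose buffers are host-accessible; `example_buffer_cpy_tensor` is an illustrative name, not code from this PR:

```c
#include <string.h>
#include "ggml.h"
#include "ggml-backend.h"

// Hypothetical buffer cpy_tensor callback: handle the copy only when the
// source tensor lives in a host-accessible buffer, otherwise return false so
// that ggml-backend can fall back to a copy through host memory.
static bool example_buffer_cpy_tensor(ggml_backend_buffer_t buffer,
                                      const struct ggml_tensor * src,
                                      struct ggml_tensor * dst) {
    (void) buffer;
    if (ggml_backend_buffer_is_host(src->buffer)) {
        memcpy(dst->data, src->data, ggml_nbytes(src));
        return true;  // copy handled by this backend
    }
    return false;     // foreign buffer type (e.g. an RPC buffer): let the caller fall back
}
```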
It crashes in …
The …
rgerganov#1 should fix it.
@slaren thanks, I have verified your fix and merged it.
…uffer
- always initialize views in the view_src buffer
- add RPC backend to Makefile build
- add endpoint to all RPC object names
Co-authored-by: slaren <slarengh@gmail.com>
@rgerganov, hello, I'm sorry to disturb you. Could you check your PRs more carefully before they are merged to the master branch? I know you are the brother of the original author of the great ggml (which I personally think is another FFmpeg, focused on the AI industry)/llama.cpp/whisper.cpp, and that you have some privileges; I think this is the key reason your PR was approved so quickly, even though its quality might need more review. I'm not sure whether you made incorrect deletions in llama.cpp that caused rebase trouble for another community backend developer (#6869):
GGML/whisper.cpp/llama.cpp is now a 60k+ star project referenced directly or indirectly by many developers, companies, and research institutions; it is no longer a simple personal project. I hope you can check more carefully before your PRs are merged to the master branch. I will apologize sincerely if this is my misunderstanding. Thanks for your understanding.
My changes have been reviewed and the CI run was successful.
I don't have any privileges in this project and my code is reviewed like everybody else's. You'd better have some arguments when saying something about the quality of my changes.
When someone is proposing a change, it is their responsibility to address review comments and make sure that the change can be applied on the current master branch.
My change has been reviewed and approved by a core maintainer and the CI run was successful.
Yes, you are right: "When someone is proposing a change it is their responsibility to address review comments and make sure that the change can be applied on the current master branch." You should have done all the "preparation work" (I don't know how to describe it correctly in English at the moment) for ggml-rpc in this project in your first/very beginning PR, and then focused on internal changes/refinements in the source code of ggml-rpc.cpp. BUT you have kept submitting many PRs/changes to the source code of llama.cpp for the purposes of your ggml-rpc backend since your first ggml-rpc PR got approval; this is what I want to say. This is also the responsibility of the so-called core maintainer, who I know is the maintainer of the ggml backend subsystem. Accordingly, that is the reason I said you have privilege. You can see that my ggml-qnn backend has been stuck in review since 04/24/2024, although I want it to be approved and merged to the master branch so that other community developers can contribute code/ideas to the source code of ggml-qnn.cpp; this is the second reason I said you have privilege. I know the maintainer of the ggml backend subsystem is also a real AI expert and a C/C++ master, but I don't think he was professional in handling your many ggml-rpc PRs (if I were him, your first ggml-rpc PR would not have been approved).
@zhouwg I am having a very hard time understanding what you are complaining about here. It is the responsibility of PR authors to deal with merge conflicts caused by changes merged to the repository. How could it be otherwise? The RPC backend has been subject to the same quality requirements as any other PR merged to llama.cpp. The QNN backend cannot be merged in its current state because it is not a functional backend. It is not correct to say that it has been waiting for review since 4/24/2024, because there is nothing to review until it is a functional backend that people can try. Generally, we will not merge non-functional code in the hope that other people will complete the missing parts. You have two ways to make it functional: implement the missing operations, or wait until ggml-backend implements support for using the CPU backend for ops not supported by a backend. I understand that you are frustrated, but please try to be patient.
I'm sorry for that, and please don't take what I said in this PR to heart (it is just my personal feeling/thoughts, and I still think you are a real AI expert and C/C++ master).
1. The "preparation work" or "HLD (high level design)" work should have been done in the first PR of ggml-rpc, and then the author of ggml-rpc could focus on internal refinements/changes in the source code of ggml-rpc.cpp after that first PR was approved, just like the Intel SYCL backend did. I can handle the merge/rebase conflicts properly, but that is exactly the point I wanted to make. BTW, I really can't understand why this "offload to RPC backend" feature was not done/verified in the very first PR of the ggml-rpc backend; it might be a non-functional backend, in your words.
2. I provided a general approach based on the existing ggml backend subsystem for CPU&GPU / CPU&NPU mixed inference very easily in the PR "refine backend subsystem for CPU&GPU / CPU&NPU mixed inference more easily for a specified GGML backend", but you said it is not correct and will not work, although it works very well in my personal ggml learning project.
3. The QNN backend is more mature, or ggml-rpc is more non-functional, if we compare it to the ggml-rpc backend; this is also a personal feeling.
4. I participate in this great/excellent project for learning, for fun, and to make a little contribution, although I know little about real/hardcore AI tech: I submit PRs for what I can do and what I can handle.
5. The QNN backend works well as expected with whisper.cpp and llama.cpp, and I provide UTs in an Android APK and an Android command-line program, and all the UT cases also work fine as expected, but you still think it is not a functional backend.
6. You are the author/maintainer of the ggml backend subsystem; you have the right to decide whether a specific PR about a ggml backend can be accepted, and I respect your decision.
```diff
 void ggml_backend_view_init(struct ggml_tensor * tensor) {
     GGML_ASSERT(tensor->buffer == NULL);
     GGML_ASSERT(tensor->view_src != NULL);
     GGML_ASSERT(tensor->view_src->buffer != NULL);
     GGML_ASSERT(tensor->view_src->data != NULL);

-    tensor->buffer = buffer;
+    tensor->buffer = tensor->view_src->buffer;
     tensor->data = (char *)tensor->view_src->data + tensor->view_offs;
-    ggml_backend_buffer_init_tensor(buffer, tensor);
+    ggml_backend_buffer_init_tensor(tensor->buffer, tensor);
```
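For illustration, a hypothetical call site under the signature shown above (not code from this PR): the view is always initialized in its `view_src`'s buffer, so no separate buffer is passed.

```c
#include "ggml.h"
#include "ggml-backend.h"

// Hypothetical call site: 'parent' has already been allocated in some backend
// buffer; the view inherits that buffer from its view_src, so the caller no
// longer passes the buffer explicitly.
static void make_view_example(struct ggml_context * ctx, struct ggml_tensor * parent) {
    struct ggml_tensor * view = ggml_view_tensor(ctx, parent);
    ggml_backend_view_init(view);  // the view ends up in parent's (view_src's) buffer
}
```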
This breaks the latest SYCL support. Could I know whether all the other backends assume the buffer is already bound to the right tensor? Of course, I know this is an issue in SYCL itself; we are maintaining SYCL in our spare time and are still begging for official support from the company :) I will look into it and try to fix it soon.
@slaren @rgerganov for awareness
It's the same issue that was fixed for the Vulkan backend in #7806; there are more details there about why this happens and why the change was necessary. The best way to fix this for the SYCL backend would be to remove the extras entirely, in the same way they were removed from the CUDA backend.
we are maintaining SYCL during spare time and are still begging for official support from the company
Having a dedicated machine which runs the CI with the SYCL backend would be very helpful
…ov#7640)" (ggerganov#7981) This reverts commit bde7cd3.
I see that you reverted this merge. Does it mean that it will not work with other backends? I'm using Vulkan instead of CUDA (for many reasons), and actually I can see that …
This patch adds support for offloading layers to RPC servers in addition to other, non-RPC backends. For example, if you build with `-DLLAMA_CUDA=ON -DLLAMA_RPC=ON`, then you can offload to the local GPU and to remote server(s):

```
$ bin/main -m ../models/ggml-model-f16.gguf -p "Hello, my name is" -n 64 -ngl 99 -s 1236 --rpc localhost:50052
...
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA T1200 Laptop GPU, compute capability 7.5, VMM: yes
llm_load_tensors: ggml ctx size = 0,31 MiB
llm_load_tensors: offloading 22 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 23/23 layers to GPU
llm_load_tensors: CPU buffer size = 125,00 MiB
llm_load_tensors: CUDA0 buffer size = 1008,19 MiB
llm_load_tensors: RPC buffer size = 965,16 MiB
..........................................................................................
```
I have tried to follow the existing patterns in `llama.cpp` and introduced device numbers for the RPC servers, which always come last.

When copying tensors, we need to handle the case when `src` and `dst` are not on the same backend. For CUDA I had to build with `-DLLAMA_CUDA_NO_PEER_COPY=ON` to make it work.
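For illustration, a minimal sketch (not the actual llama.cpp code path) of the generic fallback when `src` and `dst` live in different backends and no direct peer copy is available: stage the data through host memory using the public `ggml_backend_tensor_get`/`ggml_backend_tensor_set` APIs.

```c
#include <stdlib.h>
#include "ggml.h"
#include "ggml-backend.h"

// Sketch of a cross-backend tensor copy via a host staging buffer, for the
// case where the two tensors are on different backends (e.g. CUDA and RPC)
// and a direct device-to-device / peer copy is not possible.
static void copy_tensor_via_host(const struct ggml_tensor * src, struct ggml_tensor * dst) {
    const size_t nbytes = ggml_nbytes(src);
    void * staging = malloc(nbytes);
    if (staging == NULL) {
        return;
    }
    ggml_backend_tensor_get(src, staging, 0, nbytes);  // device/remote -> host
    ggml_backend_tensor_set(dst, staging, 0, nbytes);  // host -> device/remote
    free(staging);
}
```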