[Feature]: Reduce LoRA latency via speculative decoding #6912
Comments
I took a first pass. Admittedly there's a lot here I'm not familiar with, but I'd really like this feature, so I'll invest some time into it and see if I can make progress. If anyone else is interested, happy to collaborate.
Awesome! I also recommend checking out https://www.youtube.com/watch?v=9wNAgpX6z_4 if you're new to speculative decoding in vLLM.
May I ask how soon this feature will be supported? @cadedaniel
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!
🚀 The feature, motivation and pitch
The speculative decoding framework allows the target model to have LoRAs; however, the work to set up batch expansion has not yet been done. We can implement batch expansion for LoRA and thereby enable speculative decoding for LoRA.
The work required is essentially to implement batch expansion while passing the LoRA arguments through, as sketched below. See “Let’s talk about code” in the following notes: https://docs.google.com/document/d/1z4Tgb1FcDr3YXvFPelyn-T-DEnLqqrlrxRi3TvIyAmg/edit
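In code terms, the change is to carry each sequence group's `lora_request` along when the scorer expands the batch with speculative tokens. Below is a minimal standalone sketch of that idea; the dataclasses are simplified stand-ins for vLLM's `SequenceGroupMetadata` and `LoRARequest`, and the function and field names are illustrative assumptions, not the exact vLLM internals:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LoRARequest:
    """Simplified stand-in for vllm.lora.request.LoRARequest."""
    lora_name: str
    lora_int_id: int
    lora_path: str

@dataclass
class SeqGroupMetadata:
    """Simplified stand-in for vLLM's SequenceGroupMetadata."""
    request_id: str
    token_ids: List[int]
    lora_request: Optional[LoRARequest] = None

def expand_for_scoring(seq_group: SeqGroupMetadata,
                       proposal_token_ids: List[int]) -> List[SeqGroupMetadata]:
    """Batch expansion: create one scoring sequence per speculative prefix
    (lengths 0..k), so the target model can verify every proposed token in a
    single forward pass. The LoRA request is passed through unchanged, which
    is the missing piece this issue asks for."""
    expanded = []
    for k in range(len(proposal_token_ids) + 1):
        expanded.append(SeqGroupMetadata(
            request_id=f"{seq_group.request_id}-spec-{k}",
            token_ids=seq_group.token_ids + proposal_token_ids[:k],
            lora_request=seq_group.lora_request,  # the LoRA pass-through
        ))
    return expanded

# Example: a request using adapter id 1, with 3 proposed tokens,
# expands into 4 scoring sequences that all keep the same adapter.
group = SeqGroupMetadata("req-0", [1, 2, 3],
                         LoRARequest("sql-lora", 1, "/path/to/sql-lora"))
assert all(g.lora_request.lora_int_id == 1
           for g in expand_for_scoring(group, [4, 5, 6]))
```

In vLLM itself this would live in the batch-expansion scorer (`vllm/spec_decode/batch_expansion.py`) rather than in a free function, but the essential change is the same single pass-through.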
I expect this to work well for larger models (e.g. 70B), but it will be harder for smaller models due to latency constraints and vLLM overheads. Perhaps with a speculator like ngram / EAGLE / MLPSpeculator it can work for 7B models as well (see the sketch below).
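For reference, once batch expansion passes LoRA through, end-to-end usage could look like the existing LoRA and speculative-decoding APIs combined. The argument names below follow vLLM's current offline API, but combining them is exactly what this issue proposes, so treat this as a sketch of the intended end state rather than something that works today; the base model, adapter name, and adapter path are placeholders:

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Hypothetical end state: ngram speculative decoding on the target model
# while each request carries its own LoRA adapter.
llm = LLM(
    model="meta-llama/Llama-2-7b-hf",
    enable_lora=True,
    speculative_model="[ngram]",   # draft-model-free ngram prompt lookup
    num_speculative_tokens=4,
    ngram_prompt_lookup_max=4,
    use_v2_block_manager=True,     # spec decode requires the v2 block manager
)

outputs = llm.generate(
    ["Write a SQL query that counts users per country."],
    SamplingParams(temperature=0.0, max_tokens=64),
    # Placeholder adapter; any LoRA trained for the base model would do.
    lora_request=LoRARequest("sql-lora", 1, "/path/to/sql-lora"),
)
print(outputs[0].outputs[0].text)
```

An ngram speculator is a natural first pairing here because it adds no draft-model weights, so the only new per-request state to reconcile with LoRA is the expanded batch itself.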
Note this work does not include applying LoRA to the speculator; that can be future work.
Alternatives
No response
Additional context
No response