
[Feature]: Access draft model directly #10790

Open · 1 task done
cduk (Contributor) opened this issue Nov 30, 2024 · 0 comments


🚀 The feature, motivation and pitch

vLLM currently supports speculative decoding, in which a smaller draft model proposes tokens for the main model to verify. Since both models are already loaded in VRAM, it would be helpful to be able to address the draft model directly and request inference from it alone, bypassing the larger model, for cases where speed matters more than quality.

If both models are exposed, an incoming request could specify which model to use and vLLM could route it to the correct one, as in the sketch below.
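A minimal sketch of what this could look like from the client side, assuming the server registered both model names on its OpenAI-compatible endpoint. The routing is the hypothetical part: today the server only answers for the main model's name, and the model names here are placeholders.

```python
# Hypothetical usage sketch. Assumes a vLLM server started with, e.g.:
#   vllm serve meta-llama/Llama-3.1-70B-Instruct \
#       --speculative-model meta-llama/Llama-3.1-8B-Instruct
# and that it exposed *both* names on the OpenAI-compatible endpoint.
# Serving the draft model's name is the proposed feature, not current behavior.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Normal path: speculative decoding, the draft proposes and the target verifies.
full = client.chat.completions.create(
    model="meta-llama/Llama-3.1-70B-Instruct",
    messages=[{"role": "user", "content": "Summarize this ticket."}],
)

# Proposed path: address the already-loaded draft model directly,
# trading quality for latency without loading any extra weights.
fast = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Summarize this ticket."}],
)
```

Because the draft model is already resident in VRAM for speculation, serving it under its own name would add a fast, lower-quality option at essentially no extra memory cost.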

Alternatives

No response

Additional context

No response

Before submitting a new issue...

  • Make sure you have already searched for relevant issues and asked the chatbot at the bottom right corner of the documentation page, which can answer many frequently asked questions.