
[Misc] Support FP8 kv cache scales from compressed-tensors #6528

Merged
mgoin merged 7 commits into vllm-project:main from the compressed-tensors-kv-cache branch on Jul 23, 2024

Conversation

mgoin
Member

@mgoin mgoin commented Jul 18, 2024

Adding the logic for loading models with quantized kv cache scales generated using the compressed-tensors framework.

  • CompressedTensorsConfig now has an optional kv_cache_scheme argument. As of the next compressed-tensors release, this key contains information about the properties of the quantized kv cache.
  • Added an interface BaseKVCacheMethod that both Fp8KVCacheMethod and CompressedTensorsKVCacheMethod inherit from, to prepare the k_scale and v_scale attributes of the quantized Attention layer (a minimal sketch follows this list).
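
A minimal sketch of how such a shared interface could prepare the scale attributes. The method name create_weights and the exact tensor handling are illustrative assumptions, not necessarily the exact vLLM signatures:

```python
import torch


class BaseKVCacheMethod:
    """Illustrative base class: attaches k_scale and v_scale attributes
    to a quantized Attention layer, defaulting to 1.0 (no scaling) until
    the checkpoint scales are loaded."""

    def create_weights(self, layer: torch.nn.Module) -> None:
        layer.k_scale = torch.nn.Parameter(torch.tensor(1.0), requires_grad=False)
        layer.v_scale = torch.nn.Parameter(torch.tensor(1.0), requires_grad=False)


class Fp8KVCacheMethod(BaseKVCacheMethod):
    """Covers kv cache scales coming from the fp8 quantization config."""


class CompressedTensorsKVCacheMethod(BaseKVCacheMethod):
    """Covers kv cache scales described by a compressed-tensors kv_cache_scheme."""
```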

Currently, the loading of the kv cache scales happens inside the load_model method of the model (in this PR only the LLaMa model), just like the previous kv scales. However, it needs to happen before the parameter names get mapped to stacked params, since we are looking for k_proj.output_scale and v_proj.output_scale. This is a bit ugly because we need to copy and paste the helper function into every model to make it able to read quantized kv cache scales.
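
A rough sketch of the kind of per-model helper this paragraph describes, assuming a hypothetical remap_kv_scale_name function applied to each checkpoint weight name inside load_model before the q/k/v projections are folded into the stacked qkv_proj parameter (all names here are illustrative):

```python
from typing import Optional


def remap_kv_scale_name(name: str) -> Optional[str]:
    """Hypothetical helper: map a checkpoint name such as
    'model.layers.0.self_attn.k_proj.output_scale' to the attention
    parameter 'model.layers.0.self_attn.attn.k_scale'. Returns None
    for names that are not kv cache scales."""
    mapping = {
        "k_proj.output_scale": "attn.k_scale",
        "v_proj.output_scale": "attn.v_scale",
    }
    for src, dst in mapping.items():
        if name.endswith(src):
            return name.replace(src, dst)
    return None
```

In load_model, each incoming weight name would be passed through this helper first; if it returns a remapped name, the scale is assigned directly to the attention layer's k_scale/v_scale instead of going through the stacked-parameter mapping.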

Done in collaboration with @dbogunowicz


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs will not trigger a full CI run by default. Instead, they will only trigger fastcheck CI, which consists of only a small and essential subset of tests to quickly catch errors, with the flexibility to run extra individual tests on top (you can do this by unblocking test steps in the Buildkite run).

A full CI run is still required to merge this PR, so once the PR is ready to go, please make sure to run it. If you need all test signals in between PR commits, you can trigger full CI as well.

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

@mgoin mgoin added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Jul 18, 2024
@comaniac comaniac self-assigned this Jul 22, 2024
Collaborator

@comaniac comaniac left a comment


Overall LGTM

@mgoin mgoin enabled auto-merge (squash) July 23, 2024 02:38
@mgoin mgoin merged commit 9e0b558 into vllm-project:main Jul 23, 2024
73 checks passed
@mgoin mgoin deleted the compressed-tensors-kv-cache branch July 23, 2024 04:11
cduk pushed a commit to cduk/vllm-pascal that referenced this pull request Aug 6, 2024
kylesayrs pushed a commit to neuralmagic/vllm that referenced this pull request Aug 17, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024