[Misc] Support FP8 kv cache scales from compressed-tensors #6528
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Full CI run is still required to merge this PR so once the PR is ready to go, please make sure to run it. If you need all test signals in between PR commits, you can trigger full CI as well. To run full CI, you can do one of these:
Overall LGTM
Files with resolved review comments:
- vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py (outdated)
- vllm/model_executor/layers/quantization/compressed_tensors/utils.py (outdated)
- vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py
This PR adds the logic for loading models with quantized kv cache scales generated by the compressed-tensors framework. `CompressedTensorsConfig` now has an optional `kv_cache_scheme` argument; as of the next compressed-tensors release, this key describes the properties of the quantized kv cache (an illustrative sketch follows).
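For illustration only, a `kv_cache_scheme` entry could look roughly like the sketch below. The field names mirror compressed-tensors' `QuantizationArgs`, but the exact schema of the released format is an assumption here, not something stated in this PR.

```python
# Hypothetical sketch of a kv_cache_scheme entry, written as a Python dict
# (in a checkpoint it would live inside the serialized quantization config).
# Field names follow compressed-tensors' QuantizationArgs; the released
# schema may differ.
kv_cache_scheme = {
    "num_bits": 8,         # FP8 kv cache
    "type": "float",       # float8 rather than int8
    "strategy": "tensor",  # one scale per tensor, i.e. a single k_scale / v_scale
    "dynamic": False,      # scales calibrated offline and stored in the checkpoint
    "symmetric": True,
}

# CompressedTensorsConfig would then receive it through the new optional argument:
# CompressedTensorsConfig(..., kv_cache_scheme=kv_cache_scheme)
```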
The PR also introduces a `BaseKVCacheMethod` base class that both `Fp8KVCacheMethod` and `CompressedTensorsKVCacheMethod` inherit from, used to prepare the `k_scale` and `v_scale` attributes of the quantized `Attention` layer.
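A minimal sketch of what such a shared base class could look like, assuming a `create_weights` / `process_weights_after_loading` style interface like other vLLM quantization methods; the actual class in this PR may differ in names and details:

```python
import torch
from torch.nn import Module, Parameter


class BaseKVCacheMethod:
    """Sketch of a common base for Fp8KVCacheMethod and
    CompressedTensorsKVCacheMethod: it registers k_scale / v_scale on the
    Attention layer so that checkpoint scales can be loaded into them."""

    def __init__(self, quant_config):
        self.quant_config = quant_config

    def create_weights(self, layer: Module) -> None:
        # Start from 1.0 (no scaling); real scales may be loaded from the
        # checkpoint afterwards.
        layer.k_scale = Parameter(torch.tensor(1.0), requires_grad=False)
        layer.v_scale = Parameter(torch.tensor(1.0), requires_grad=False)

    def process_weights_after_loading(self, layer: Module) -> None:
        # Once loading is done, replace the Parameters with plain tensors
        # that the attention kernels can consume directly.
        k_scale, v_scale = layer.k_scale.data, layer.v_scale.data
        del layer.k_scale, layer.v_scale  # drop the Parameter registrations
        layer.k_scale = k_scale
        layer.v_scale = v_scale
```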
Currently, the loading of the kv cache scales happens inside the `load_model` method of the model (in this PR, only the LLaMA model), just like the previous kv scales. However, it needs to happen before the parameter names get mapped to stacked params, since we are looking for `k_proj.output_scale` and `v_proj.output_scale`. This is a bit ugly because we need to copy and paste the helper function into every model that should read a quantized kv cache (see the sketch below).
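The helper in question might look roughly like the following sketch. The function name `maybe_remap_kv_scale_name` and the exact return convention are assumptions for illustration, not necessarily what the PR implements:

```python
from typing import Optional


def maybe_remap_kv_scale_name(name: str, params_dict: dict) -> Optional[str]:
    """Hypothetical helper: map compressed-tensors checkpoint names such as
    `...self_attn.k_proj.output_scale` to the attention layer's
    `...self_attn.attn.k_scale` parameter.  It has to run before the
    q/k/v_proj -> qkv_proj stacked-parameter remapping, because afterwards
    the k_proj / v_proj prefixes no longer exist."""
    for src, dst in (("k_proj.output_scale", "attn.k_scale"),
                     ("v_proj.output_scale", "attn.v_scale")):
        if name.endswith(src):
            remapped = name.replace(src, dst)
            # Drop the scale if the model has no matching parameter
            # (e.g. kv cache quantization is disabled for this run).
            return remapped if remapped in params_dict else None
    return name  # not a kv cache scale; keep the original name
```

Inside the model's weight-loading loop, every checkpoint name would be passed through a helper like this first; a `None` result means the scale is skipped, and anything else is loaded as usual.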
Done in collaboration with @dbogunowicz.