Optimised 4bit inference kernels #28568
Comments
Thanks! We also have HQQ in the backlog (#28328), but we are waiting to finalize #26610 from @poedator before adding any new quantization scheme. cc @Titus-von-Koeller, just FYI
@qwopqwop200 seems to be working on adding Marlin to AutoGPTQ. If it is merged, we will also get support in transformers quite easily: https://github.com/qwopqwop200/AutoGPTQ-add-marlin
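If that branch lands, loading a GPTQ checkpoint onto the Marlin kernels could look roughly like the sketch below. This is an assumption based on the linked work-in-progress branch, not a merged API: in particular the `use_marlin` flag is hypothetical, and the checkpoint name is only an example of a model quantized with the settings Marlin requires.

```python
# Hypothetical sketch: assumes the linked AutoGPTQ branch exposes a
# `use_marlin` flag on from_quantized (not merged at the time of writing).
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

# Example checkpoint; Marlin needs 4-bit, symmetric, group_size=128, no act-order.
model_id = "TheBloke/Llama-2-7B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",
    use_marlin=True,  # assumed flag: repack the weights for the Marlin kernel
)

inputs = tokenizer("Hello, Marlin!", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```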
Yes, replacing the layers is pretty much it. It might also be possible to write a (not too complex) kernel that converts a GPTQ-format model (group size 128, symmetric, no act-order; or any other quantization method that produces such models) to the Marlin format on the fly (when loading the model) in reasonable time, which would make it possible to keep only a single storage format. However, I am not sure how many of the current GPTQ models on the Hub already use the settings required by Marlin.
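For illustration, such a load-time conversion could be structured like this. Everything here is a sketch: `dequantize_gptq` is a hypothetical stand-in helper, and `marlin.Layer` / `pack` follow the IST-DASLab/marlin README, whose exact signatures may differ.

```python
import torch
import torch.nn as nn
import marlin  # https://github.com/IST-DASLab/marlin

def convert_gptq_to_marlin(qlayer):
    # 1) Recover fp16 weights from the GPTQ layer (hypothetical helper;
    #    real code would unpack qweight/qzeros/scales here).
    weight = dequantize_gptq(qlayer)   # shape (out_features, in_features), fp16
    scales = qlayer.scales             # one fp16 scale per 128-input group

    # 2) Wrap the weights in a plain nn.Linear so they can be repacked.
    lin = nn.Linear(weight.shape[1], weight.shape[0], bias=False)
    lin.weight.data = weight

    # 3) Repack into Marlin's interleaved 4-bit layout
    #    (Layer/pack as described in the Marlin README; signatures assumed).
    mlayer = marlin.Layer(weight.shape[1], weight.shape[0], groupsize=128)
    mlayer.pack(lin, scales)
    return mlayer

def swap_gptq_layers(model: nn.Module):
    # Recursively replace every GPTQ QuantLinear with a Marlin layer.
    for name, child in model.named_children():
        if child.__class__.__name__ == "QuantLinear":
            setattr(model, name, convert_gptq_to_marlin(child))
        else:
            swap_gptq_layers(child)
```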
Any update on this feature?
I will have a look at it soon! Since it is available in AutoGPTQ, the integration should be straightforward!
Any update? |
Feature request
Integration of the new 4-bit Marlin kernels:
https://github.com/IST-DASLab/marlin
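For reference, using the kernel directly (outside transformers) looks roughly like this, per the repo README. A minimal sketch only: `marlin.Layer` and `pack` are named in the README, but the exact constructor arguments and the scale layout used here are assumptions.

```python
import torch
import marlin  # built from https://github.com/IST-DASLab/marlin

in_features, out_features, batch = 4096, 4096, 32

# Pack an fp16 linear into Marlin's 4-bit format.
fp16 = torch.nn.Linear(in_features, out_features, bias=False).half()
# Assumed layout: one fp16 scale per 128-input group per output channel.
scales = torch.ones(in_features // 128, out_features, dtype=torch.half)
layer = marlin.Layer(in_features, out_features, groupsize=128)
layer.pack(fp16, scales)  # assumed signature: pack(linear, scales)
layer = layer.cuda()

x = torch.randn(batch, in_features, dtype=torch.half, device="cuda")
y = layer(x)  # fused fp16-activation x int4-weight matmul
print(y.shape)  # torch.Size([32, 4096])
```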
Motivation
Marlin provides faster inference than the AWQ/ExLlama kernels for batch sizes up to 32.
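That claim is easy to sanity-check locally. Below is a minimal timing harness; it is pure PyTorch and runnable as-is against an fp16 baseline, and the same `bench` call can be repeated with a Marlin-packed layer in place of the fp16 one to compare.

```python
import torch

def bench(layer, x, iters=100):
    # Average latency in ms per forward pass, timed with CUDA events.
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    for _ in range(10):  # warmup
        layer(x)
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        layer(x)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

fp16 = torch.nn.Linear(4096, 4096, bias=False).half().cuda()
for batch in (1, 8, 16, 32):
    x = torch.randn(batch, 4096, dtype=torch.half, device="cuda")
    print(f"batch={batch}: {bench(fp16, x):.3f} ms")
    # ...then time the Marlin layer on the same inputs and compare.
```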
Your contribution
Just saw this today; I can try to provide a sample notebook.