support to export gguf q4_0 and q4_1 format #393
Conversation
Signed-off-by: n1ck-guo <heng.guo@intel.com>
for more information, see https://pre-commit.ci
Signed-off-by: n1ck-guo <heng.guo@intel.com>
for more information, see https://pre-commit.ci
Signed-off-by: n1ck-guo <heng.guo@intel.com>
for more information, see https://pre-commit.ci
Signed-off-by: n1ck-guo <heng.guo@intel.com>
Signed-off-by: n1ck-guo <heng.guo@intel.com>
Signed-off-by: n1ck-guo <heng.guo@intel.com>
endianness: gguf.GGUFEndian
use_temp_file: bool
lazy: bool
part_names: list[str]
Why not call the gguf code directly?
We cannot. Different model series use different classes to write the gguf file.
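For illustration, a minimal sketch of that per-architecture dispatch, assuming a registry keyed by model architecture; the registry and class names are hypothetical, not the actual auto-round or llama.cpp symbols.

# Hypothetical dispatch: each model series maps to its own gguf writer class,
# so a single direct call into the gguf code would not cover all models.
GGUF_WRITER_REGISTRY = {
    "LlamaForCausalLM": "LlamaGGUFWriter",      # illustrative names only
    "Qwen2ForCausalLM": "Qwen2GGUFWriter",
    "MistralForCausalLM": "MistralGGUFWriter",
}

def pick_gguf_writer(architecture: str) -> str:
    """Return the writer class name registered for this model architecture."""
    if architecture not in GGUF_WRITER_REGISTRY:
        raise NotImplementedError(f"no gguf writer registered for {architecture}")
    return GGUF_WRITER_REGISTRY[architecture]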
Signed-off-by: n1ck-guo <heng.guo@intel.com>
Signed-off-by: n1ck-guo <heng.guo@intel.com>
Remember to add a checker in save_quantized later, and a warning when this is combined with fp_layers.
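A rough sketch of such a checker, assuming the quantizer exposes group_size and fp_layers attributes; this is not the confirmed auto-round API.

import logging

logger = logging.getLogger(__name__)

def check_gguf_export(quantizer, format: str) -> None:
    # Assumed attributes (group_size, fp_layers); adapt to the real object.
    if format in ("gguf:q4_0", "gguf:q4_1"):
        if quantizer.group_size != 32:
            raise ValueError(f"{format} requires group_size=32, got {quantizer.group_size}")
        if getattr(quantizer, "fp_layers", None):
            logger.warning(
                "fp_layers is set; combining it with %s export is not supported yet "
                "and may be ignored.", format)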
for format in formats:
    if format not in supported_formats:
        raise ValueError(f"{format} is not supported, we only support {supported_formats}")
    if format in ["gguf:q4_0", "gguf:q4_1"]:
We could support a plain "gguf" format later if we can infer the exact type from the quantization config.
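As a sketch of that idea, the concrete gguf type could be derived from the quantization config. The mapping below assumes q4_0 for symmetric 4-bit (scale-only blocks) and q4_1 for asymmetric 4-bit (scale plus min), and the config fields (bits, sym) are illustrative.

def infer_gguf_type(bits: int, sym: bool) -> str:
    """Map a quantization config to a concrete gguf export format (sketch)."""
    if bits == 4 and sym:
        return "gguf:q4_0"   # scale-only (symmetric) 4-bit blocks
    if bits == 4 and not sym:
        return "gguf:q4_1"   # scale + min (asymmetric) 4-bit blocks
    raise ValueError(f"cannot map bits={bits}, sym={sym} to a supported gguf type")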
Typo: rename iterx_xpu to itrex_xpu.
@@ -1267,6 +1267,14 @@ def save_quantized(self, output_dir=None, format="auto_round", inplace=True, **k
        if processor is not None:
            processor.save_pretrained(output_dir)
        return
        if format in ["gguf:q4_0", "gguf:q4_1"]:
            if self.group_size != 32:
It would also be better to check bits here.
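For example, the check in save_quantized could cover bits as well as group_size. Attribute names follow the diff above; the bits attribute and its check are an assumption.

if format in ["gguf:q4_0", "gguf:q4_1"]:
    if self.group_size != 32:
        raise ValueError(f"{format} requires group_size=32, got {self.group_size}")
    if self.bits != 4:  # assumed attribute; q4_0/q4_1 are 4-bit block formats
        raise ValueError(f"{format} requires bits=4, got {self.bits}")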
Tested the q4_0 and q4_1 quantized files with llama.cpp (llama-cli); they work well.
#288