Commit 4f770d6
remove print
SunMarc committed Dec 11, 2023
1 parent 39aab85 commit 4f770d6
Showing 1 changed file with 0 additions and 1 deletion.
optimum/gptq/quantizer.py (1 change: 0 additions & 1 deletion)
@@ -749,7 +749,6 @@ def load_quantized_model(
             f"Failed to load quantization config from {save_folder} (lookup for traceback): {err}\nTip: If the save directory is saved from a transformers.PreTrainedModel, make sure that `config.json` contains a 'quantization_config' key."
         ) from err
     quantizer = GPTQQuantizer.from_dict(quantize_config_dict)
-    print(quantizer.to_dict())
     quantizer.disable_exllama = disable_exllama
     quantizer.exllama_config = exllama_config
     quantizer.exllama_version = quantizer.exllama_config["version"]
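For context, the hunk above sits inside `load_quantized_model`, which reads a saved quantization config from `save_folder`, rebuilds a `GPTQQuantizer` via `GPTQQuantizer.from_dict`, applies the exllama settings, and then loads the quantized weights; the commit only drops a leftover debug `print` of the rebuilt config. Below is a minimal usage sketch of that entry point. The checkpoint name and `save_folder` path are placeholders, and the keyword arguments reflect the typical optimum GPTQ loading pattern rather than anything shown in this diff, so treat them as assumptions.

```python
import torch
from accelerate import init_empty_weights
from transformers import AutoModelForCausalLM
from optimum.gptq import load_quantized_model

# Placeholder: directory previously produced by GPTQQuantizer.save(); adjust for your setup.
save_folder = "./quantized_model"

# Build an empty (meta-device) model skeleton so the quantized weights can be loaded into it.
with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_pretrained(
        "facebook/opt-125m",  # placeholder checkpoint
        torch_dtype=torch.float16,
    )
empty_model.tie_weights()

# load_quantized_model reads the quantization config from save_folder,
# rebuilds the GPTQQuantizer (the code path touched by this commit),
# and returns the model with its quantized weights attached.
quantized_model = load_quantized_model(
    empty_model,
    save_folder=save_folder,
    device_map="auto",
)
```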
