Bug: convert_hf_to_gguf.py - Converting HF model to GGUF giving error Missing tokenizer.model - Qwen2.5 based #9673
Labels
bug-unconfirmed
high severity
What happened?
Running into an issue in convert_hf_to_gguf.py that was reportedly fixed (issues/6419, pull/6443), but I'm still hitting it with a Qwen2.5-7B model:
Directory:
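For context: Qwen2.5 checkpoints ship an HF BPE tokenizer (tokenizer.json plus vocab/merges) rather than a SentencePiece tokenizer.model, so the converter should be taking the BPE path. A minimal pre-flight sketch (function name and file list are my own, not from the converter) to see which tokenizer artifacts are actually present in the model directory:

```python
from pathlib import Path

def check_tokenizer_files(model_dir: str) -> dict:
    # Report which tokenizer artifacts exist in an HF checkpoint directory.
    # tokenizer.model -> SentencePiece; tokenizer.json -> HF/BPE tokenizer.
    d = Path(model_dir)
    names = ("tokenizer.model", "tokenizer.json", "config.json")
    return {name: (d / name).is_file() for name in names}

if __name__ == "__main__":
    import sys
    for name, present in check_tokenizer_files(sys.argv[1]).items():
        print(f"{name}: {'found' if present else 'MISSING'}")
```

If tokenizer.json is present but the script still asks for tokenizer.model, the model's architecture is likely being routed to a SentencePiece-based conversion path.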
Name and Version
What operating system are you seeing the problem on?
Windows 11 23H2
Relevant log output
I'm here to test and provide any more information needed.