Support importing GGUF files #1187
Comments
If gguf contains the model graph information, then we can use burn-import's ONNX facility. In burn-import, we convert the ONNX graph to an IR (intermediate representation) (see this doc). So it would be possible to convert the model graph to IR and generate source code + weights. If gguf contains only weights, we can go the burn-import pytorch route, where we only load weights.
From my brief research, the GGUF format contains metadata + tensor weights. This aligns with the burn-import pytorch route, not burn-import/ONNX. This means the model needs to be constructed in Burn first, then loaded with the weights. Here is one Rust lib to parse GGUF files: https://github.com/Jimexist/gguf
GGUF spec: ggerganov/ggml#302
Parser in Rust: https://github.com/Jimexist/gguf
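For a feel of what the weights-only route involves, here is a minimal sketch of reading the fixed GGUF header, following the field layout in the GGUF spec linked above (magic bytes, version, tensor count, metadata KV count, all little-endian). This is an illustration with simplified error handling, not a full parser; the struct and function names are my own, not from any existing crate.

```rust
use std::io::{Cursor, Read};

/// The fixed-size fields at the start of a GGUF file, per the spec.
#[derive(Debug, PartialEq)]
struct GgufHeader {
    version: u32,
    tensor_count: u64,
    metadata_kv_count: u64,
}

fn read_u32<R: Read>(r: &mut R) -> std::io::Result<u32> {
    let mut b = [0u8; 4];
    r.read_exact(&mut b)?;
    Ok(u32::from_le_bytes(b))
}

fn read_u64<R: Read>(r: &mut R) -> std::io::Result<u64> {
    let mut b = [0u8; 8];
    r.read_exact(&mut b)?;
    Ok(u64::from_le_bytes(b))
}

fn read_header<R: Read>(r: &mut R) -> std::io::Result<GgufHeader> {
    // The file must start with the magic bytes "GGUF".
    let mut magic = [0u8; 4];
    r.read_exact(&mut magic)?;
    if &magic != b"GGUF" {
        return Err(std::io::Error::new(
            std::io::ErrorKind::InvalidData,
            "not a GGUF file",
        ));
    }
    Ok(GgufHeader {
        version: read_u32(r)?,
        tensor_count: read_u64(r)?,
        metadata_kv_count: read_u64(r)?,
    })
}

fn main() {
    // Synthetic header bytes: version 3, 2 tensors, 5 metadata entries.
    let mut bytes = Vec::new();
    bytes.extend_from_slice(b"GGUF");
    bytes.extend_from_slice(&3u32.to_le_bytes());
    bytes.extend_from_slice(&2u64.to_le_bytes());
    bytes.extend_from_slice(&5u64.to_le_bytes());

    let header = read_header(&mut Cursor::new(bytes)).unwrap();
    println!("{:?}", header);
}
```

After the header come the metadata key/value pairs and tensor info blocks, which is where the real parsing work (and the typed-value handling discussed below in this thread) begins.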
Hi, it has been about a year since this was last updated. Since then, pre-existing models on HF often come in GGUF format when quantised, or Safetensors when non-quantised. I think it would be useful for people new to the space to understand how Burn can be leveraged with these formats, as they seem to be the most common formats available to start from. Specifically, importing quantised GGUF models, as I couldn't see much in the docs. Candle is okay for this, but its support is a little spotty for quantised models, which are the ones more accessible to people with fewer resources.
I saw in #1323 that some pieces were added for reconstructing config files, but I am wondering about simply ingesting a gguf model and using it with Burn directly, similar to the import options for onnx or pytorch, without people needing to reverse engineer what gguf is doing under the hood with little guidance. GGUF's single-file format seems like an ideal target for Burn's use case to me, and the format is much more universally accessible, similar to ONNX on paper.
I am happy to contribute docs, I just need a bit of direction to start testing with the current capabilities, or an indication that it is even possible.
Edit: ref to the Candle issue I am seeing with Mistral-Nemo quantizations:
I'll be happy to assist if you decide to submit a PR. We can leverage Candle's reader, similar to the PyTorch pt reader, and use the existing burn-import infrastructure. It should be somewhat easier now that PyTorch pt import works.
I actually made a start last night. Example names from the gguf spec that could be mapped: tok_embd. I am very new to Rust, so it's taking me a bit of time to figure out how to transform the format that Content creates: rather than treating things directly as u32 or String, everything is stored as 'U32('VALUE')' first, and transforming those and then mapping them to the right places to create Burn modules etc. is taking a bit of time and effort.
When I say "stored", the K, V structure it uses wraps each value in a type tag, rather than exposing the plain value directly:
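To illustrate the wrapping described above: a GGUF metadata parser typically hands back values inside a tagged enum, so each one must be unwrapped and matched to the right Rust type before it can be used to configure a Burn module. The enum and helper names below are a hypothetical sketch for illustration, not any specific crate's API; the metadata keys are real ones from the GGUF spec.

```rust
use std::collections::HashMap;

/// A hypothetical tagged metadata value, sketching the kind of enum
/// a GGUF parser returns (only two variants shown; the spec has more).
#[derive(Debug, Clone, PartialEq)]
enum MetaValue {
    U32(u32),
    String(String),
}

impl MetaValue {
    /// Unwrap helper: get a plain u32 back out, if that is what's inside.
    fn as_u32(&self) -> Option<u32> {
        match self {
            MetaValue::U32(v) => Some(*v),
            _ => None,
        }
    }

    /// Unwrap helper: borrow the inner string, if that is what's inside.
    fn as_str(&self) -> Option<&str> {
        match self {
            MetaValue::String(s) => Some(s.as_str()),
            _ => None,
        }
    }
}

fn main() {
    // Metadata arrives as key -> wrapped value, e.g. U32(32) rather than 32.
    let mut metadata: HashMap<&str, MetaValue> = HashMap::new();
    metadata.insert("llama.block_count", MetaValue::U32(32));
    metadata.insert("general.architecture", MetaValue::String("llama".into()));

    // Unwrapping recovers the plain values a model config would need.
    let blocks = metadata["llama.block_count"].as_u32().unwrap();
    let arch = metadata["general.architecture"].as_str().unwrap();
    println!("arch = {arch}, blocks = {blocks}");
}
```

Matching on the enum (or adding `as_*` helpers like these) is the usual way to bridge from the tagged representation to the concrete types a module constructor expects.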
I actually haven't used Burn at all until now. I only learnt the details of the transformer architecture after posting my original comment two days ago, and I started with Rust like 3-4 weeks ago, so I will try my best, but I apologise in advance if I can't see it through. It's partly my motivation for commenting: as someone new to the whole space, all I really see is gguf, and I would love to make it more accessible to those of us who want to get started. From what I can tell, Burn is well placed for doing that. I also love that you have built-in WGPU support. My ambition for learning Rust comes from years ago, when I did a lot of embedded work; I have a load of RPi Picos and various other devices lying around, so I love that you have the demo for it, and I think your approach fits my goals well.
Most of my career until now has been more devops oriented, and even then more on the infrastructure and networking side than development, so I am out of my depth but trying. I can figure out most things on my own, but any general pointers are always welcome.
I apologize if this seems too far-fetched, but it seemed in line with how ONNX generation works.