[Docs] add a guide for adapter merging. #7110
Conversation
There are more examples of different merging methods available in [this post](https://huggingface.co/blog/peft_merging). We encourage you to give them a try 🤗
## AnimateDiff
@DN6 would you like to give this a try? I think that would be very cool!
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Oh nice, I was wondering if it'd be possible to have `add_weighted_adapter` work with Diffusers! 🚀
My only concern is that merging/combining LoRA adapters is now spread out across the docs (here, here, and here). There are several options (I put a ⭐ next to the one I'd recommend), so let me know what you think:
- add the `add_weighted_adapter` method in the Load adapters section
- add the `add_weighted_adapter` method in the Inference with PEFT tutorial
- create a new Merge LoRA guide in the Techniques section with `set_adapter` and `add_weighted_adapter`, and then clean up the Load adapters doc to only show how to load ⭐ (see the sketch below)
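For context, the built-in merging path the guide would start from looks roughly like the sketch below; the base checkpoint, LoRA repos, and adapter names are placeholders, not real recommendations.

```python
# Rough sketch only: combining two LoRAs with Diffusers' built-in `set_adapters`.
# The base checkpoint and LoRA repo ids/adapter names are placeholders.
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load two LoRAs and give each a name so they can be referenced later.
pipeline.load_lora_weights("user/sdxl-lora-style", adapter_name="style")
pipeline.load_lora_weights("user/sdxl-lora-subject", adapter_name="subject")

# Activate both adapters with per-adapter weights and run inference as usual.
pipeline.set_adapters(["style", "subject"], adapter_weights=[0.7, 0.8])
image = pipeline("a photo combining both styles", num_inference_steps=30).images[0]
```

The nice part of `set_adapters` is that the LoRAs stay separate, so the per-adapter weights can be changed at any time without re-merging.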
So, I don't think we should add `add_weighted_adapter` to the existing docs. Maybe we could give the guide a different title? "Advanced LoRA Merging"? Plus, we plan on showing the use of `add_weighted_adapter` with AnimateDiff. Then we can link this guide in the other docs you mentioned.
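Roughly, the `add_weighted_adapter` flow we'd document would look something like the sketch below; the repos and adapter names are placeholders, and it assumes the UNet LoRAs are saved in PEFT format so the UNet can be wrapped in a `PeftModel`.

```python
# Rough sketch only: merging two LoRAs into a single new adapter with PEFT's
# `add_weighted_adapter`. Repo ids and adapter names are placeholders, and the
# LoRA checkpoints are assumed to be saved in PEFT format for the UNet.
import torch
from diffusers import DiffusionPipeline
from peft import PeftModel

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Wrap the pipeline's UNet as a PeftModel with the first LoRA, then load the second.
# The LoRA layers are injected into pipeline.unet in place.
unet = PeftModel.from_pretrained(pipeline.unet, "user/unet-lora-style", adapter_name="style")
unet.load_adapter("user/unet-lora-subject", adapter_name="subject")

# Merge the two adapters into a new "merged" adapter and make it the active one.
unet.add_weighted_adapter(
    adapters=["style", "subject"],
    weights=[1.0, 1.0],
    adapter_name="merged",
    combination_type="dare_linear",  # other methods from the blog post: "linear", "ties", "svd", ...
    density=0.7,                     # needed by the DARE/TIES-style methods
)
unet.set_adapter("merged")

# pipeline.unet now runs with the merged adapter active.
image = pipeline("a photo in the merged style", num_inference_steps=30).images[0]
```

Unlike `set_adapters`, this produces a single new adapter, so the per-adapter weights are fixed at merge time.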
@stevhliu gentle ping on the above.
Sorry for the delay! Since it's experimental, I especially don't think it belongs in the Tutorials because I don't think we want to provide an unstable (meaning things can still change) foundation for new users learning how to merge LoRAs. I also think it's not great to introduce two ways of merging LoRAs in the Tutorials because it places extra cognitive work on the learner to choose/distinguish which method to use. We should focus on recommending one general approach in the Tutorial, and at the end link to the other option (whether it's `set_adapters` or `add_weighted_adapter`). Let me know what you think! 🙂
Let’s go with option 3 then. Would you be able to repurpose this PR for the same?
Let me start a new PR, it'll be easier for me to refactor everything :)
Sure. Feel free to re-purpose this one. Once that's opened, I will close this one.
Closing in light of #7213.
What does this PR do?
Content reused from https://huggingface.co/blog/peft_merging. Adapter merging is very popular in the diffusion community. Even though it's experimental, I think having it inside our docs will be beneficial.