Docs: fix GaLore optimizer code example (huggingface#32249)
Docs: fix GaLore optimizer example

Fix the incorrect usage of the GaLore optimizer in a Transformers `Trainer` code example. The GaLore optimizer uses low-rank gradient updates to reduce memory usage. GaLore is quite popular and is implemented by the authors in [https://github.com/jiaweizzhao/GaLore](https://github.com/jiaweizzhao/GaLore). A few months ago, GaLore was added to the Hugging Face Transformers library in huggingface#29588.

The documentation of the `Trainer` module includes a few code examples showing how to use GaLore. However, the `optim_target_modules` argument passed to `TrainingArguments` is incorrect, as discussed in huggingface#29588 (comment). This pull request fixes that issue.
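For reference, a minimal sketch of how GaLore is enabled through `TrainingArguments` (the `output_dir` value and the module-name patterns below are illustrative assumptions; the correct patterns depend on the layer names of the model being trained, and `galore_torch` must be installed for training to run):

```python
from transformers import TrainingArguments

# Sketch: select a GaLore variant of AdamW via `optim`, and pass
# `optim_target_modules` -- the argument this docs fix concerns -- to
# choose which linear layers receive low-rank gradient updates.
args = TrainingArguments(
    output_dir="./galore-test",          # illustrative path
    optim="galore_adamw",                # GaLore-wrapped AdamW
    # Patterns matched against module names; these examples assume the
    # model names its attention/MLP blocks with "attn" and "mlp".
    optim_target_modules=[r".*attn.*", r".*mlp.*"],
)
```

Modules not matched by `optim_target_modules` fall back to the regular optimizer, so the patterns determine how much memory GaLore actually saves.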