Commit
Update gemma2.md
Update the transformers version to fix the Gemma 2 support issue.

Context: huggingface/transformers#31661
xianbaoqian authored Jun 28, 2024
1 parent 6501ca3 commit cf5f75d
Showing 1 changed file (gemma2.md) with 1 addition and 1 deletion.
@@ -160,7 +160,7 @@ You can chat with the Gemma 27B Instruct model on Hugging Chat! Check out the li
With Transformers [release 4.42](https://github.com/huggingface/transformers/releases/tag/v4.42.0), you can use Gemma and leverage all the tools within the Hugging Face ecosystem. To use Gemma models with transformers, make sure to use the latest `transformers` release:

```bash
-pip install "transformers==4.42.0" --upgrade
+pip install "transformers==4.42.1" --upgrade
```

The following snippet shows how to use `gemma-2-9b-it` with transformers. It requires about 18 GB of RAM, which fits many consumer GPUs. The same snippet works for `gemma-2-27b-it`, which, at 56GB of RAM, makes it a very interesting model for production use cases. Memory consumption can be further reduced by loading in 8-bit or 4-bit mode.
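The snippet referenced in that paragraph is not part of this hunk. A minimal sketch of such usage, assuming `transformers>=4.42.1`, access to the gated `google/gemma-2-9b-it` checkpoint, and the high-level `pipeline` chat interface (the prompt and generation parameters here are illustrative, not taken from the diff):

```python
def build_messages(user_prompt):
    # Gemma 2 instruct checkpoints expect the standard chat-message format:
    # a list of {"role": ..., "content": ...} dicts.
    return [{"role": "user", "content": user_prompt}]

def main():
    # Heavy dependencies are imported lazily: running this requires a GPU
    # with roughly 18 GB of memory and authenticated access to the weights.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="google/gemma-2-9b-it",
        model_kwargs={"torch_dtype": torch.bfloat16},
        device_map="auto",
    )
    messages = build_messages("Who are you? Answer in one sentence.")
    outputs = pipe(messages, max_new_tokens=128)
    # With chat-style input, generated_text holds the full message list;
    # the last entry is the assistant's reply.
    print(outputs[0]["generated_text"][-1]["content"])

if __name__ == "__main__":
    main()
```

Swapping the model id for `google/gemma-2-27b-it` should work unchanged, at the cost of the larger memory footprint mentioned above.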
