
[Flash Attention 2] Add flash attention 2 for GPT-J #28295

Merged: 11 commits merged on Mar 13, 2024
Update src/transformers/models/gptj/modeling_gptj.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
younesbelkada and ArthurZucker authored Jan 30, 2024

commit def626ef1699b58e78507161fe32433290454e59
2 changes: 1 addition & 1 deletion src/transformers/models/gptj/modeling_gptj.py
@@ -375,7 +375,7 @@ def forward(
         else:
             present = None

-        # The Falsh attention requires the input to have the shape
+        # The Flash attention requires the input to have the shape
         # batch_size x seq_length x head_dim x hidden_dim
         # therefore we need to keep the original shape for query and key, and reshape value
         # to have the correct shape.
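
The comment being fixed describes the shape handling in the Flash Attention 2 path: the attention kernel expects tensors laid out as (batch_size, seq_length, num_heads, head_dim), so value has to be brought into that layout while query and key are kept as-is. The following is a minimal sketch of that transposition, not the PR's actual code; the tensor names and the assumption that value starts in GPT-J's (batch_size, num_heads, seq_length, head_dim) layout are illustrative.

import torch

# Illustrative sizes, not taken from the model config
batch_size, num_heads, seq_length, head_dim = 2, 16, 128, 256

# Assume value comes out of the attention module as (batch, num_heads, seq_len, head_dim)
value = torch.randn(batch_size, num_heads, seq_length, head_dim)

# Flash Attention expects (batch, seq_len, num_heads, head_dim),
# so swap the head and sequence dimensions
value = value.transpose(1, 2)

assert value.shape == (batch_size, seq_length, num_heads, head_dim)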