Fix a small typo of a variable name (huggingface#1063)
Fix a small typo

Fix a typo in `models/attention.py`: `weight` -> `width`.
omihub777 authored Nov 2, 2022
1 parent c735526 commit 3be47ed
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions models/attention.py
```diff
@@ -165,15 +165,15 @@ def _set_use_memory_efficient_attention_xformers(self, use_memory_efficient_attention_xformers):

     def forward(self, hidden_states, context=None):
         # note: if no context is given, cross-attention defaults to self-attention
-        batch, channel, height, weight = hidden_states.shape
+        batch, channel, height, width = hidden_states.shape
         residual = hidden_states
         hidden_states = self.norm(hidden_states)
         hidden_states = self.proj_in(hidden_states)
         inner_dim = hidden_states.shape[1]
-        hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * weight, inner_dim)
+        hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * width, inner_dim)
         for block in self.transformer_blocks:
             hidden_states = block(hidden_states, context=context)
-        hidden_states = hidden_states.reshape(batch, height, weight, inner_dim).permute(0, 3, 1, 2)
+        hidden_states = hidden_states.reshape(batch, height, width, inner_dim).permute(0, 3, 1, 2)
         hidden_states = self.proj_out(hidden_states)
         return hidden_states + residual
```

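For context, the renamed variable is part of the NCHW-to-token-sequence round trip around the transformer blocks. Below is a minimal standalone sketch of that reshape in plain PyTorch, with illustrative tensor sizes; it is not the actual diffusers module:

```python
import torch

# Toy feature map in (batch, channel, height, width) layout; sizes are illustrative.
batch, channel, height, width = 2, 32, 8, 8
hidden_states = torch.randn(batch, channel, height, width)

# NCHW -> (batch, height * width, channel): flatten the spatial grid into tokens.
inner_dim = hidden_states.shape[1]
tokens = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * width, inner_dim)

# ... the transformer blocks would process `tokens` here ...

# (batch, height * width, channel) -> NCHW: restore the spatial layout.
restored = tokens.reshape(batch, height, width, inner_dim).permute(0, 3, 1, 2)
assert torch.equal(restored, hidden_states)
```

Note that the rename is purely cosmetic: `weight` was only a local name for the fourth dimension of the shape tuple, so the code behaved identically before the fix; the old name was just misleading to read.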
