I was trying the C++ versions of spec_infer and incr_decoding in FlexFlow/inference, and found that with the model Llama-2-70b-hf, an error occurred as shown below.
After checking issue flexflow/flexflow-train#1154 and the source code of llama.cc and file_loader.cc, I realized the cause: in this model the number of attention heads and the number of key/value heads are not identical, but the fix from flexflow/flexflow-train#1154 was only applied to the Python version.
I added a num_key_value_heads parameter to FlexFlow/inference/models/llama.h and passed it to FileDataLoader in FlexFlow/inference/models/llama.cc, and that resolved the error.