[ARM CPU] Add rotary embedding fp16 kernel #23013
Conversation
```cpp
if (rotary_emb_dim < head_size) {
  std::memcpy(output_data + rotary_emb_dim,
              input_data + rotary_emb_dim,
              (head_size - rotary_emb_dim) * sizeof(T));
}
```

Check warning (Code scanning / PREfast):

Arithmetic overflow: Using operator '-' on a 4 byte value and then casting the result to a 8 byte value. Cast the value to the wider type before calling operator '-' to avoid overflow (io.2).
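A minimal sketch of the kind of change this warning asks for, assuming `head_size` and `rotary_emb_dim` are 4-byte `int`s as in the snippet above. The `CopyRotaryTail` wrapper is hypothetical, added only to make the example self-contained; the point is widening both operands to `size_t` before the subtraction so the arithmetic itself happens in the 8-byte type:

```cpp
#include <cstring>
#include <cstddef>

// Hypothetical helper illustrating the PREfast io.2 fix: cast each
// 4-byte operand to size_t *before* subtracting, rather than widening
// the already-computed 4-byte difference.
template <typename T>
void CopyRotaryTail(T* output_data, const T* input_data,
                    int rotary_emb_dim, int head_size) {
  if (rotary_emb_dim < head_size) {
    std::memcpy(output_data + rotary_emb_dim,
                input_data + rotary_emb_dim,
                (static_cast<size_t>(head_size) - static_cast<size_t>(rotary_emb_dim)) * sizeof(T));
  }
}
```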
### Description

Add an fp16 kernel to rotary embedding to boost performance (see the sketch after this section).

### Motivation and Context

Part of the performance optimization work for group query attention.
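For context, rotary embedding rotates pairs of elements within the first `rotary_emb_dim` entries of each head by position-dependent angles and passes the remaining `head_size - rotary_emb_dim` elements through unchanged, which is what the `memcpy` in the reviewed snippet does. A scalar reference sketch for one head, in the non-interleaved ("half-rotated") layout; names and signature are illustrative rather than the kernel's actual API, and the PR's fp16 kernel would implement the same math with fp16 intrinsics:

```cpp
#include <cstring>
#include <cstddef>

// Reference rotary embedding for a single head, scalar float,
// non-interleaved layout: element i is paired with element i + dim/2.
void RotaryEmbedOneHead(const float* input, float* output,
                        const float* cos_cache, const float* sin_cache,
                        int rotary_emb_dim, int head_size) {
  const int half = rotary_emb_dim / 2;
  for (int i = 0; i < half; ++i) {
    const float x0 = input[i];
    const float x1 = input[i + half];
    output[i]        = x0 * cos_cache[i] - x1 * sin_cache[i];
    output[i + half] = x1 * cos_cache[i] + x0 * sin_cache[i];
  }
  // Pass through the un-rotated tail, mirroring the memcpy in the
  // snippet under review (operands widened before subtraction).
  if (rotary_emb_dim < head_size) {
    std::memcpy(output + rotary_emb_dim, input + rotary_emb_dim,
                (static_cast<size_t>(head_size) - static_cast<size_t>(rotary_emb_dim)) * sizeof(float));
  }
}
```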