bf16 & tf32, can they be used together? #252
Comments
I have the same question.
I was searching the internet for an answer and found this thread, which might help: huggingface/transformers#14608
Any new update on this?
By setting bf16, the model weights are saved in bf16, and gradients are computed in half precision but converted to full precision for the update; optimizer states are still kept in full 32-bit precision. By additionally setting tf32, you speed up those fp32 calculations at the expense of some precision, but since the weights are ultimately saved in bf16 anyway, it probably has no effect on accuracy.
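For reference, here is a minimal sketch of how the two flags combine, assuming the Hugging Face `TrainingArguments` API (the `bf16` and `tf32` flags exist there; `output_dir="out"` is just an illustrative path). The plain PyTorch backend switches that `tf32=True` flips are shown as well:

```python
import torch
from transformers import TrainingArguments

# TF32 only changes how fp32 matmuls/convolutions execute on Ampere+ GPUs;
# these are the PyTorch switches that TrainingArguments(tf32=True) enables.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

args = TrainingArguments(
    output_dir="out",  # hypothetical path, for illustration only
    bf16=True,         # run forward/backward in bfloat16 via autocast
    tf32=True,         # ops that stay in fp32 (e.g. optimizer math) use TF32 tensor cores
)
```

Since bf16 autocast already covers most matmuls, tf32 mainly affects the operations that remain in fp32, which is why the two flags are complementary rather than conflicting.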
In the hyperparameters, I saw that bf16 and tf32 are both set to true. However, I don't think they can be used together?