Running run_translation.py with mt5 model, but loss is always 0.0 #22467
Comments
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
It seems this issue still persists; maybe we could consider re-opening it. I also have a similar issue with the loss being 0.0. My system info is as follows:
Inviting you to read #10956, which has a very detailed explanation and a potential solution for you 😉
Hi @ArthurZucker, as quoted from #10956 (comment), it seems the experimental change has not been merged, and there are not many related performance experiments either. However, from PR #20760 I noticed that the 8-bit workaround first converts part of the modules to fp16 while leaving the others unchanged. I wonder whether this might also be a feasible solution for fp16 training? A sketch of the idea follows.
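For reference, a minimal sketch of that partial-cast idea, assuming a transformers version recent enough to ship the keep-in-fp32 mechanism for T5-family models (where the feed-forward output projection `wo` is declared in `_keep_in_fp32_modules` and left in fp32 when loading in fp16). The model name is taken from the report below; everything else is illustrative, not the merged implementation:

```python
import torch
from transformers import AutoModelForSeq2SeqLM

# Load in fp16; T5/mT5-family models declare _keep_in_fp32_modules,
# so numerically sensitive weights (the feed-forward "wo" projection)
# are kept in fp32 while the rest of the model is cast to fp16.
# This assumes a transformers version that includes that mechanism;
# older versions cast everything to fp16, which overflows.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "bigscience/mt0-base", torch_dtype=torch.float16
)

# Verify the partial cast: most parameters should be fp16, with the
# "wo" projections remaining in fp32.
dtypes = {}
for name, param in model.named_parameters():
    dtypes.setdefault(str(param.dtype), []).append(name)
for dtype, names in dtypes.items():
    print(dtype, len(names), "params, e.g.", names[0])
```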
System Info
transformers version 4.28.0.dev
Who can help?
No response
Information

Tasks

- An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
mt0-base is cloned from the Hugging Face Hub, and the loss is always 0.0:
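The command itself was not preserved in the report. As a stand-in, here is a hypothetical `run_translation.py` invocation matching the described setup; every path and hyperparameter below is an assumption, but the flags are standard for the script, and `--fp16` is the one most relevant to the symptom (see #10956):

```bash
# Hypothetical reconstruction -- the original command was not preserved.
# All paths and hyperparameters are placeholders; --fp16 is the flag
# most likely tied to the constant 0.0 loss with mt5/mt0 checkpoints.
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path ./mt0-base \
    --do_train \
    --source_lang en \
    --target_lang de \
    --train_file ./data/train.json \
    --output_dir ./mt0-finetuned \
    --per_device_train_batch_size 8 \
    --num_train_epochs 3 \
    --fp16
```

If fp16 is indeed the culprit, dropping `--fp16` or switching to `--bf16` on hardware that supports it is the usual workaround discussed in #10956.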
But if I train an mt5 model from scratch on my MT data, the loss looks fine. Did I miss something?
Any advice is appreciated! Thanks in advance!
Expected behavior
The loss is larger than 0.0 and the model parameters update.