Add optimal model size and stopping time feature #4847
Ah yes - I remembered having a doubt about that. I checked the library we used to estimate those again, and there might have been a unit conversion error; I'll fix that ASAP tomorrow! Edit: it's fixed, thank you @lopuhin !
This is already looking very promising! Good stuff. When clicking the "initialize in transformers" button, the code block should probably not center-align the code, but left-align it instead. That makes the code a lot more readable.
Yeah, that was a bit of an aesthetic choice to not break the flow of the web page; it definitely wouldn't be like this in a tool rather than a demo!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
unstale, what's the status on this @TevenLeScao? Should we close?
@julien-c we had originally decided not to go forward with this, but I started working on it amongst the discussions about the scale of GPT-3. I didn't get to finish it before leaving for holidays two weeks ago, but the PR will be ready this week.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi! The "initialize in Huggingface" button is broken -- is there something I can do locally to solve it? I just wanted the lines of training code for a given wall-clock time.
Hey! The page seems broken, not sure why; I'll relaunch it.
@TevenLeScao Thanks for the immediate reply! The button to launch in Huggingface Transformers still isn't working, but I'm happy to help debug / send any reports if it helps! Alternatively, do you think you could help me understand what the button does? I'm just hoping to generate the configuration string. Thanks for your time!
I've relaunched, it should work now (just gotta figure out why the page doesn't center on my desktop).
@TevenLeScao Yes, it works -- thanks! Out of curiosity, why did you use Transformer-XL as opposed to something like GPT-2? Does Transformer-XL reach a lower validation loss on Wikitext-103 than GPT-2 when training for the same number of steps?
Yeah, it was the state of the art at the time!
🚀 Feature request
The calculator blog post presented an automated way to find scaling laws with model size and compute budget on language modeling tasks. Adding it to the library would help save on training costs by picking an optimal model size and training time.
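As a rough illustration of the kind of estimate such a tool could automate, here is a minimal sketch that fits a power law of loss against compute and derives a stopping point from it. The function names (`fit_power_law`, `optimal_stopping_compute`) and the marginal-gain stopping criterion are assumptions for illustration, not the calculator's actual method:

```python
import numpy as np

def fit_power_law(compute, loss):
    """Fit loss ~ a * compute**(-b) by least squares in log-log space.

    Illustrative helper: the blog post's actual fitting procedure may differ.
    """
    # In log-log space the power law is linear: log(loss) = log(a) - b * log(compute)
    slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
    return np.exp(intercept), -slope

def optimal_stopping_compute(a, b, marginal_gain):
    """Compute budget beyond which one extra unit of compute improves loss
    by less than `marginal_gain` (an assumed stopping criterion).

    |dL/dC| = a * b * C**(-b - 1); solving |dL/dC| = marginal_gain for C gives:
    """
    return (a * b / marginal_gain) ** (1.0 / (b + 1.0))

# Illustrative usage on synthetic data following loss = 5 * compute**(-0.3)
compute = np.logspace(1, 6, 50)
loss = 5.0 * compute ** (-0.3)
a, b = fit_power_law(compute, loss)
c_star = optimal_stopping_compute(a, b, marginal_gain=1e-4)
```

On the synthetic data the fit recovers a = 5 and b = 0.3, and `c_star` is the budget where the fitted loss curve's slope magnitude falls to the chosen threshold.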
Motivation
Estimating how big of a model to use and how long to train for is more of an art than a science. An automated tool to perform that task would allow researchers and practitioners to concentrate on the high-level parts of their projects as opposed to parameter tweaking.
Your contribution
I can submit a PR with my existing work, probably integrating it within Trainer and/or knockknock.