Support for SQuAD 2.0 #157
Hi! Just curious, but what are the specs of the machine used to fine-tune the model on SQuAD v1.1 and SQuAD v2.0? How long did 1.1 take you? And how long do you think 2.0 will take?
Hi @thomasvn, the fine-tuning of the Reader (BERT for QA) was run on an AWS EC2 p3.2xlarge machine (GPU Tesla V100, 16 GB). It took about 2 hours to complete (2 epochs on the SQuAD 1.1 train set were enough to achieve SOTA results on SQuAD 1.1 dev). I think 2 epochs on SQuAD 2.0 would take a little more than 3 hours.
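For rough scale, the SQuAD 2.0 train set is about 1.5× the size of SQuAD 1.1's (roughly 130k questions vs. roughly 88k), which is consistent with the ~3 hour estimate. A back-of-the-envelope check (the dataset sizes are approximate and not from this thread):

```python
# Back-of-the-envelope check of the SQuAD 2.0 fine-tuning time estimate.
# Train-set question counts are approximate.
SQUAD_V1_TRAIN = 88_000   # ~87.6k questions in SQuAD 1.1 train
SQUAD_V2_TRAIN = 130_000  # ~130.3k questions in SQuAD 2.0 train

hours_v1 = 2.0  # reported time for 2 epochs on SQuAD 1.1 (p3.2xlarge, V100 16GB)
hours_v2 = hours_v1 * SQUAD_V2_TRAIN / SQUAD_V1_TRAIN

print(f"Estimated time for 2 epochs on SQuAD 2.0: ~{hours_v2:.1f} hours")
# -> roughly 3 hours, matching the estimate above
```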
Awesome. Thanks for putting it into perspective!
There is already a pre-trained model here.
Thanks @alex-movila, we will test this pre-trained model with cdQA, and if it runs well we will probably integrate it into the package.
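Since cdQA's Reader is a BERT QA model, one quick way to sanity-check a SQuAD 2.0 checkpoint independently of cdQA is the Hugging Face `transformers` question-answering pipeline; the key addition in SQuAD 2.0 is unanswerable questions, so a useful check is whether the model declines to answer when the context does not contain the answer. A minimal sketch, not cdQA's own API, with an illustrative model identifier (substitute the checkpoint linked above):

```python
# Minimal sketch: sanity-check a SQuAD 2.0 fine-tuned BERT reader with the
# Hugging Face `transformers` question-answering pipeline (not cdQA's API).
from transformers import pipeline

# Illustrative checkpoint name; replace with the pre-trained SQuAD 2.0 model
# referenced in the comment above.
qa = pipeline("question-answering", model="deepset/bert-base-cased-squad2")

context = (
    "cdQA is an end-to-end closed-domain question answering system that "
    "combines a document retriever with a BERT-based reader."
)

# Answerable question: the model should extract a span from the context.
print(qa(question="What does cdQA combine?", context=context))

# Unanswerable question: with handle_impossible_answer=True the pipeline may
# return an empty answer, which is the behaviour SQuAD 2.0 adds over 1.1.
print(qa(question="Who founded cdQA?", context=context,
         handle_impossible_answer=True))
```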