ProductionRuleField not compatible with multiple GPUs #2057
Question

I am training the MML parser on the WikiTables dataset. It runs out of CUDA memory if I use a batch size larger than 8 on a single Tesla P100, and even a batch size of 8 sometimes causes an out-of-memory error. How can I train on multiple GPUs with AllenNLP? I don't think the `cuda_device` field supports list input in the config file for now.
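For context, the Trainer of that era did accept a list of device ids here, which is exactly what exercises the multi-GPU path that fails in this issue. A minimal sketch of the relevant trainer section, with the surrounding keys assumed from configs of that version:

```jsonnet
{
  // dataset_reader, model, and iterator sections omitted.
  "trainer": {
    "num_epochs": 20,
    "optimizer": {"type": "adam"},
    // A single int selects one device; a list of ids requests
    // DataParallel across those GPUs (the path that fails here).
    "cuda_device": [0, 1]
  }
}
```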
Comments
Yes, I tested with
Can you give more of the stack trace?
@matt-gardner The entire error trace is:
Is the format
Ok, thanks, this is a data type that I thought we had fixed in #1944. Looks like that commit was the first one that wasn't included in the 0.7.1 release. Can you try again from master and see if it fixes the issue?
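For anyone reproducing this, one way to pick up unreleased commits from master in a pip-based environment (the URL is AllenNLP's public GitHub repository) is:

```bash
pip install git+https://github.com/allenai/allennlp.git
```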
@matt-gardner I rebuilt allennlp from source, pulling from the master branch. However, I still got the same error message when using multiple GPUs.
Ok, we'll need to look into this, but it probably won't be soon, unfortunately. With holidays coming up and then the NAACL deadline, we don't really have time to look into this ourselves right now.
OK. Good luck with your NAACL submission!
See #2199; not a fix yet, but I've at least diagnosed the problem.
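The actual diagnosis is in #2199, but as general background on why a custom field type can break multi-GPU training: PyTorch's DataParallel-style scatter splits tensors along dim 0 and moves the pieces to each GPU, while arbitrary Python objects are only replicated, so any tensors buried inside them are neither split nor moved. A minimal sketch of that behavior (RuleLike is a hypothetical stand-in, not AllenNLP's actual class; requires at least two visible GPUs to run):

```python
import torch
from torch.nn.parallel import scatter


class RuleLike:
    """Wraps a tensor in an opaque object, as a custom field type might."""

    def __init__(self) -> None:
        self.tensor = torch.zeros(5, dtype=torch.long)


batch = {
    "tokens": torch.zeros(8, 10, dtype=torch.long),  # an ordinary tensor
    "rules": [RuleLike() for _ in range(8)],         # opaque objects
}
chunks = scatter(batch, target_gpus=[0, 1])

# The tensor is split 4/4 and moved: torch.Size([4, 10]) on cuda:0.
print(chunks[0]["tokens"].shape, chunks[0]["tokens"].device)
# The objects are merely replicated: each replica sees all 8, still on CPU.
print(len(chunks[0]["rules"]), chunks[0]["rules"][0].tensor.device)
```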
This was fixed by #2200. |