RuntimeError #7
Comments
Thank you for your interest. Which PyTorch version did you use? Did you use CUDA?
Thanks a lot for your kind reply.
I mean, did you run it on a GPU?
The default setting is to use the GPU if your system has one.
Maybe I need to disable the GPU option, because the script takes a long time to run; that is why I moved it from macOS to a remote server.
How do I disable the GPU option?
Hmm, I haven't tried running on CPU; I recommend using the GPU. You can disable it by adding --gpu 0.
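For context, here is a minimal sketch of how a --gpu flag is commonly wired to device selection in a PyTorch training script; the argument names, defaults, and structure below are illustrative assumptions, not the repository's actual code.

```python
# Hypothetical sketch: map a --gpu flag to a torch.device.
# Names and defaults are assumptions for illustration only.
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--gpu", type=int, default=1,
                    help="set to 0 to force CPU even if CUDA is available")
args = parser.parse_args()

use_cuda = bool(args.gpu) and torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

# The model and batches would then be moved to the selected device:
# model = model.to(device)
# batch = batch.to(device)
```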
No luck.
Let me test on my computer without a GPU.
I haven't encountered this problem; maybe you can try a smaller batch size by setting --batch_size 10?
Lucky for me: it's running now with --batch_size 10.
Great. I think it was probably running out of memory.
I found that the previous error is a CPU out-of-memory (OOM) message: pytorch/pytorch#20618
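To see why a smaller batch helps, note that the failing line in utils.py views the attention tensor as (batch_size * max_enc_len, -1), so that allocation (and every other per-batch buffer) grows linearly with batch size. A back-of-the-envelope estimate, with all sizes below being illustrative assumptions rather than the project's real defaults:

```python
# Rough estimate of one per-batch buffer at the failing line; the actual
# peak memory is much higher once autograd and other buffers are included.
batch_size = 100        # assumed large batch
max_enc_len = 400       # assumed encoder length
hidden_size = 512       # assumed hidden dimension
bytes_per_float = 4     # float32

mb = batch_size * max_enc_len * hidden_size * bytes_per_float / 1024 ** 2
print(f"~{mb:.0f} MB at batch_size={batch_size}")             # ~78 MB
print(f"~{mb / 10:.0f} MB at batch_size={batch_size // 10}")  # ~8 MB
# Dropping the batch size by 10x shrinks every such buffer by 10x,
# which is why --batch_size 10 avoided the CPU OOM.
```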
System

```
python train.py --data_path data/pubmed_abstract --model_dp abstract_model/
Epoch 0/99
Traceback (most recent call last):
  File "train.py", line 236, in <module>
    batch_o_t, teacher_forcing_ratio=1)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/sync/ubuntu/PaperRobot-master/New paper writing/memory_generator/seq2seq.py", line 18, in forward
    stopwords, sflag)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/sync/ubuntu/PaperRobot-master/New paper writing/memory_generator/Decoder.py", line 134, in forward
    max_source_oov, term_output, term_id, term_mask)
  File "/mnt/sync/ubuntu/PaperRobot-master/New paper writing/memory_generator/Decoder.py", line 68, in decode_step
    term_context, term_attn = self.memory(_h.unsqueeze(0), term_output, term_mask, cov_mem)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/sync/ubuntu/PaperRobot-master/New paper writing/memory_generator/utils.py", line 32, in forward
    e_t = self.vt_layers[i](torch.tanh(enc_proj + dec_proj).view(batch_size * max_enc_len, -1))
RuntimeError: [enforce fail at CPUAllocator.cpp:56] posix_memalign(&data, gAlignment, nbytes) == 0. 12 vs 0
```
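The "12 vs 0" in the enforce message is the return code of posix_memalign compared against the expected 0; error code 12 is ENOMEM ("Cannot allocate memory"), which is why this is a host (CPU) out-of-memory failure rather than a CUDA one. A quick way to check the code from Python:

```python
# Error code 12 returned by posix_memalign is ENOMEM on Linux:
# the host allocator could not provide the requested memory.
import errno
import os

print(errno.ENOMEM)               # 12
print(os.strerror(errno.ENOMEM))  # "Cannot allocate memory"
```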