RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.IntTensor instead (while checking arguments for embedding) #2952
Comments
It is weird that there is a discrepancy between Windows and Linux. Could you try casting your variables to Long? Are you defining some of your variables on GPU? Does it fail if everything stays on CPU?
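One plausible explanation for the Windows/Linux split, assuming the inputs were built from NumPy arrays (an assumption, since the original snippet isn't shown): older NumPy builds default to 32-bit integers on Windows but 64-bit on Linux, so the same code yields an IntTensor on one platform and a LongTensor on the other. A minimal sketch:

```python
import numpy as np
import torch

# np.array of Python ints uses the platform default integer type:
# historically int32 on Windows, int64 on Linux.
ids = torch.from_numpy(np.array([[101, 2054, 102]]))
print(ids.dtype)  # torch.int32 on older Windows builds, torch.int64 on Linux

# Casting to Long makes the behaviour identical on both platforms:
ids = ids.long()
print(ids.dtype)  # torch.int64
```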
I often prototype on Windows and push to Linux for final processing and I've never had this issue. Can you post a minimal working example that I can copy-paste to test?
OK, update: I got the error to go away, but to do it I had to apply some janky fixes that I don't think should be necessary.
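A sketch of that kind of cast-everything workaround (the tensor names and shapes here are hypothetical stand-ins, not the poster's actual code):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-ins for a real batch; int32 mimics what the poster saw on Windows.
b_input_ids = torch.randint(0, 30522, (8, 64), dtype=torch.int32)
b_input_mask = torch.ones(8, 64, dtype=torch.int32)

# The "janky fix": force int64 (Long) before anything reaches the embedding layer.
b_input_ids = b_input_ids.long().to(device)
b_input_mask = b_input_mask.long().to(device)
```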
You're doing
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Had a similar issue:
Having the same issue; the funny thing is that the whole model worked for training, but the error showed up while running inference on the test data.
I am facing exactly the same issue. I am using an Amazon SageMaker notebook instance.
Hi, I'm working with transformers version 4.4.2 and getting this error when not passing in position_ids explicitly. Of course, you can work around it by passing in your own, cast with position_ids = position_ids.to(torch.long).
Hi, I have met the same problem, just because I used torch.Tensor(). When I checked, I changed it to torch.tensor() and it's OK.
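The difference is easy to demonstrate: torch.Tensor() is a constructor that always produces float32, while torch.tensor() infers the dtype from its data, so integer ids come out as int64 (Long). A quick sketch:

```python
import torch

ids = [101, 2054, 102]

t_constructor = torch.Tensor(ids)  # always float32 -- invalid as embedding indices
t_factory = torch.tensor(ids)      # infers int64 (Long) from the integer data

print(t_constructor.dtype)  # torch.float32
print(t_factory.dtype)      # torch.int64
```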
@doris-art

```python
# A bug in transformers 4.4.2 requires this workaround:
# https://github.com/huggingface/transformers/issues/2952
input_ids = params['input_ids']               # params: kwargs for the forward call
seq_length = input_ids.size()[1]
position_ids = model.embeddings.position_ids  # the registered position-ids buffer
position_ids = position_ids[:, 0:seq_length].to(torch.long)
params['position_ids'] = position_ids
```
I had the same issue in the past. After checking the many issues filed for this error, I did some reverse engineering and found that my input was going into model training empty.
🐛 Bug
Issue
Hi everyone, when I run the line:
with the model defined as,
it returns the stated error. However, this only happens on my Windows computer; when I run the exact same code on Linux, with the same Python version and libraries, it works perfectly fine.
I have the most up-to-date versions of PyTorch (1.4) and transformers installed.
Any help would be greatly appreciated.
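For reference, a minimal sketch of the failing pattern under these assumptions (the checkpoint and inputs are illustrative, the snippet is written against the current transformers API, and the hard failure is specific to PyTorch builds of that era, around 1.4):

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

encoded = tokenizer("Hello, world!", return_tensors="pt")
input_ids = encoded["input_ids"].int()  # force int32 to mimic the Windows behaviour

# outputs = model(input_ids=input_ids)       # on PyTorch ~1.4 this raises:
#   RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long
outputs = model(input_ids=input_ids.long())  # the .long() cast avoids it
```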
Information
Using the latest versions of PyTorch and transformers
Model I am using (Bert, XLNet ...): BertForSequenceClassification
Language I am using the model on (English, Chinese ...): English