Getting "RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor" #240
Comments
In the new version of PyTorch, the input parameter `lengths` of `pack_padded_sequence` must be a 1D CPU int64 tensor.
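For context, here is a minimal, self-contained sketch of that requirement (assuming PyTorch >= 1.7; the tensor names and shapes are illustrative, not the DeepCTR-Torch code):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

device = "cuda" if torch.cuda.is_available() else "cpu"

# A batch of 3 padded sequences, max length 4, embedding size 8.
padded = torch.randn(3, 4, 8, device=device)
lengths = torch.tensor([4, 2, 3], device=device)  # lengths end up on the GPU

# Since PyTorch 1.7, passing GPU lengths raises:
#   RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, ...
# Moving only the lengths to the CPU avoids the error; the sequence data
# itself can stay on the GPU.
packed = pack_padded_sequence(
    padded, lengths.cpu(), batch_first=True, enforce_sorted=False
)
```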
Obviously I tried that. But as I said, none of them worked, and I had to go all the way down to torch 1.4.0 to get it running.
Where did you apply the `.cpu()` conversion?
Yes, I did, as I mentioned below.
The lengths argument is the one I tried to move to the CPU, exactly as mruberry and ngimel suggested. That was also the first web page I found when I was trying to fix the problem.
Could you tell me the corresponding line numbers in the code? For example, deepctr_torch/models/dien.py, lines 220 to 221 (commit b4d8181). Did you apply it there, or also in other places such as deepctr_torch/models/dien.py, line 356 (commit b4d8181), or deepctr_torch/models/dien.py, line 365 (commit b4d8181)?
Could you print the device of the tensor before and after your `.cpu()` call?
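For example, a quick check along those lines (illustrative only; the stand-in tensor here mimics the `masked_keys_length` variable mentioned later in this thread):

```python
import torch

# Stand-in for the lengths tensor inside the model (assumed values/shape).
device = "cuda" if torch.cuda.is_available() else "cpu"
masked_keys_length = torch.tensor([4, 2, 3], device=device)

print("before:", masked_keys_length.device)   # e.g. cuda:0

# Note: .cpu() returns a new tensor; the result has to be reassigned (or
# passed directly to pack_padded_sequence), otherwise the original GPU
# tensor is still the one being used downstream.
masked_keys_length = masked_keys_length.cpu()
print("after:", masked_keys_length.device)    # cpu
```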
Hello. I have done this for all the pack_padded_sequence calls, for example masked_keys_length.cpu(). When I did this, it was converted to a CPU tensor, but the error was still there. For me, only downgrading the torch version worked. It is strange, though; that was the whole point of the question. It became a CPU tensor, but it still didn't work. Is it working on your side?
@Jeriousman I added the `.cpu()` conversion and it works on my side.
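For reference, a hedged sketch of a user-side workaround in the same spirit (the wrapper name and the tensor names are assumptions for illustration, not the actual DeepCTR-Torch change):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence


def pack_padded_sequence_cpu_lengths(inputs, lengths, **kwargs):
    """Thin wrapper that always hands pack_padded_sequence CPU lengths.

    Handy when pack_padded_sequence is called in several places and you
    want a single point of fix instead of editing every call site.
    """
    return pack_padded_sequence(inputs, lengths.cpu(), **kwargs)


# Usage with stand-in tensors (assumed names; any GPU lengths tensor
# behaves the same way):
device = "cuda" if torch.cuda.is_available() else "cpu"
keys_emb = torch.randn(3, 4, 8, device=device)
keys_length = torch.tensor([4, 2, 3], device=device)

packed = pack_padded_sequence_cpu_lengths(
    keys_emb, keys_length, batch_first=True, enforce_sorted=False
)
```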
* add multitask models
  1. Add multi-task models: SharedBottom, ESMM, MMOE, PLE
  2. Bugfix: #240 #232
* support python 3.9/3.10 (#259)
* fix: variable name typo (#257)

Co-authored-by: Jason Zan <[email protected]>
Co-authored-by: Yi-Xuan Xu <[email protected]>
Hi, can anyone tell me how to handle the same error on torch==1.8.0?
Describe the bug (问题描述)

```python
history = model.fit(x, y, batch_size=256, epochs=20, verbose=1, validation_split=0.4, shuffle=True)
```

When I run model.fit for the DIEN model with run_dien.py from your default examples, it works when I set the device to cpu, but with cuda I get the error below. I tried lengths.cpu() and lengths.to('cpu'), but neither of them solved the problem. Can you please provide a solution?
Operating environment (运行环境):