indices should be either on cpu or on the same device as the indexed tensor (cpu) #14

yinyjin opened this issue Mar 27, 2023 · 4 comments



yinyjin commented Mar 27, 2023

When I train the model, I get the following error:

return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Traceback (most recent call last):
  File "main.py", line 345, in <module>
    main(args)
  File "main.py", line 295, in main
    model, criterion, data_loader_train, optimizer, device, epoch, args.clip_max_norm)
  File "/hdd/jy/code/DETA/engine.py", line 43, in train_one_epoch
    loss_dict = criterion(outputs, targets)
  File "/home/jinying/miniconda3/envs/deta/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd/jy/code/DETA/models/deformable_detr.py", line 398, in forward
    indices = self.stg1_assigner(enc_outputs, bin_targets)
  File "/home/jinying/miniconda3/envs/deta/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd/jy/code/DETA/models/assigner.py", line 326, in forward
    pos_pr_inds = all_pr_inds[matched_labels == 1]
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

Traceback (most recent call last):
  File "./tools/launch.py", line 192, in <module>
    main()
  File "./tools/launch.py", line 188, in main
    cmd=process.args)
subprocess.CalledProcessError: Command '['./configs/deta.sh', '--coco_path', '/hdd/jy/code/data/coco2017']' returned non-zero exit status 1.

Why does this happen? Is the problem in the code or in my training environment?

@jozhang97 (Owner)

Maybe you do not have access to a GPU?
If you want to run the code without a GPU, you will need to modify the code to remove any .cuda() or .to(device) calls.
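For example, a common device-agnostic pattern looks like this (a minimal sketch, not DETA's actual code; model and batch are placeholders):

import torch

# Pick the device once and route every module and tensor through it;
# this runs on CPU-only machines as well as on GPUs.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)   # instead of model.cuda()
batch = batch.to(device)   # instead of batch.cuda()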

@AndreaDraperis99

I have the same issue, and when I run torch.cuda.is_available() it returns True, so I don't think that is the problem.

@AndreaDraperis99

I resolved it. You need to go to the specific part of the code and move the tensor you index with onto the GPU, like this:
all_pr_inds = all_pr_inds.cuda()
The error occurs because matched_labels is on the GPU while all_pr_inds is not.
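Concretely, a minimal sketch of that fix just before the failing line in models/assigner.py (line 326 in the traceback above; the surrounding code may differ):

# matched_labels lives on the GPU while all_pr_inds was built on the CPU;
# boolean-mask indexing requires both tensors to be on the same device.
all_pr_inds = all_pr_inds.cuda()
pos_pr_inds = all_pr_inds[matched_labels == 1]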


Edenzzzz commented Jun 6, 2023

> I resolved it. You need to go to the specific part of the code and move the tensor you index with onto the GPU, like this: all_pr_inds = all_pr_inds.cuda(). The error occurs because matched_labels is on the GPU while all_pr_inds is not.

Yes, it's a bug in the code.
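A slightly more portable variant of the same fix (a sketch; .to(...) follows whichever device matched_labels is on, so it also keeps CPU-only runs working):

all_pr_inds = all_pr_inds.to(matched_labels.device)
pos_pr_inds = all_pr_inds[matched_labels == 1]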
