
ValueError: only one element tensors can be converted to Python scalars #5

Chloe-gra opened this issue Feb 1, 2023 · 4 comments


@Chloe-gra

```
/home/xueying/enter/envs/vil3dref3/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
  warnings.warn('Was asked to gather along dimension 0, but all '
Traceback (most recent call last):
  File "train.py", line 355, in <module>
    main(args)
  File "train.py", line 199, in main
    val_log = validate(model, model_cfg, val_dataloader)
  File "/home/xueying/enter/envs/vil3dref3/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "train.py", line 286, in validate
    loss_dict = {'loss/%s'%lk: lv.data.item() for lk, lv in losses.items()}
  File "train.py", line 286, in <dictcomp>
    loss_dict = {'loss/%s'%lk: lv.data.item() for lk, lv in losses.items()}
ValueError: only one element tensors can be converted to Python scalars
```

This error occurs when training the teacher model with ground-truth object labels.
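A likely cause, judging from the UserWarning in the log, is that `nn.DataParallel` gathers the per-GPU losses into one-dimensional vectors, so `lv.data.item()` no longer sees a scalar. Assuming that reading is correct, a minimal sketch of a workaround at train.py line 286 is to reduce each tensor before calling `.item()`:

```python
# Sketch of a possible workaround at train.py line 286, assuming `losses` holds
# per-GPU loss vectors gathered by nn.DataParallel: average each one back to a scalar.
loss_dict = {'loss/%s' % lk: lv.data.mean().item() for lk, lv in losses.items()}
```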

@eslambakr

Hi @Chloe-gra,
I am facing the same error. Did you solve it?
Thanks in advance!

@eslambakr

I solved it by disabling DataParallel: just add `and False` to the condition on line 53 of utils/misc.py, i.e. change it to `elif torch.cuda.device_count() > 1 and False:`.
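For reference, here is a sketch of what the edited condition looks like in context; the surrounding device-selection code is my assumption about utils/misc.py, not a verbatim copy:

```python
import torch

# Assumed shape of the device-selection logic in utils/misc.py (not verbatim).
if not torch.cuda.is_available():
    device = torch.device('cpu')
    n_gpu = 0
elif torch.cuda.device_count() > 1 and False:  # 'and False' makes this branch unreachable,
    # so the model is never wrapped in nn.DataParallel and each loss stays a
    # 0-dim tensor on which .item() works.
    device = torch.device('cuda')
    n_gpu = torch.cuda.device_count()
else:
    device = torch.device('cuda:0')
    n_gpu = 1
```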

@eslambakr

Another solution: set `local_rank` to a specific GPU id instead of -1,
e.g. `local_rank=0` instead of `local_rank=-1`.
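Assuming train.py exposes this through an argparse flag (I have not checked the exact flag name in this repo), the change would look roughly like:

```python
import argparse

parser = argparse.ArgumentParser()
# Hypothetical flag: default to GPU 0 instead of -1 so the single-GPU code path
# is taken and losses are never gathered across devices.
parser.add_argument('--local_rank', type=int, default=0)  # was: default=-1
```

Equivalently, pass `--local_rank 0` on the command line if such a flag exists.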

@iris0329

You can try setting CUDA_VISIBLE_DEVICES=0 when launching training. It solved this error in my case.
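For reference, the same restriction can be applied from inside the script by setting the variable before CUDA is initialized (a generic sketch, not code from this repo):

```python
import os

# Hide all GPUs except GPU 0 from PyTorch. This must run before CUDA is
# initialized (safest: before importing torch), so torch.cuda.device_count()
# returns 1 and the DataParallel branch is never taken.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

import torch
print(torch.cuda.device_count())  # expected: 1
```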
