device not same problem. #3

Closed · RuqiBai opened this issue Apr 18, 2020 · 3 comments

RuqiBai commented Apr 18, 2020

Hello Harry,
Thanks for your code.
Here's the situation I'm running into:
My server has two GPUs, cuda:0 and cuda:1. If I move my model to cuda:1, the attack moves it back to cuda:0 instead.

The relevant code is in torchattacks/attack.py, line 20:
self.device = torch.device("cuda" if next(model.parameters()).is_cuda else "cpu")

Would it be okay to change it to the following?
self.device = next(model.parameters()).device

I tested this with torch==1.4.0, and it keeps my model and my datasets on the same device.
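
To illustrate the difference, here is a minimal sketch (it assumes a machine with at least two GPUs; nn.Linear is just a stand-in for the real model):

    # Minimal sketch of the difference, assuming a machine with >= 2 GPUs.
    import torch
    import torch.nn as nn

    model = nn.Linear(4, 2).to("cuda:1")  # stand-in model placed on the second GPU

    # Current logic: any CUDA device collapses to the plain "cuda" device,
    # so new tensors land on the default GPU (cuda:0) by default.
    device_old = torch.device("cuda" if next(model.parameters()).is_cuda else "cpu")
    print(device_old)  # cuda

    # Proposed change: keep the exact device (including the index) of the parameters.
    device_new = next(model.parameters()).device
    print(device_new)  # cuda:1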
Thanks a lot.

Ruqi Bai

Harry24k (Owner) commented:

Since I only use one GPU, I hadn't thought about this problem at all.
I modified the code as you suggested and confirmed that the demos run without errors.
Please check version 1.2!
Thank you.
Harry

RuqiBai (Author) commented Apr 19, 2020

Hi,
Thanks for your quick reply and update. The same problem appears in torchattacks.py, line 30.
I can't find this file in the repository's source code, but it does exist in my installed package (I installed the package via pip).

Harry24k (Owner) commented:

Sorry, I mixed it up with a previous version when uploading it.
Please upgrade once again to version 1.3.
Thanks a lot.
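
After upgrading, a quick way to confirm that the attack picks up the model's device (a rough sketch; resnet18 is just a stand-in model, and PGD with its default hyperparameters is assumed here):

    # Sketch: verify the fix after upgrading to torchattacks >= 1.3.
    # resnet18 is only a stand-in; any nn.Module placed on cuda:1 should behave the same.
    import torch
    import torchvision
    import torchattacks

    model = torchvision.models.resnet18().to("cuda:1").eval()
    atk = torchattacks.PGD(model)  # PGD with default hyperparameters (assumed)
    print(atk.device)              # expected: cuda:1 (was cuda:0 before the fix)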
