
SentenceTransformer.encode() moves self (Module) to device #1806

Closed
piteren opened this issue Jan 8, 2023 · 2 comments

Comments


piteren commented Jan 8, 2023

When you call `SentenceTransformer.encode()`, it moves `self` (the `Module`) to `device`, which defaults to `None` (Line #153 of `SentenceTransformer.py`). I don't know whether this is the desired behaviour, but it confused me. Consider this scenario:

```python
model = SentenceTransformer()  # a device could be passed here at init, but suppose it is not
model.to('cuda:1')
model.encode('some text')  # encoded on cuda:0 if you have 2 GPUs
```
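The behaviour described above can be sketched without the library itself. The following is a minimal pure-Python illustration of the pattern: `encode()` consults a stored `_target_device` rather than the module's current device, so an earlier `.to()` call is silently overridden. `FakeSentenceTransformer` and its attributes are hypothetical stand-ins, not the library's actual code.

```python
class FakeSentenceTransformer:
    """Illustrative stand-in for the reported behaviour, not the real class."""

    def __init__(self, device=None):
        # When no device is given at init, a default is chosen
        # (mimicking the fallback described in the issue).
        self._target_device = device if device is not None else "cuda:0"
        self.current_device = "cpu"

    def to(self, device):
        # Moves the module, but does NOT update _target_device.
        self.current_device = device
        return self

    def encode(self, text, device=None):
        # encode() moves the module to `device or self._target_device`,
        # ignoring where .to() previously placed it.
        target = device if device is not None else self._target_device
        self.current_device = target
        return f"encoded on {target}"


model = FakeSentenceTransformer()
model.to("cuda:1")
print(model.encode("some text"))  # encoded on cuda:0, not cuda:1
```

Passing `device='cuda:1'` directly to `encode()` side-steps the problem in this sketch, at the cost of repeating the device at every call site.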

@tomaarsen
Collaborator

I ran into similar issues when moving models to other devices. My solution was to also update `model._target_device` when moving the model with `to()`.

See the following example:
https://github.com/tomaarsen/setfit/blob/14602ea2773f77b82243624ed1bca5e0772519e7/src/setfit/modeling.py#L442-L445
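The workaround above can be sketched in the same self-contained style: override `to()` so it also records the target device that `encode()` later consults. `PatchedModel` and its attributes are illustrative, not the actual setfit or sentence-transformers code.

```python
class PatchedModel:
    """Illustrative sketch of the workaround: keep _target_device in sync with to()."""

    def __init__(self, device=None):
        self._target_device = device if device is not None else "cuda:0"
        self.current_device = "cpu"

    def to(self, device):
        self.current_device = device
        self._target_device = device  # keep encode()'s default in sync
        return self

    def encode(self, text, device=None):
        target = device if device is not None else self._target_device
        self.current_device = target
        return f"encoded on {target}"


model = PatchedModel()
model.to("cuda:1")
print(model.encode("some text"))  # encoded on cuda:1, as expected
```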

@tomaarsen
Collaborator

This confusing behaviour has been updated in #2351, and will be included in the upcoming release.
