
How to generate in batches? #60

Open
AmanHaris opened this issue Feb 20, 2024 · 2 comments

Comments

@AmanHaris

Hi Prithiviraj, thank you for the great work!
Is it possible to run this model with batches of input sentences so that we can leverage using the GPU much better? At the moment, setting use_gpu to True doesn't achieve much performance gains because we're not parallelizing across input phrases. Unless I missed something in the source code, in which case please let me know (and this would be good instruction to better emphasize in the documentation, at least in my case and I'm sure for many others if they try using this model for paraphrasing phrases in the 1mil+ data sizes)

@PrithivirajDamodaran
Owner

Thanks for raising this. Very valid requirement. Will add to the list.

@zohebnsr

Batch processing is not supported as of now.
