Hi Prithiviraj, thank you for the great work!
Is it possible to run this model on batches of input sentences so that we can make much better use of the GPU? At the moment, setting use_gpu to True doesn't yield much of a performance gain because we're not parallelizing across input phrases. Unless I missed something in the source code, in which case please let me know (this would also be worth emphasizing in the documentation, at least for my use case and surely for many others trying to paraphrase datasets of 1M+ phrases).
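In case it helps, here is a minimal sketch of how batched paraphrasing could work by loading the underlying Hugging Face checkpoint directly and generating over padded batches, instead of calling the library one phrase at a time. The checkpoint name, the "paraphrase: " input prefix, and the decoding parameters are assumptions on my part, not the library's documented API, so adjust them to match the actual model.

```python
# A rough sketch, not the library's API: batch sentences through the
# (assumed) T5 paraphrasing checkpoint so the GPU processes many phrases at once.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "prithivida/parrot_paraphraser_on_T5"  # assumed checkpoint name

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME).to(device).eval()

def paraphrase_batch(phrases, batch_size=32, max_length=64):
    """Paraphrase a list of phrases in GPU-sized batches."""
    results = []
    for start in range(0, len(phrases), batch_size):
        chunk = phrases[start:start + batch_size]
        # Prefixing with "paraphrase: " is an assumption about how the model was trained.
        inputs = tokenizer(
            ["paraphrase: " + p for p in chunk],
            return_tensors="pt",
            padding=True,
            truncation=True,
            max_length=max_length,
        ).to(device)
        with torch.no_grad():
            outputs = model.generate(
                **inputs,
                max_length=max_length,
                do_sample=True,
                top_k=50,
                top_p=0.95,
                num_return_sequences=1,
            )
        results.extend(tokenizer.batch_decode(outputs, skip_special_tokens=True))
    return results

print(paraphrase_batch(["Can you recommend some upscale restaurants in New York?"]))
```

With something like this, throughput scales with batch_size up to GPU memory limits, which matters a lot at the 1M+ phrase scale, although it bypasses whatever filtering and ranking the library applies on top of raw generation.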