Hello!
I'm using PyABSA in an application where I have to do aspect term extraction and polarity classification for about 3,000 texts every 15 minutes. At the moment I'm using an NVIDIA L4, but it still takes about 30 minutes to process all the texts. Is there any way to speed up the inference process?
Maybe you can use a smaller maximum modeling length (e.g., 80) and a larger batch size (64 or 128).
You can also try fp16 precision using torch.cuda.amp.autocast().
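
A minimal sketch of how these suggestions might be combined, assuming the v1-style `ATEPCCheckpointManager` API. The checkpoint name, config attribute names (`max_seq_len`, `eval_batch_size`), and the sample texts are assumptions for illustration, so check the config object of your installed PyABSA version for the exact names:

```python
import torch
from pyabsa import ATEPCCheckpointManager  # v1-style API; may differ in your version

# Load a pretrained aspect extractor (checkpoint name is illustrative).
aspect_extractor = ATEPCCheckpointManager.get_aspect_extractor(
    checkpoint="multilingual",
    auto_device=True,  # use the GPU if one is available
)

# Shorter sequences and bigger batches reduce per-text overhead.
# These config keys are assumptions -- verify them against your version.
aspect_extractor.config.max_seq_len = 80
aspect_extractor.config.eval_batch_size = 128

texts = ["The battery life is great but the screen is dim."] * 3000  # placeholder data

# fp16 autocast trades a little numeric precision for higher throughput
# on GPUs with Tensor Cores such as the L4; inference_mode disables
# autograd bookkeeping entirely.
with torch.inference_mode(), torch.cuda.amp.autocast():
    results = aspect_extractor.extract_aspect(
        inference_source=texts,
        pred_sentiment=True,  # also return the polarity for each aspect
    )
```

If throughput still falls short after these changes, it may be worth profiling to confirm whether the time is spent in the model forward pass or in pre/post-processing, since autocast only helps with the former.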