I tried to load the pretrained wordCNN/LSTM models and found that the embedding layer uses a 400K vocabulary with 200 embedding dimensions. It seems you used the Wikipedia-pretrained GloVe embeddings from https://nlp.stanford.edu/projects/glove/.
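For reference, here is a minimal sketch of how I checked the embedding shape from the released checkpoint. The file name is a placeholder, and I am assuming a standard PyTorch checkpoint (either a saved model or a plain state dict):

```python
import torch

# Placeholder path for the released wordCNN/LSTM checkpoint;
# it may hold a full model object or just a state_dict, so handle both.
ckpt = torch.load("wordCNN.pt", map_location="cpu")
state_dict = ckpt.state_dict() if hasattr(ckpt, "state_dict") else ckpt

for name, tensor in state_dict.items():
    if "emb" in name.lower():
        # Prints something like (400000, 200), which matches GloVe 6B.200d
        print(name, tuple(tensor.shape))
```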
However, you mentioned earlier in this reply that you used 10K and 20K vocabularies for the CNN and LSTM models.
Could you please clarify which vocab sizes were used for the published results in the paper?
If possible, could you also provide the 10K/20K word vocabularies used for these models (as you did for BERT)?
Thank you!
Cheers