Hey everyone,
I have a question concerning the extraction of the contextual embedding of a specific word in a sentence: should one mask that word in order to make the embedding truly contextual? During training the model also learns to simply reconstruct what it sees (with a small probability a word is left unchanged rather than masked, and the model only has to copy it), so I thought that without masking the word of interest, contextual information might get lost. I sketched the two variants I mean below.
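For concreteness, here is a minimal sketch of the two variants using the HuggingFace `transformers` API (the model name, example sentence, and the assumption that the target word is a single word piece are just placeholders; the extraction code in this repo may differ):

```python
import torch
from transformers import BertTokenizer, BertModel

# Placeholder checkpoint; any BERT model would do.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "The bank raised interest rates."
target_word = "bank"

# Tokenize and locate the target token (assumes it is a single word piece).
inputs = tokenizer(sentence, return_tensors="pt")
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
target_idx = tokens.index(target_word)

# Variant 1: leave the word in place and take its hidden state.
with torch.no_grad():
    outputs = model(**inputs)
embedding_unmasked = outputs.last_hidden_state[0, target_idx]

# Variant 2: replace the word with [MASK], so the representation at that
# position is built purely from the surrounding context.
masked_ids = inputs["input_ids"].clone()
masked_ids[0, target_idx] = tokenizer.mask_token_id
with torch.no_grad():
    masked_outputs = model(input_ids=masked_ids,
                           attention_mask=inputs["attention_mask"])
embedding_masked = masked_outputs.last_hidden_state[0, target_idx]
```

My question is essentially which of `embedding_unmasked` and `embedding_masked` is the "right" contextual embedding to use.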
Thanks in advance,
Best,
Paul