The original BERT was trained with raw text, and punctuation marks were generally seen attached to words. In emBERT, we take the output of emToken, so punctuation marks are tokens in their own right. This discrepancy might affect performance.
- Check if this is really the case. The basic tokenization procedure does split punctuation from the end of words, so the problem might not be as acute as it seems at first sight.
- Merge punctuation tokens with the preceding words before sending them to the BERT model.
- Alternatively, skip emToken altogether?
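For reference, BERT's BasicTokenizer splits punctuation off at the character level, so `word.` becomes `word` and `.` before WordPiece is applied. Below is a minimal sketch of that splitting step (a simplified illustration, not the actual BERT implementation, which also handles ASCII symbols and whitespace cleaning):

```python
import unicodedata

def split_punctuation(token):
    """Split a token at punctuation characters, roughly mimicking
    the punctuation-splitting step of BERT's BasicTokenizer."""
    pieces, current = [], []
    for ch in token:
        # Any Unicode punctuation character becomes a token on its own
        if unicodedata.category(ch).startswith("P"):
            if current:
                pieces.append("".join(current))
                current = []
            pieces.append(ch)
        else:
            current.append(ch)
    if current:
        pieces.append("".join(current))
    return pieces

print(split_punctuation("word."))  # → ['word', '.']
```

If this matches what the tokenizer does in practice, emToken's punctuation tokens would end up in the same form as raw-text input after basic tokenization, which supports the "not as acute as it seems" reading above.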