Abstract
When deriving contextualized word representations from language models, a decision needs to be made on how to obtain one for out-of-vocabulary (OOV) words that are segmented into subwords. What is the best way to represent these words with a single vector, and are these representations of worse quality than those of in-vocabulary words? We carry out an intrinsic evaluation of embeddings from different models on semantic similarity tasks involving OOV words. Our analysis reveals, among other interesting findings, that the quality of representations of words that are split is often, but not always, worse than that of the embeddings of known words. Their similarity values, however, must be interpreted with caution.
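The question posed above, how to collapse the several subword vectors of a split word into a single representation, is typically addressed by pooling over the word's subword tokens. The sketch below is a minimal illustration of one common option, mean pooling over the last hidden layer of a Hugging Face BERT model; the model name, example sentence, and pooling choice are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal sketch: derive one contextualized vector for a word that the
# tokenizer splits into several subwords, by mean-pooling its subword states.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "The mellifluous melody filled the room."
target_word = "mellifluous"  # likely OOV for the vocabulary, so it gets split

encoding = tokenizer(sentence, return_tensors="pt")

# Find which subword positions belong to the target word. word_ids() maps each
# token position to the index of its source word (None for special tokens).
# Note: this simple whitespace/punctuation alignment is a simplification.
words = sentence.replace(".", "").split()
target_index = words.index(target_word)
token_positions = [
    i for i, wid in enumerate(encoding.word_ids(batch_index=0)) if wid == target_index
]

with torch.no_grad():
    hidden_states = model(**encoding).last_hidden_state  # (1, seq_len, hidden_dim)

# Mean-pool the subword vectors into one word representation. Other strategies
# (e.g., taking only the first or the last subword) are possible alternatives.
word_vector = hidden_states[0, token_positions].mean(dim=0)
print(word_vector.shape)  # torch.Size([768])
```

Such pooled vectors can then be compared with cosine similarity on word similarity benchmarks, which is the kind of intrinsic evaluation the abstract refers to.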
- Anthology ID: 2024.tacl-1.17
- Volume: Transactions of the Association for Computational Linguistics, Volume 12
- Year: 2024
- Address: Cambridge, MA
- Venue: TACL
- Publisher: MIT Press
- Pages: 299–320
- URL: https://aclanthology.org/2024.tacl-1.17
- DOI: 10.1162/tacl_a_00647
- Cite (ACL): Aina Garí Soler, Matthieu Labeau, and Chloé Clavel. 2024. The Impact of Word Splitting on the Semantic Content of Contextualized Word Representations. Transactions of the Association for Computational Linguistics, 12:299–320.
- Cite (Informal): The Impact of Word Splitting on the Semantic Content of Contextualized Word Representations (Garí Soler et al., TACL 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-5/2024.tacl-1.17.pdf