Abstract
The size of the vocabulary is a central design choice in large pretrained language models, with respect to both performance and memory requirements. Typically, subword tokenization algorithms such as byte pair encoding and WordPiece are used. In this work, we investigate the compatibility of tokenizations for multilingual static and contextualized embedding spaces and propose a measure that reflects the compatibility of tokenizations across languages. Our goal is to prevent incompatible tokenizations, e.g., “wine” (word-level) in English vs. “v i n” (character-level) in French, which make it hard to learn good multilingual semantic representations. We show that our compatibility measure allows the system designer to create vocabularies across languages that are compatible – a desideratum that so far has been neglected in multilingual models.

- Anthology ID: 2021.findings-emnlp.205
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2021
- Month: November
- Year: 2021
- Address: Punta Cana, Dominican Republic
- Venue: Findings
- SIG: SIGDAT
- Publisher: Association for Computational Linguistics
- Pages: 2382–2399
- URL: https://aclanthology.org/2021.findings-emnlp.205
- DOI: 10.18653/v1/2021.findings-emnlp.205
- Cite (ACL): Antonis Maronikolakis, Philipp Dufter, and Hinrich Schütze. 2021. Wine is not v i n. On the Compatibility of Tokenizations across Languages. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2382–2399, Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal): Wine is not v i n. On the Compatibility of Tokenizations across Languages (Maronikolakis et al., Findings 2021)
- PDF: https://preview.aclanthology.org/starsem-semeval-split/2021.findings-emnlp.205.pdf
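The incompatibility the abstract illustrates can be made concrete with a toy sketch: under a subword vocabulary with rich coverage of one language and none of another, the same concept is tokenized at word level in English but shattered to character level in French. The greedy longest-match tokenizer, the vocabulary, and the `granularity` ratio below are hypothetical illustrations of the phenomenon, not the compatibility measure proposed in the paper.

```python
# Toy illustration: why "wine" vs "v i n" are incompatible tokenizations.
# A subword vocabulary trained mostly on English may keep "wine" whole
# while it has no subwords covering French "vin".

def tokenize(word, vocab):
    """Greedy longest-match subword tokenization over a fixed vocabulary."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest substring starting at i that is in the vocabulary.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            # No match: fall back to a single character.
            tokens.append(word[i])
            i += 1
    return tokens

def granularity(tokens):
    """Crude granularity score: average characters per token."""
    return sum(len(t) for t in tokens) / len(tokens)

# Hypothetical vocabulary: good English coverage, no French subwords.
vocab = {"wine", "win", "ne"}

en = tokenize("wine", vocab)  # ['wine']         -> word-level
fr = tokenize("vin", vocab)   # ['v', 'i', 'n']  -> character-level

print(en, granularity(en))  # ['wine'] 4.0
print(fr, granularity(fr))  # ['v', 'i', 'n'] 1.0
```

The gap between the two granularity scores (4.0 vs. 1.0) is the kind of mismatch the paper's measure is designed to flag so that vocabularies can be chosen to avoid it.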