Abstract
In multilingual language model pre-training, choosing a vocabulary size for languages with a large inventory of possible characters remains an open problem. We propose two algorithms, applicable to any unsupervised multilingual pre-training task, that make the vocabulary budget of Byte-Pair-Encoding-inspired tokenizers more elastic and significantly reduce the cost of supporting Korean in a multilingual model.
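The core idea named in the title is to represent Korean text at the subcharacter (jamo) level before a BPE-style tokenizer builds its merges, so that all 11,172 precomposed Hangul syllables are covered by a few dozen jamo symbols. The sketch below is a minimal illustration of that representation using the standard Unicode Hangul decomposition arithmetic; it is not the authors' exact algorithm, and the function and variable names are ours.

```python
# Minimal sketch of the jamo (subcharacter) representation underlying
# Jamo Pair Encoding, assuming the standard Unicode Hangul decomposition;
# names are illustrative, not taken from the paper.

S_BASE, S_COUNT = 0xAC00, 11172                       # precomposed syllable block
CHO = [chr(0x1100 + i) for i in range(19)]            # leading consonants (choseong)
JUNG = [chr(0x1161 + i) for i in range(21)]           # vowels (jungseong)
JONG = [""] + [chr(0x11A8 + i) for i in range(27)]    # optional trailing consonants (jongseong)

def to_jamo(text: str) -> str:
    """Replace each precomposed Hangul syllable with its conjoining jamo."""
    out = []
    for ch in text:
        idx = ord(ch) - S_BASE
        if 0 <= idx < S_COUNT:
            cho, rem = divmod(idx, 21 * 28)
            jung, jong = divmod(rem, 28)
            out.append(CHO[cho] + JUNG[jung] + JONG[jong])
        else:
            out.append(ch)                # non-Hangul characters pass through
    return "".join(out)

# "한국어" (3 syllables) becomes a sequence of 8 jamo; a BPE-style tokenizer
# then learns merges over this much smaller symbol inventory.
assert len(to_jamo("한국어")) == 8
```

Equivalently, `unicodedata.normalize("NFD", text)` performs the same syllable-to-jamo decomposition and `"NFC"` recomposes the syllables, which is convenient when detokenizing.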
- Anthology ID: 2020.lrec-1.429
- Volume: Proceedings of the Twelfth Language Resources and Evaluation Conference
- Month: May
- Year: 2020
- Address: Marseille, France
- Editors: Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
- Venue: LREC
- Publisher: European Language Resources Association
- Pages: 3490–3497
- Language: English
- URL: https://aclanthology.org/2020.lrec-1.429
- Cite (ACL): Sangwhan Moon and Naoaki Okazaki. 2020. Jamo Pair Encoding: Subcharacter Representation-based Extreme Korean Vocabulary Compression for Efficient Subword Tokenization. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 3490–3497, Marseille, France. European Language Resources Association.
- Cite (Informal): Jamo Pair Encoding: Subcharacter Representation-based Extreme Korean Vocabulary Compression for Efficient Subword Tokenization (Moon & Okazaki, LREC 2020)
- PDF: https://preview.aclanthology.org/proper-vol2-ingestion/2020.lrec-1.429.pdf