Learn Your Tokens: Word-Pooled Tokenization for Language Modeling

Avijit Thawani, Saurabh Ghanekar, Xiaoyuan Zhu, Jay Pujara


Abstract
Language models typically tokenize text into subwords, using a deterministic, hand-engineered heuristic that combines characters into longer surface-level strings such as ‘ing’ or whole words. Recent literature has repeatedly shown the limitations of such a tokenization strategy, particularly for documents not written in English and for representing numbers. At the other extreme, byte/character-level language models are much less restricted but suffer from increased sequence lengths and a consequent quadratic expansion in self-attention computation. Recent attempts to compress and limit these context lengths with fixed-size convolutions are helpful but completely ignore the word boundary. This paper considers an alternative ‘learn your tokens’ scheme which uses the word boundary to pool bytes/characters into word representations that are fed to the primary language model, before individual characters/bytes are again decoded per word in parallel. We find that our moderately expressive and moderately fast end-to-end tokenizer outperforms both subwords and byte/character models by over 300% on the intrinsic language modeling metric of next-word prediction across datasets. It particularly shines on rare words, outperforming by a factor of 30! We extensively study the language modeling setup for all three categories of tokenizers and theoretically analyze how our end-to-end models can also offer a strong trade-off in efficiency and robustness.
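To make the word-boundary pooling idea concrete, below is a minimal, hypothetical sketch in Python/PyTorch of pooling byte embeddings into one vector per word. The class name ByteToWordPooler, the mean-pooling choice, and the dimensions are illustrative assumptions for exposition, not the authors' exact architecture.

# Hypothetical sketch: pool byte embeddings into per-word vectors using
# whitespace-derived word boundaries. Mean pooling and all names here are
# illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn as nn

class ByteToWordPooler(nn.Module):
    def __init__(self, vocab_size=256, d_model=512):
        super().__init__()
        self.byte_emb = nn.Embedding(vocab_size, d_model)

    def forward(self, byte_ids, word_ids):
        # byte_ids: (seq_len,) raw byte values of the text
        # word_ids: (seq_len,) index of the word each byte belongs to
        h = self.byte_emb(byte_ids)                       # (seq_len, d_model)
        num_words = int(word_ids.max().item()) + 1
        pooled = torch.zeros(num_words, h.size(-1))
        pooled.index_add_(0, word_ids, h)                 # sum byte vectors per word
        counts = torch.bincount(word_ids, minlength=num_words).clamp(min=1)
        return pooled / counts.unsqueeze(-1)              # mean-pool per word

text = "learn your tokens"
byte_ids = torch.tensor(list(text.encode("utf-8")))
# Assign each byte a word index using spaces as boundaries
# (the space byte itself is grouped with the following word here).
word_ids, w = [], 0
for b in text.encode("utf-8"):
    if b == ord(" "):
        w += 1
    word_ids.append(w)
word_ids = torch.tensor(word_ids)

pooler = ByteToWordPooler()
word_vectors = pooler(byte_ids, word_ids)  # one vector per word, fed to the main LM
print(word_vectors.shape)                  # torch.Size([3, 512])

In the paper's scheme these per-word vectors go to the primary language model, and a light decoder then predicts the characters/bytes of each word in parallel; the sketch above covers only the pooling step.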
Anthology ID:
2023.findings-emnlp.662
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9883–9893
URL:
https://aclanthology.org/2023.findings-emnlp.662
DOI:
10.18653/v1/2023.findings-emnlp.662
Cite (ACL):
Avijit Thawani, Saurabh Ghanekar, Xiaoyuan Zhu, and Jay Pujara. 2023. Learn Your Tokens: Word-Pooled Tokenization for Language Modeling. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9883–9893, Singapore. Association for Computational Linguistics.
Cite (Informal):
Learn Your Tokens: Word-Pooled Tokenization for Language Modeling (Thawani et al., Findings 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2023.findings-emnlp.662.pdf