An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers

Valentin Hofmann, Hinrich Schuetze, Janet Pierrehumbert


Abstract
We introduce FLOTA (Few Longest Token Approximation), a simple yet effective method to improve the tokenization of pretrained language models (PLMs). FLOTA uses the vocabulary of a standard tokenizer but tries to preserve the morphological structure of words during tokenization. We evaluate FLOTA on morphological gold segmentations as well as a text classification task, using BERT, GPT-2, and XLNet as example PLMs. FLOTA leads to performance gains, makes inference more efficient, and enhances the robustness of PLMs with respect to whitespace noise.
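The abstract only summarizes the method, so the following is a minimal Python sketch of a longest-token-first segmentation in the spirit of FLOTA, assuming a BERT-style WordPiece vocabulary with the "##" continuation prefix. The function names, toy vocabulary, placeholder character, and the value of k are illustrative assumptions, not the authors' exact code; the authoritative implementation is in the valentinhofmann/flota repository linked below.

    def flota_like_tokenize(word, vocab, k=3, special="\u2581"):
        """Greedily pick up to k longest vocabulary subwords of `word`,
        longest first, then emit them in left-to-right order."""
        found = {}  # start position -> matched subword
        remaining = word
        for _ in range(k):
            match = _longest_subword(remaining, vocab, special)
            if match is None:
                break
            subword, start, length = match
            found[start] = subword
            # Blank out the matched span so later passes skip it.
            remaining = remaining[:start] + special * length + remaining[start + length:]
        return [found[i] for i in sorted(found)]

    def _longest_subword(word, vocab, special):
        """Return the longest vocabulary subword of `word` (with '##' for
        non-initial pieces), its start index, and its length, or None."""
        for length in range(len(word), 0, -1):
            for start in range(0, len(word) - length + 1):
                piece = word[start:start + length]
                if special in piece:  # skip spans already consumed
                    continue
                candidate = piece if start == 0 else "##" + piece
                if candidate in vocab:
                    return candidate, start, length
        return None

    # Example with a toy vocabulary (not the real BERT vocabulary):
    # "undesirable" -> ['un', '##desir', '##able']
    toy_vocab = {"un", "##desir", "##able", "##ing", "desire"}
    print(flota_like_tokenize("undesirable", toy_vocab, k=3))

Because the longest matching piece is extracted first and the rest of the word is segmented around it, a segmentation such as un / ##desir / ##able is preferred over the left-to-right greedy splits a standard WordPiece tokenizer may produce, which is the intuition behind preserving morphological structure.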
Anthology ID:
2022.acl-short.43
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
385–393
URL:
https://aclanthology.org/2022.acl-short.43
DOI:
10.18653/v1/2022.acl-short.43
Cite (ACL):
Valentin Hofmann, Hinrich Schuetze, and Janet Pierrehumbert. 2022. An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 385–393, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers (Hofmann et al., ACL 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-1/2022.acl-short.43.pdf
Software:
2022.acl-short.43.software.zip
Video:
https://preview.aclanthology.org/nschneid-patch-1/2022.acl-short.43.mp4
Code:
valentinhofmann/flota