Itay Itzhak


2022

Models In a Spelling Bee: Language Models Implicitly Learn the Character Composition of Tokens
Itay Itzhak | Omer Levy
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Standard pretrained language models operate on sequences of subword tokens without direct access to the characters that compose each token's string representation. We probe the embedding layer of pretrained language models and show that models learn the internal character composition of whole word and subword tokens to a surprising extent, without ever seeing the characters coupled with the tokens. Our results show that the embedding layers of RoBERTa and GPT2 each hold enough information to accurately spell up to a third of the vocabulary and reach high character ngram overlap across all token types. We further test whether enriching subword models with character information can improve language modeling, and observe that this method has a near-identical learning curve as training without spelling-based enrichment. Overall, our results suggest that language modeling objectives incentivize the model to implicitly learn some notion of spelling, and that explicitly teaching the model how to spell does not appear to enhance its performance on such tasks.
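The sketch below illustrates, in spirit, what probing a pretrained embedding layer for spelling information can look like; it is not the paper's exact setup. It trains a linear probe to predict the first character of each alphabetic vocabulary token from its static input embedding in RoBERTa. The probe architecture, the choice of first-character prediction, and the train/test split are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' exact method): probe RoBERTa's
# static input embeddings for character information by predicting each token's
# first character with a linear classifier.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

model_name = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# One static input embedding per vocabulary token: (vocab_size, hidden_size).
embeddings = model.get_input_embeddings().weight.detach()

# Build (token id, first-character label) pairs for ASCII-alphabetic tokens.
pairs = []
for token, idx in tokenizer.get_vocab().items():
    text = tokenizer.convert_tokens_to_string([token]).strip().lower()
    if text.isascii() and text.isalpha():
        pairs.append((idx, ord(text[0]) - ord("a")))

ids = torch.tensor([i for i, _ in pairs])
labels = torch.tensor([c for _, c in pairs])
X = embeddings[ids]

# Random 80/20 train/test split over vocabulary tokens (an assumption).
perm = torch.randperm(len(ids))
split = int(0.8 * len(ids))
train_idx, test_idx = perm[:split], perm[split:]

# Linear probe: embedding -> 26 character classes.
probe = nn.Linear(X.shape[1], 26)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(probe(X[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = (probe(X[test_idx]).argmax(-1) == labels[test_idx]).float().mean()
print(f"first-character probe accuracy: {acc:.2%}")
```

Accuracy well above the majority-class baseline on held-out tokens would indicate that the embeddings encode character information, which is the kind of signal the paper's probing experiments measure at the level of full spellings and character n-grams.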