How Do Language Models Acquire Character-Level Information?

Soma Sato, Ryohei Sasano


Abstract
Language models (LMs) have been reported to implicitly encode character-level information, even though such information is not explicitly provided during training. However, the mechanisms underlying this phenomenon remain largely unexplored. To reveal these mechanisms, we analyze how models acquire character-level knowledge by comparing LMs trained under controlled settings, such as with a specified pre-training dataset or tokenizer, with those trained under standard settings. We categorize the contributing factors into those arising from tokenization and those independent of it. Our analysis reveals that merge rules and orthographic constraints constitute the primary factors arising from tokenization, whereas semantic associations of substrings and syntactic information function as key factors independent of tokenization.
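As a rough illustration of the kind of analysis the abstract refers to, the sketch below trains a linear probe to predict, from a token's embedding alone, whether its spelling contains a given character. The vocabulary, embeddings, and probe configuration are illustrative stand-ins and not the authors' setup; with a real LM, the embedding matrix would be taken from the model's input embedding layer, and the probe's accuracy would indicate how much character-level information those embeddings encode.

import string
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in vocabulary of random 5-letter "tokens" and random embeddings.
# In an actual probing experiment, `vocab` and `X` would come from a
# pre-trained LM's tokenizer and embedding matrix.
vocab = ["".join(rng.choice(list(string.ascii_lowercase), size=5)) for _ in range(2000)]
emb_dim = 64
X = rng.normal(size=(len(vocab), emb_dim))          # placeholder embeddings
y = np.array([int("a" in tok) for tok in vocab])    # label: does the token contain 'a'?

# Train a linear probe and report held-out accuracy for character presence.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"character-presence probe accuracy: {probe.score(X_test, y_test):.2f}")

With random placeholder embeddings the probe should stay near chance; meaningfully above-chance accuracy on real LM embeddings is the signal that the embeddings implicitly encode character-level information.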
Anthology ID:
2026.eacl-long.282
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
5987–5997
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.282/
Cite (ACL):
Soma Sato and Ryohei Sasano. 2026. How Do Language Models Acquire Character-Level Information?. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5987–5997, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
How Do Language Models Acquire Character-Level Information? (Sato & Sasano, EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.282.pdf