Unsupervised Morphological Tree Tokenizer

Qingyang Zhu, Xiang Hu, Pengyu Ji, Wei Wu, Kewei Tu


Abstract
As a cornerstone in language modeling, tokenization involves segmenting text inputs into pre-defined atomic units. Conventional statistical tokenizers often disrupt constituent boundaries within words, thereby corrupting semantic information. To address this drawback, we introduce morphological structure guidance to tokenization and propose a deep model to induce character-level structures of words. Specifically, the deep model jointly encodes internal structures and representations of words with a mechanism named MorphOverriding to ensure the indecomposability of morphemes. By training the model with self-supervised objectives, our method is capable of inducing character-level structures that align with morphological rules without annotated training data. Based on the induced structures, our algorithm tokenizes words through vocabulary matching in a top-down manner. Empirical results indicate that the proposed method effectively retains complete morphemes and outperforms widely adopted methods such as BPE and WordPiece on both morphological segmentation tasks and language modeling tasks.
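The top-down tokenization step described in the abstract can be read as a recursive traversal of the induced binary tree over a word's characters: if the substring spanned by a node appears in the vocabulary, emit it as a single token; otherwise descend into its children. The sketch below is a minimal illustration under that reading only; the tree data structure, function names, and the fall-back behavior for unmatched leaves are assumptions for exposition, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Optional, Set


@dataclass
class Node:
    """A node in the induced binary tree, spanning a substring of the word."""
    text: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None


def tokenize(node: Node, vocab: Set[str]) -> List[str]:
    """Top-down vocabulary matching: emit a node's span as soon as it is
    found in the vocabulary; otherwise recurse into its children."""
    if node.text in vocab:
        return [node.text]
    if node.left is None or node.right is None:
        # Unmatched leaf: emit its span as-is (assumed fallback; a real
        # tokenizer might map this to a byte- or <unk>-level unit instead).
        return [node.text]
    return tokenize(node.left, vocab) + tokenize(node.right, vocab)


# Toy example: a hand-built character-level tree for "unlike".
tree = Node(
    "unlike",
    left=Node("un", Node("u"), Node("n")),
    right=Node(
        "like",
        Node("li", Node("l"), Node("i")),
        Node("ke", Node("k"), Node("e")),
    ),
)
print(tokenize(tree, {"un", "like"}))  # -> ['un', 'like']
```

Because matching stops at the highest node whose span is in the vocabulary, morphemes kept intact by the induced structure (here "un" and "like") are never split into smaller pieces.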
Anthology ID: 2025.findings-acl.1146
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 22299–22312
URL: https://preview.aclanthology.org/landing_page/2025.findings-acl.1146/
Cite (ACL): Qingyang Zhu, Xiang Hu, Pengyu Ji, Wei Wu, and Kewei Tu. 2025. Unsupervised Morphological Tree Tokenizer. In Findings of the Association for Computational Linguistics: ACL 2025, pages 22299–22312, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Unsupervised Morphological Tree Tokenizer (Zhu et al., Findings 2025)
PDF: https://preview.aclanthology.org/landing_page/2025.findings-acl.1146.pdf