Splintering Nonconcatenative Languages for Better Tokenization

Bar Gazit, Shaltiel Shmidman, Avi Shmidman, Yuval Pinter


Abstract
Common subword tokenization algorithms like BPE and UnigramLM assume that text can be split into meaningful units by concatenative measures alone. This is not true for languages such as Hebrew and Arabic, where morphology is encoded in root-template patterns, or Malay and Georgian, where split affixes are common. We present SPLINTER, a pre-processing step which rearranges text into a linear form that better represents such nonconcatenative morphologies, enabling meaningful contiguous segments to be found by the tokenizer. We demonstrate SPLINTER’s merit using both intrinsic measures evaluating token vocabularies in Hebrew, Arabic, and Malay, and downstream tasks using BERT-architecture models trained for Hebrew.
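To make the motivation concrete, here is a toy sketch of the general idea of linearizing root-template morphology — moving the root consonants into a contiguous span so a concatenative tokenizer can find them. This is an illustration only, not the paper's SPLINTER algorithm; the function name, the transliterated example, and the assumption that the root positions are already known are all hypothetical.

```python
# Toy illustration (NOT the paper's algorithm): given a word whose root
# consonants are interleaved with a template, emit the root letters as a
# contiguous prefix, followed by the remaining template characters in order.
def linearize(word: str, root_positions: list[int]) -> str:
    """Rearrange `word` so the characters at `root_positions` become
    one contiguous segment, keeping the other characters' order."""
    root = "".join(word[i] for i in root_positions)
    template = "".join(c for i, c in enumerate(word) if i not in root_positions)
    return root + template

# Hebrew 'hiktiv' ("he dictated"), transliterated: root k-t-v is
# interleaved with the template hi__i_.
print(linearize("hiktiv", [2, 3, 5]))  # -> "ktvhii"
```

After such a rearrangement, words sharing a root (e.g. 'katav', 'hiktiv', 'miktav') all begin with the same contiguous substring, which standard BPE or UnigramLM merges can then capture.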
Anthology ID:
2025.findings-acl.1151
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
22405–22417
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.1151/
Cite (ACL):
Bar Gazit, Shaltiel Shmidman, Avi Shmidman, and Yuval Pinter. 2025. Splintering Nonconcatenative Languages for Better Tokenization. In Findings of the Association for Computational Linguistics: ACL 2025, pages 22405–22417, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Splintering Nonconcatenative Languages for Better Tokenization (Gazit et al., Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.1151.pdf