EXECUTE: A Multilingual Benchmark for LLM Token Understanding

Lukas Edman, Helmut Schmid, Alexander Fraser


Abstract
The CUTE benchmark showed that LLMs struggle with character understanding in English. We extend it to more languages with diverse scripts and writing systems, introducing EXECUTE. Our simplified framework allows easy expansion to any language. Tests across multiple LLMs reveal that the challenges in other languages do not always lie at the character level as they do in English: some languages exhibit word-level processing issues, while others show no issues at all. We also examine sub-character tasks in Chinese, Japanese, and Korean to assess LLMs' understanding of character components.
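For illustration only (this sketch is not taken from the paper), a character-level probe in the style of CUTE/EXECUTE might ask a model to count the characters in a word and compare the reply against the true count; the prompt wording and task format below are assumptions:

    # Illustrative sketch of a CUTE-style character-level probe.
    # The task format and prompt wording are assumptions, not the
    # paper's actual benchmark prompts.
    def make_char_count_prompt(word: str) -> str:
        return f"How many characters are in the word '{word}'? Answer with a number."

    def check_answer(word: str, model_answer: str) -> bool:
        # Compare the model's reply against the true character count.
        return model_answer.strip() == str(len(word))

    prompt = make_char_count_prompt("Benchmark")
    print(prompt)                          # feed this to an LLM
    print(check_answer("Benchmark", "9"))  # True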
Anthology ID: 2025.findings-acl.95
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1878–1887
URL: https://preview.aclanthology.org/landing_page/2025.findings-acl.95/
Cite (ACL): Lukas Edman, Helmut Schmid, and Alexander Fraser. 2025. EXECUTE: A Multilingual Benchmark for LLM Token Understanding. In Findings of the Association for Computational Linguistics: ACL 2025, pages 1878–1887, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): EXECUTE: A Multilingual Benchmark for LLM Token Understanding (Edman et al., Findings 2025)
PDF: https://preview.aclanthology.org/landing_page/2025.findings-acl.95.pdf