Theoretical Analysis of Hierarchical Language Recognition and Generation by Transformers without Positional Encoding

Daichi Hayakawa, Issei Sato


Abstract
In this study, we provide constructive proof that Transformers can recognize and generate hierarchical language efficiently with respect to model size, even without the need for a specific positional encoding. Specifically, we show that causal masking and a starting token enable Transformers to compute positional information and depth within hierarchical structures. We demonstrate that Transformers without positional encoding can generate hierarchical languages. Furthermore, we suggest that explicit positional encoding might have a detrimental effect on generalization with respect to sequence length.
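The mechanism referred to in the abstract can be illustrated with a minimal numpy sketch (an assumption-laden illustration, not the paper's actual construction): if an attention head has constant scores, causal masking makes position i attend uniformly over tokens 0..i, so the attention mass placed on a starting token at index 0 is 1/(i+1), which implicitly encodes absolute position without any positional encoding.

```python
import numpy as np

def causal_uniform_attention(seq_len: int) -> np.ndarray:
    """Attention weights of a head whose scores are constant (uniform attention)
    under a causal mask: position i attends equally to tokens 0..i."""
    weights = np.zeros((seq_len, seq_len))
    for i in range(seq_len):
        # softmax over equal scores on the visible prefix of length i + 1
        weights[i, : i + 1] = 1.0 / (i + 1)
    return weights

# Hypothetical setup: a starting token <s> sits at index 0 and carries a
# distinguished value vector. The head's output at position i then contains
# the coordinate 1/(i+1), a monotone function of absolute position.
W = causal_uniform_attention(6)
print(W[:, 0])  # [1.0, 0.5, 0.333..., 0.25, 0.2, 0.166...] -> position i recoverable as 1/(i+1)
```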
Anthology ID: 2025.acl-long.1488
Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 30777–30834
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1488/
Cite (ACL): Daichi Hayakawa and Issei Sato. 2025. Theoretical Analysis of Hierarchical Language Recognition and Generation by Transformers without Positional Encoding. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 30777–30834, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Theoretical Analysis of Hierarchical Language Recognition and Generation by Transformers without Positional Encoding (Hayakawa & Sato, ACL 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1488.pdf