Daichi Hayakawa


2025

Theoretical Analysis of Hierarchical Language Recognition and Generation by Transformers without Positional Encoding
Daichi Hayakawa | Issei Sato
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In this study, we provide constructive proof that Transformers can recognize and generate hierarchical language efficiently with respect to model size, even without the need for a specific positional encoding. Specifically, we show that causal masking and a starting token enable Transformers to compute positional information and depth within hierarchical structures. We demonstrate that Transformers without positional encoding can generate hierarchical languages. Furthermore, we suggest that explicit positional encoding might have a detrimental effect on generalization with respect to sequence length.
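A minimal sketch of the intuition behind the abstract's claim (an illustration under assumed details, not the paper's actual construction): with a causal mask and a starting (BOS) token, even uniform attention already leaks absolute position, because the weight a query at position t places on the BOS token is exactly 1/t. The PyTorch usage below is only for demonstration.

```python
import torch

# Hedged sketch: causal masking + a BOS token make position recoverable
# without any positional encoding. If each query attends uniformly over
# its visible prefix, the attention weight on the BOS token (column 0)
# at 1-indexed position t is exactly 1/t.

seq_len = 8
# Causal mask: position t sees tokens 1..t.
mask = torch.tril(torch.ones(seq_len, seq_len))
# Uniform attention over the visible prefix (rows sum to 1).
attn = mask / mask.sum(dim=-1, keepdim=True)

# Weight each position assigns to BOS: 1, 1/2, 1/3, ...
bos_weight = attn[:, 0]
positions = 1.0 / bos_weight  # recovers absolute positions 1..seq_len
print(bos_weight)
print(positions)
```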

2003

Acquiring Vocabulary for Predictive Text Entry through Dynamic Reuse of a Small User Corpus
Kumiko Tanaka-Ishii | Daichi Hayakawa | Masato Takeichi
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics