Issei Sato


2025

Theoretical Analysis of Hierarchical Language Recognition and Generation by Transformers without Positional Encoding
Daichi Hayakawa | Issei Sato
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In this study, we provide constructive proof that Transformers can recognize and generate hierarchical language efficiently with respect to model size, even without a specific positional encoding. Specifically, we show that causal masking and a starting token enable Transformers to compute positional information and depth within hierarchical structures. We demonstrate that Transformers without positional encoding can generate hierarchical languages. Furthermore, we suggest that explicit positional encoding might have a detrimental effect on generalization with respect to sequence length.
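As a minimal sketch of the intuition behind the abstract (not the paper's actual construction), the snippet below illustrates one known way causal masking plus a distinguished starting token can expose positional information: a head that attends uniformly over the causal prefix averages the starting token's value with weight 1/t at step t, so absolute position is recoverable with no positional encoding. The function name and setup here are illustrative assumptions.

```python
# Illustrative sketch: uniform attention under a causal mask recovers position
# from a distinguished starting (BOS) token alone. Hypothetical helper, not the
# paper's construction.
import torch

def uniform_causal_attention(values: torch.Tensor) -> torch.Tensor:
    """Average each causal prefix v[0..t], i.e. uniform attention weights 1/(t+1)."""
    t = values.shape[0]
    prefix_sums = torch.cumsum(values, dim=0)
    counts = torch.arange(1, t + 1, dtype=values.dtype).unsqueeze(-1)
    return prefix_sums / counts

# A one-hot flag marks the starting token; the sequence itself carries no
# positional encoding of any kind.
seq_len = 6
bos_flag = torch.zeros(seq_len, 1)
bos_flag[0, 0] = 1.0

out = uniform_causal_attention(bos_flag)
print(out.squeeze(-1))
# tensor([1.0000, 0.5000, 0.3333, 0.2500, 0.2000, 0.1667])
# The output at step t equals 1/t, so position is implicitly encoded.
```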

2014

Formalizing Word Sampling for Vocabulary Prediction as Graph-based Active Learning
Yo Ehara | Yusuke Miyao | Hidekazu Oiwa | Issei Sato | Hiroshi Nakagawa
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

Understanding seed selection in bootstrapping
Yo Ehara | Issei Sato | Hidekazu Oiwa | Hiroshi Nakagawa
Proceedings of TextGraphs-8 Graph-based Methods for Natural Language Processing

2012

Mining Words in the Minds of Second Language Learners: Learner-Specific Word Difficulty
Yo Ehara | Issei Sato | Hidekazu Oiwa | Hiroshi Nakagawa
Proceedings of COLING 2012

Reducing Wrong Labels in Distant Supervision for Relation Extraction
Shingo Takamatsu | Issei Sato | Hiroshi Nakagawa
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2007

Bayesian Document Generative Model with Explicit Multiple Topics
Issei Sato | Hiroshi Nakagawa
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)