Eisaku Maeda


2025

AnaToM: A Dataset Generation Framework for Evaluating Theory of Mind Reasoning Toward the Anatomy of Difficulty through Structurally Controlled Story Generation
Jundai Suzuki | Ryoma Ishigaki | Eisaku Maeda
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics

Evaluating Theory of Mind (ToM) in Large Language Models (LLMs) is an important area of research for understanding the social intelligence of AI. Recent ToM benchmarks have made significant strides in enhancing the complexity, comprehensiveness, and practicality of evaluation. However, while the focus has been on constructing “more difficult” or “more comprehensive” tasks, there has been insufficient systematic analysis of the structural factors that inherently determine the difficulty of ToM reasoning—that is, “what” makes reasoning difficult. To address this challenge, we propose a new dataset generation framework for ToM evaluation named AnaToM. To realize an “Anatomy of Difficulty” in ToM reasoning, AnaToM strictly controls structural parameters such as the number of entities and the timeline in a story. This parameter control enables the isolation and identification of factors affecting the ToM of LLMs, allowing for a more precise examination of their reasoning mechanisms. The proposed framework provides a systematic methodology for diagnosing the limits of LLM reasoning abilities and offers new guidelines for future benchmark design.
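The parameter control described above can be pictured with a minimal sketch. Everything here is an illustrative assumption, not the paper's actual framework: the config fields (`num_entities`, `num_events`), the event template, and the function names are hypothetical, showing only how fixing structural parameters lets one factor vary at a time.

```python
# Hypothetical sketch of structurally controlled story generation.
# Field names and templates are assumptions for illustration only.
from dataclasses import dataclass
import random


@dataclass
class StoryConfig:
    num_entities: int  # how many agents appear in the story
    num_events: int    # length of the timeline
    seed: int = 0      # fixed seed so runs differ only in the controlled parameters


def generate_story(cfg: StoryConfig) -> list:
    """Produce a schematic event timeline; each event moves an object,
    the kind of action that can later induce a false belief."""
    rng = random.Random(cfg.seed)
    agents = ["Agent%d" % i for i in range(cfg.num_entities)]
    locations = ["box", "basket", "drawer"]
    events = []
    for t in range(cfg.num_events):
        actor = rng.choice(agents)
        place = rng.choice(locations)
        events.append("t=%d: %s moves the ball to the %s" % (t, actor, place))
    return events


# Varying one parameter while holding the rest fixed isolates its effect:
short_story = generate_story(StoryConfig(num_entities=2, num_events=3))
long_story = generate_story(StoryConfig(num_entities=2, num_events=6))
```

Comparing model accuracy on `short_story` versus `long_story` would then attribute any difficulty change to timeline length alone, since the entity count and seed are held constant.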

2024

Knowledge Editing of Large Language Models Unconstrained by Word Order
Ryoma Ishigaki | Jundai Suzuki | Masaki Shuzo | Eisaku Maeda
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)

Large Language Models (LLMs) are considered to have potentially extensive knowledge, but because their internal processing is black-boxed, it has been difficult to directly edit the knowledge held by the LLMs themselves. To address this issue, a method called local modification-based knowledge editing has been developed. This method identifies the knowledge neurons that encode the target knowledge and adjusts the parameters associated with these neurons to update the knowledge. Knowledge neurons are identified by masking the o part from sentences representing relational triplets (s, r, o), having the LLM predict the masked part, and observing the LLM's activation during the prediction. When the architecture is decoder-based, the predicted o needs to be located at the end of the sentence. Previous local modification-based knowledge editing methods for decoder-based models have assumed SVO languages and faced challenges when applied to SOV languages such as Japanese. In this study, we propose a knowledge editing method that eliminates the need for word order constraints by converting the input for identifying knowledge neurons into a question where o is the answer. We conducted validation experiments on 500 examples and confirmed that the proposed method is effective for Japanese, a non-SVO language. We also applied this method to English, an SVO language, and demonstrated that it outperforms conventional methods.
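The input-conversion idea in the abstract can be sketched in a few lines. This is an assumed illustration, not the paper's code: the template strings and function names are hypothetical, showing only the contrast between a cloze-style input (which forces the object o to sentence-final position, as SVO word order allows) and a question-style input (whose answer is o, so the model predicts o last regardless of the language's word order).

```python
# Hypothetical sketch of the two input styles for identifying knowledge
# neurons from a relational triplet (s, r, o). Templates are illustrative.

def cloze_prompt(s: str, r: str) -> str:
    """Conventional cloze input: the object o must come at the end of
    the sentence, which only works naturally in SVO languages."""
    return f"The {r} of {s} is"


def question_prompt(s: str, r: str) -> str:
    """Question-style input whose answer is o: the model generates o
    after the question, independent of the language's word order."""
    return f"What is the {r} of {s}?"


# Example triplet (s, r, o) = ("Tokyo", "country", "Japan"):
# both prompts elicit "Japan" as the next tokens, but only the second
# form carries over directly to SOV languages such as Japanese.
print(cloze_prompt("Tokyo", "country"))     # "The country of Tokyo is"
print(question_prompt("Tokyo", "country"))  # "What is the country of Tokyo?"
```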

Effective Prompt-tuning for Correcting Hallucinations in LLM-generated Japanese Sentences
Haruki Hatakeyama | Masaki Shuzo | Eisaku Maeda
Proceedings of the 38th Pacific Asia Conference on Language, Information and Computation

2010

User-adaptive Coordination of Agent Communicative Behavior in Spoken Dialogue
Kohji Dohsaka | Atsushi Kanemoto | Ryuichiro Higashinaka | Yasuhiro Minami | Eisaku Maeda
Proceedings of the SIGDIAL 2010 Conference

2009

Effects of Conversational Agents on Human Communication in Thought-Evoking Multi-Party Dialogues
Kohji Dohsaka | Ryota Asai | Ryuichiro Higashinaka | Yasuhiro Minami | Eisaku Maeda
Proceedings of the SIGDIAL 2009 Conference

2004

Dependency-based Sentence Alignment for Multiple Document Summarization
Tsutomu Hirao | Jun Suzuki | Hideki Isozaki | Eisaku Maeda
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

Convolution Kernels with Feature Selection for Natural Language Processing Tasks
Jun Suzuki | Hideki Isozaki | Eisaku Maeda
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)

2003

Hierarchical Directed Acyclic Graph Kernel: Methods for Structured Natural Language Data
Jun Suzuki | Tsutomu Hirao | Yutaka Sasaki | Eisaku Maeda
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

Spoken Interactive ODQA System: SPIQA
Chiori Hori | Takaaki Hori | Hajime Tsukada | Hideki Isozaki | Yutaka Sasaki | Eisaku Maeda
The Companion Volume to the Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

Question Classification using HDAG Kernel
Jun Suzuki | Hirotoshi Taira | Yutaka Sasaki | Eisaku Maeda
Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering

2002

Answering it with Charts: Dialogue in Natural Language and Charts
Tsuneaki Kato | Mitsunori Matsushita | Eisaku Maeda
COLING 2002: The 19th International Conference on Computational Linguistics

Extracting Important Sentences with Support Vector Machines
Tsutomu Hirao | Hideki Isozaki | Eisaku Maeda | Yuji Matsumoto
COLING 2002: The 19th International Conference on Computational Linguistics

SVM Answer Selection for Open-Domain Question Answering
Jun Suzuki | Yutaka Sasaki | Eisaku Maeda
COLING 2002: The 19th International Conference on Computational Linguistics