Chenrui Mao
2025
How do LLMs’ Preferences Affect Event Argument Extraction? CAT: Addressing Preference Traps in Unsupervised EAE
Yunhao Wei | Kai Shuang | Zhiyi Li | Chenrui Mao
Findings of the Association for Computational Linguistics: ACL 2025
Large Language Models (LLMs) have significantly improved the performance of unsupervised Event Argument Extraction (EAE) tasks. However, LLMs’ inherent preferences severely hinder their effectiveness in EAE, leading to what we term preference traps, namely the Prior Knowledge Trap, the Sycophancy Hallucination Trap, and the Output Contradiction Trap. Existing approaches often fall into these traps because their prior knowledge, instructions, or output constraints are misaligned with LLMs’ preferences, which significantly limits further performance gains. To address this issue, we propose Choose-After-Think (CAT), an unsupervised EAE framework designed to handle these preference traps through targeted measures. CAT divides the EAE task into two phases: identifying event information (argument roles) in a Think Phase, and selecting the final answers from a candidate set in a Choose Phase. This two-phase approach reduces the impact of individual token probability anomalies and ensures the integrity of EAE results. Experimental results demonstrate that CAT, based on a local 7B model in a zero-shot setting, matches the performance of the best DeepSeek-R1 API model at a significantly lower time cost.
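As a rough illustration of the two-phase idea described in the abstract, the sketch below separates candidate enumeration (Think) from constrained selection (Choose). The `llm` callable, prompt wording, and answer parsing are hypothetical placeholders, not the paper's actual implementation.

```python
# Hypothetical sketch of a two-phase Think/Choose pipeline (illustrative only).
from typing import Callable, List


def think_phase(llm: Callable[[str], str], document: str, event_type: str,
                roles: List[str]) -> List[str]:
    """Think Phase: ask the model to enumerate candidate argument spans
    without committing to a final answer."""
    prompt = (
        f"Document: {document}\n"
        f"Event type: {event_type}\n"
        f"For each role in {roles}, list every text span that could plausibly fill it."
    )
    raw = llm(prompt)
    # One candidate per line; real parsing would be more robust.
    return [line.strip() for line in raw.splitlines() if line.strip()]


def choose_phase(llm: Callable[[str], str], document: str, role: str,
                 candidates: List[str]) -> str:
    """Choose Phase: restrict the model to selecting one answer from the
    candidate set, avoiding free-form generation and its token-probability anomalies."""
    options = "\n".join(f"({i}) {c}" for i, c in enumerate(candidates))
    prompt = (
        f"Document: {document}\n"
        f"Role: {role}\n"
        f"Candidates:\n{options}\n"
        "Answer with the index of the single best candidate, or -1 if none applies."
    )
    reply = llm(prompt).strip()
    idx = int(reply) if reply.lstrip("-").isdigit() else -1
    return candidates[idx] if 0 <= idx < len(candidates) else ""
```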
2024
Scented-EAE: Stage-Customized Entity Type Embedding for Event Argument Extraction
Yu Yang | Jinyu Guo | Kai Shuang | Chenrui Mao
Findings of the Association for Computational Linguistics: ACL 2024
Existing methods for incorporating entities into EAE rely on prompts or NER. They typically fail to explicitly explore the role of entity types, which results in shallow argument comprehension, and they often encounter three issues: (1) weak semantic associations due to missing role-entity correspondence cues; (2) compromised semantic integrity from abandoning context after recognizing entities, regardless of their types; (3) one-sided semantic understanding that relies solely on argument role semantics. To tackle these issues, we propose Scented-EAE, an EAE model with stage-customized entity type embedding that explicitly underscores and explores the role of entity types, thus intervening in argument selection. Specifically, at the input stage, we strengthen semantic associations by prompting role-entity correspondence after extending a non-autoregressive decoder as part of the encoder. At the intermediate stage, we preserve semantic integrity by optimizing our proposed BIO-aware NER and EAE via a novel IPE joint learning. At the output stage, we expand the dimensions of semantic understanding by determining arguments with span selectors built from argument roles and entity types. Experiments show that our model achieves state-of-the-art performance on mainstream benchmarks and remains robust in low-resource settings with the help of prompts and entity types.
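To make the output-stage idea concrete, here is a minimal sketch of a span selector that scores candidate spans from both an argument-role view and an entity-type view, then fuses the two scores. The module names, dimensions, and additive fusion rule are assumptions for illustration, not the model's exact architecture.

```python
# Illustrative two-view span selector (assumed design, not the paper's exact one).
import torch
import torch.nn as nn


class TwoViewSpanSelector(nn.Module):
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.role_scorer = nn.Bilinear(hidden, hidden, 1)    # span vs. argument-role view
        self.entity_scorer = nn.Bilinear(hidden, hidden, 1)  # span vs. entity-type view

    def forward(self, span_repr: torch.Tensor, role_repr: torch.Tensor,
                entity_repr: torch.Tensor) -> torch.Tensor:
        """span_repr: [num_spans, hidden]; role_repr / entity_repr: [hidden].
        Returns one score per candidate span."""
        role_scores = self.role_scorer(span_repr, role_repr.expand_as(span_repr))
        entity_scores = self.entity_scorer(span_repr, entity_repr.expand_as(span_repr))
        return (role_scores + entity_scores).squeeze(-1)  # simple additive fusion


# Usage: pick the highest-scoring candidate span for a given role.
selector = TwoViewSpanSelector()
spans = torch.randn(5, 768)  # 5 candidate span representations
scores = selector(spans, torch.randn(768), torch.randn(768))
best_span_index = scores.argmax().item()
```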