How do LLMs’ Preferences Affect Event Argument Extraction? CAT: Addressing Preference Traps in Unsupervised EAE

Yunhao Wei, Kai Shuang, Zhiyi Li, Chenrui Mao


Abstract
Large Language Models (LLMs) have significantly improved the performance of unsupervised Event Argument Extraction (EAE) tasks. However, LLMs’ inherent preferences severely hinder their effectiveness in EAE, leading to what we term preference traps: the Prior Knowledge Trap, the Sycophancy Hallucination Trap, and the Output Contradiction Trap. Existing approaches often fall into these traps because their prior knowledge, instructions, or output constraints are misaligned with LLMs’ preferences, which significantly limits further performance gains. To address this issue, we propose Choose-After-Think (CAT), an unsupervised EAE framework designed to handle these preference traps through targeted measures. CAT innovatively divides the EAE task into two phases: identifying event information (argument roles) in a Think Phase, then selecting the final answers from a candidate set in a Choose Phase. This two-phase approach reduces the impact of individual token probability anomalies and ensures the integrity of EAE results. Experimental results demonstrate that CAT (based on a local 7B model in a zero-shot setting) matches the performance of the best DeepSeek-R1 API model, with a significantly lower time cost.
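The two-phase decomposition described in the abstract can be illustrated with a minimal sketch. Everything below — the prompts, the `call_llm` stub (mocked so the example runs without a model), and the function names — is an illustrative assumption, not the authors' implementation:

```python
# Hypothetical sketch of a two-phase (Think / Choose) EAE pipeline.
# call_llm is a stand-in for any chat-completion call; mocked here.

def call_llm(prompt: str) -> str:
    # Mock response: a real system would query an LLM here.
    if "List the argument roles" in prompt:
        return "Attacker, Target, Instrument"
    return "Target: the embassy"

def think_phase(sentence: str, event_type: str) -> list[str]:
    """Think Phase: identify the argument roles for the event."""
    prompt = (f"Event type: {event_type}\nSentence: {sentence}\n"
              "List the argument roles for this event.")
    return [r.strip() for r in call_llm(prompt).split(",")]

def choose_phase(sentence: str, role: str, candidates: list[str]) -> str:
    """Choose Phase: select the final answer from a candidate set
    instead of free generation, limiting the effect of individual
    token-probability anomalies."""
    prompt = (f"Sentence: {sentence}\nRole: {role}\n"
              f"Candidates: {candidates}\nChoose the best candidate.")
    answer = call_llm(prompt)
    # Accept only answers that appear in the candidate set, which
    # preserves the integrity of the extraction result.
    for cand in candidates:
        if cand in answer:
            return cand
    return ""

sentence = "Rebels attacked the embassy with mortars."
roles = think_phase(sentence, "Conflict.Attack")
final = choose_phase(sentence, "Target",
                     ["the embassy", "mortars", "Rebels"])
```

With the mock in place, `roles` becomes the role list from the Think Phase and `final` is constrained to one of the supplied candidates, which is the point of choosing after thinking.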
Anthology ID:
2025.findings-acl.1000
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
19529–19543
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.1000/
DOI:
10.18653/v1/2025.findings-acl.1000
Cite (ACL):
Yunhao Wei, Kai Shuang, Zhiyi Li, and Chenrui Mao. 2025. How do LLMs’ Preferences Affect Event Argument Extraction? CAT: Addressing Preference Traps in Unsupervised EAE. In Findings of the Association for Computational Linguistics: ACL 2025, pages 19529–19543, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
How do LLMs’ Preferences Affect Event Argument Extraction? CAT: Addressing Preference Traps in Unsupervised EAE (Wei et al., Findings 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.1000.pdf