Yao Liu
2025
GEMS: Generation-Based Event Argument Extraction via Multi-perspective Prompts and Ontology Steering
Run Lin | Yao Liu | Yanglei Gan | Yuxiang Cai | Tian Lan | Qiao Liu
Findings of the Association for Computational Linguistics: ACL 2025
Generative methods significantly advance event argument extraction by probabilistically generating event argument sequences in a structured format. However, existing approaches primarily rely on a single prompt to generate event arguments in a fixed, predetermined order. Such a rigid approach overlooks the complex structural and dynamic interdependencies among event arguments. In this work, we present GEMS, a multi-prompt learning framework that Generates Event arguments via Multi-perspective prompts and ontology Steering. Specifically, GEMS utilizes multiple unfilled prompts for each sentence, predicting event arguments in varying sequences to explicitly capture the interrelationships between arguments. These predictions are subsequently aggregated using a voting mechanism. Furthermore, an ontology-driven steering mechanism is proposed to ensure that the generated arguments are contextually appropriate and consistent with event-specific knowledge. Extensive experiments on two benchmark datasets demonstrate that GEMS achieves state-of-the-art performance, particularly in low-resource settings. The source code is available at: https://github.com/AONE-NLP/EAE-GEMS
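The voting step described above can be sketched as a simple majority vote over the role-to-argument predictions produced by each prompt ordering. This is only an illustrative sketch, not the paper's implementation: the `aggregate_votes` function, the dict-of-spans prediction format, and the majority threshold are all assumptions.

```python
from collections import Counter

def aggregate_votes(predictions):
    """Majority-vote aggregation over per-prompt argument predictions.

    predictions: list of dicts, one per prompt ordering, each mapping an
    argument role to its predicted text span (hypothetical format).
    """
    merged = {}
    roles = {role for pred in predictions for role in pred}
    for role in roles:
        votes = Counter(pred[role] for pred in predictions if role in pred)
        span, count = votes.most_common(1)[0]
        # Keep a span only if a strict majority of prompts agree on it.
        if count > len(predictions) / 2:
            merged[role] = span
    return merged

# Three prompt orderings voting on the same event mention:
preds = [
    {"Attacker": "rebels", "Target": "convoy"},
    {"Attacker": "rebels", "Target": "the convoy"},
    {"Attacker": "rebels", "Target": "convoy"},
]
merged = aggregate_votes(preds)
```

Here "rebels" and "convoy" each win a strict majority, so both survive aggregation, while a role on which the prompts disagreed evenly would be dropped.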
SCE: Semantic Consistency Enhanced Reinforcement Learning for Multi-Hop Knowledge Graph Reasoning
Yanwen Huang | Yao Liu | Qiao Liu | Rui Hou | Tingting Dai
Findings of the Association for Computational Linguistics: EMNLP 2025
Multi-hop reasoning with reinforcement learning has proven effective in discovering inference paths in incomplete knowledge graphs. However, a major challenge remains: spurious paths (incorrect reasoning paths that accidentally lead to correct answers) often arise due to reward mechanisms that prioritize final results over reasoning quality. While existing approaches attempt to mitigate this issue using external rules, they often neglect the internal semantic consistency between the target triple and the intermediate triples along the reasoning path. In this paper, we propose a novel framework, Semantic Consistency Enhanced Reinforcement Learning (SCE), which incorporates semantic consistency into the reward function to guide multi-hop reasoning. Experimental results demonstrate that SCE outperforms strong baseline methods and facilitates the discovery of more interpretable reasoning paths.
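The reward shaping idea above can be sketched as a terminal correctness signal plus a consistency bonus measured between the target triple and each intermediate triple on the path. This is a minimal sketch under assumptions: the `shaped_reward` function, the use of cosine similarity over triple embeddings, and the weight `lam` are all hypothetical stand-ins for whatever encoder and reward formulation the paper actually uses.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def shaped_reward(target_emb, path_embs, hit, lam=0.5):
    """Terminal reward plus mean semantic consistency of the path.

    target_emb: embedding of the query triple (hypothetical encoder output)
    path_embs:  embeddings of each intermediate triple along the path
    hit:        1.0 if the path reached the correct answer, else 0.0
    lam:        weight on the consistency term (assumed hyperparameter)
    """
    if not path_embs:
        return hit
    consistency = sum(cosine(target_emb, e) for e in path_embs) / len(path_embs)
    return hit + lam * consistency
```

Under this shaping, a spurious path that reaches the right answer through semantically unrelated intermediate triples earns a lower reward than a path whose every hop stays close to the target triple's semantics.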
Co-authors
- Qiao Liu 2
- Yuxiang Cai 1
- Tingting Dai 1
- Yanglei Gan 1
- Rui Hou 1