Huiming Fan
2025
How do Language Models Reshape Entity Alignment? A Survey of LM-Driven EA Methods: Advances, Benchmarks, and Future
Zerui Chen | Huiming Fan | Qianyu Wang | Tao He | Ming Liu | Heng Chang | Weijiang Yu | Ze Li | Bing Qin
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Entity alignment (EA), critical for knowledge graph (KG) integration, identifies equivalent entities across different KGs. Traditional methods often struggle with semantic understanding and scalability. The rise of language models (LMs), particularly large language models (LLMs), has provided powerful new strategies. This paper systematically reviews LM-driven EA methods, proposing a novel taxonomy that categorizes methods into three key stages: data preparation, feature embedding, and alignment. We further summarize key benchmarks and evaluation metrics, and discuss future directions. This paper aims to provide researchers and practitioners with a clear and comprehensive understanding of how language models reshape the field of entity alignment.
Self-Critique Guided Iterative Reasoning for Multi-hop Question Answering
Zheng Chu | Huiming Fan | Jingchang Chen | Qianyu Wang | Mingda Yang | Jiafeng Liang | Zhongjie Wang | Hao Li | Guo Tang | Ming Liu | Bing Qin
Findings of the Association for Computational Linguistics: ACL 2025
Although large language models (LLMs) have demonstrated remarkable reasoning capabilities, they still face challenges in knowledge-intensive multi-hop reasoning. Recent work explores iterative retrieval to address complex problems. However, the absence of intermediate guidance often leads to inaccurate retrieval and errors in intermediate reasoning steps, resulting in incorrect final answers. To address these issues, we propose Self-Critique Guided Iterative Reasoning (SiGIR), which uses self-critique feedback to guide the iterative reasoning process. Specifically, through end-to-end training, we enable the model to iteratively address complex problems via question decomposition, while also being able to self-evaluate its intermediate reasoning steps. During iterative reasoning, the model engages in branching exploration and employs self-evaluation to guide the selection of promising reasoning trajectories. Extensive experiments on three multi-hop reasoning datasets demonstrate the effectiveness of the proposed method, which surpasses the previous SOTA by 8.6%. Furthermore, our thorough analysis offers insights for future research. Our code, data, and models are available at https://github.com/zchuz/SiGIR-MHQA.