Shichao Wang
2025
Resource-Friendly Dynamic Enhancement Chain for Multi-Hop Question Answering
Binquan Ji
|
Haibo Luo
|
Yifei Lu
|
Lei Hei
|
Jiaqi Wang
|
Tingjing Liao
|
Wang Lingyu
|
Shichao Wang
|
Feiliang Ren
Findings of the Association for Computational Linguistics: ACL 2025
Knowledge-intensive multi-hop question answering (QA) tasks, which require integrating evidence from multiple sources to address complex queries, often necessitate multiple rounds of retrieval and iterative generation by large language models (LLMs). However, incorporating many documents and extended contexts poses challenges—such as hallucinations and semantic drift—for lightweight LLMs with fewer parameters. This work proposes a novel framework called DEC (Dynamic Enhancement Chain). DEC first decomposes complex questions into logically coherent subquestions to form a hallucination-free reasoning chain. It then iteratively refines these subquestions through context-aware rewriting to generate effective query formulations. For retrieval, we introduce a lightweight discriminative keyword extraction module that leverages extracted keywords to achieve targeted, precise document recall with relatively low computational overhead. Extensive experiments on three multi-hop QA datasets demonstrate that DEC performs on par with or surpasses state-of-the-art benchmarks while significantly reducing token consumption. Notably, our approach attains state-of-the-art results on models with 8B parameters, showcasing its effectiveness in various scenarios, particularly in resource-constrained environments.
2021
Incorporating Circumstances into Narrative Event Prediction
Shichao Wang
|
Xiangrui Cai
|
HongBin Wang
|
Xiaojie Yuan
Findings of the Association for Computational Linguistics: EMNLP 2021
Narrative event prediction aims to predict what happens after a sequence of events, which is essential to modeling sophisticated real-world events. Existing studies focus on mining inter-event relationships while ignoring how the events happened, which we call circumstances. Based on our observation, event circumstances indicate what will happen next. To incorporate event circumstances into narrative event prediction, we propose CircEvent, which adopts two multi-head attention modules to retrieve circumstances at the local and global levels. We also introduce a regularization of attention weights to leverage the alignment between events and local circumstances. The experimental results demonstrate that CircEvent outperforms existing baselines by 12.2%. Further analysis demonstrates the effectiveness of our multi-head attention modules and regularization.
Co-authors
- Xiangrui Cai 1
- Lei Hei 1
- Binquan Ji 1
- Tingjing Liao 1
- Wang Lingyu 1