Binrong Liu


2025

DS-MHP: Improving Chain-of-Thought through Dynamic Subgraph-Guided Multi-Hop Path
Yongqiang Liu | Qiyao Peng | Binrong Liu | Hongtao Liu | XueWei Li | Wenjun Wang
Findings of the Association for Computational Linguistics: EMNLP 2025

Large language models (LLMs) excel in natural language tasks, with Chain-of-Thought (CoT) prompting enhancing reasoning through step-by-step decomposition. However, CoT struggles in knowledge-intensive tasks with multiple entities and implicit multi-hop relations, failing to connect entities systematically in zero-shot settings. Existing knowledge graph methods, limited by static structures, lack adaptability in complex scenarios. We propose DS-MHP, a zero-shot framework to enhance LLM reasoning in multi-entity relation tasks. DS-MHP operates in three stages: 1) constructing query-specific subgraphs by extracting entities and relations; 2) generating and refining multi-hop paths using a hybrid strategy of Breadth-First Search, greedy expansion, and LLM supplementation; and 3) guiding LLMs with subgraphs and paths, aggregating answers via majority voting. Evaluated on 12 datasets spanning commonsense, logical, symbolic, and arithmetic reasoning, DS-MHP outperforms baselines and state-of-the-art methods in nearly all benchmarks. It achieves overall average accuracy increases of 3.9% on Mistral-7B and 3.6% on GPT-3.5 Turbo compared to SOTA, with significant gains in logical and symbolic reasoning. Additionally, DS-MHP reduces runtime and LLM calls compared to SOTA, enhancing computational efficiency. These improvements demonstrate DS-MHP’s superior reasoning accuracy, explainability, and efficiency in complex multi-entity tasks.
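The abstract describes a three-stage pipeline: build a query-specific subgraph, generate multi-hop paths over it, then prompt the LLM with the subgraph and paths and aggregate sampled answers by majority vote. The paper's actual prompts and data structures are not given here, so the following is only a minimal Python sketch of how such a pipeline could be wired together. It simplifies stage 2 to plain BFS (the paper additionally uses greedy expansion and LLM supplementation), and every name (`extract`, `llm`, `generate_paths`, `ds_mhp_answer`) is a hypothetical stand-in, not the authors' API.

```python
from collections import Counter, deque

def generate_paths(subgraph, start, max_hops=3):
    """Stage 2 (simplified): BFS over the query-specific subgraph.
    The paper also refines paths via greedy expansion and LLM
    supplementation, which this sketch omits."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if len(path) - 1 == max_hops:
            paths.append(path)
            continue
        neighbors = subgraph.get(path[-1], [])
        if not neighbors:           # dead end: keep the partial path
            paths.append(path)
            continue
        for nxt in neighbors:
            if nxt not in path:     # avoid cycles
                queue.append(path + [nxt])
    return paths

def ds_mhp_answer(query, llm, extract, n_samples=5):
    """Stage 1: `extract` builds a query-specific subgraph
    ({entity: [neighbor, ...]}) plus seed entities.
    Stage 3: prompt the LLM with subgraph paths as context, sample
    several answers, and return the majority vote."""
    subgraph, seeds = extract(query)
    paths = [p for s in seeds for p in generate_paths(subgraph, s)]
    context = "\n".join(" -> ".join(p) for p in paths)
    answers = [llm(f"{context}\n\nQ: {query}\nA:") for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```

Majority voting here presumes the `llm` callable samples with nonzero temperature, so repeated calls can disagree; with a deterministic model the vote degenerates to a single answer.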