Heyuan Huang

May refer to several people

Other people with similar names: Heyuan Huang (JHU)


2025

RAG+: Enhancing Retrieval-Augmented Generation with Application-Aware Reasoning
Yu Wang | Shiwan Zhao | Zhihu Wang | Ming Fan | Xicheng Zhang | Yubo Zhang | Zhengfan Wang | Heyuan Huang | Ting Liu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

The integration of external knowledge through Retrieval-Augmented Generation (RAG) has become foundational in enhancing large language models (LLMs) for knowledge-intensive tasks. However, existing RAG paradigms often overlook the cognitive step of applying knowledge, leaving a gap between retrieved facts and task-specific reasoning. In this work, we introduce RAG+, a principled and modular extension that explicitly incorporates application-aware reasoning into the RAG pipeline. RAG+ constructs a dual corpus consisting of knowledge and aligned application examples, created either manually or automatically, and jointly retrieves both during inference. This design enables LLMs not only to access relevant information but also to apply it within structured, goal-oriented reasoning processes. Experiments across mathematical, legal, and medical domains, conducted on multiple models, demonstrate that RAG+ consistently outperforms standard RAG variants, achieving average improvements of 3–5% and peak gains of up to 13.5% in complex scenarios. By bridging retrieval with actionable application, RAG+ advances a more cognitively grounded framework for knowledge integration, representing a step toward more interpretable and capable LLMs.
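The abstract's core mechanism, retrieving each knowledge chunk together with an aligned application example, can be illustrated with a minimal sketch. Everything below (the KnowledgeItem fields, the token-overlap scorer, the prompt layout) is a hypothetical reconstruction from the abstract, not RAG+'s actual code or API.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeItem:
    fact: str         # a chunk from the knowledge corpus
    application: str  # the aligned application example (manual or auto-generated)

def overlap(query: str, text: str) -> float:
    """Crude token-overlap relevance; stands in for a real dense retriever."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(t) or 1)

def rag_plus_prompt(query: str, corpus: list[KnowledgeItem], top_k: int = 3) -> str:
    """Jointly retrieve knowledge and its aligned application examples,
    then assemble a prompt that asks the model to apply the facts."""
    hits = sorted(corpus, key=lambda item: overlap(query, item.fact), reverse=True)[:top_k]
    context = "\n\n".join(
        f"Fact: {h.fact}\nHow to apply it: {h.application}" for h in hits
    )
    return f"{context}\n\nQuestion: {query}\nAnswer by applying the facts as shown above:"
```

Because each application example is stored alongside its fact, a single retrieval pass returns both, which is what lets the design sit modularly on top of a standard RAG pipeline.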

Re-TASK: Revisiting LLM Tasks from Capability, Skill, and Knowledge Perspectives
Zhihu Wang | Shiwan Zhao | Yu Wang | Heyuan Huang | Sitao Xie | Yubo Zhang | Jiaxin Shi | Zhixing Wang | Hongyan Li | Junchi Yan
Findings of the Association for Computational Linguistics: ACL 2025

The Chain-of-Thought (CoT) paradigm has become a pivotal method for solving complex problems with large language models (LLMs). However, its application to domain-specific tasks remains challenging, as LLMs often fail to decompose tasks accurately or execute subtasks effectively. This paper introduces the Re-TASK framework, a novel theoretical model that Revisits LLM Tasks from cApability, Skill, and Knowledge perspectives, drawing on the principles of Bloom’s Taxonomy and Knowledge Space Theory. While CoT provides a workflow-centric perspective on tasks, Re-TASK introduces a Chain-of-Learning (CoL) paradigm that highlights task dependencies on specific capability items, further broken down into their constituent knowledge and skill components. To address CoT failures, we propose a Re-TASK prompting strategy, which strengthens task-relevant capabilities through targeted knowledge injection and skill adaptation. Experiments across diverse domains demonstrate the effectiveness of Re-TASK. In particular, we achieve improvements of 45.00% on Yi-1.5-9B and 24.50% on Llama3-Chinese-8B for legal tasks. These results highlight the potential of Re-TASK to significantly enhance LLM performance and its applicability in specialized domains. We release our code and data at https://github.com/Uylee/Re-TASK.
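As a rough illustration of the prompting strategy the abstract describes, a Re-TASK-style prompt would pair injected knowledge with a skill demonstration before the task itself. The function and field names below are illustrative assumptions, not the paper's released code (see the repository linked above for the real implementation).

```python
def re_task_prompt(task: str, knowledge: list[str], skill_demo: str) -> str:
    """Sketch: strengthen a task-relevant capability by injecting its
    knowledge components and demonstrating the associated skill."""
    knowledge_block = "\n".join(f"- {k}" for k in knowledge)  # knowledge injection
    return (
        f"Relevant knowledge:\n{knowledge_block}\n\n"
        f"Worked example of the required skill:\n{skill_demo}\n\n"  # skill adaptation
        f"Task: {task}"
    )

print(re_task_prompt(
    task="Decide whether the clause limits liability.",
    knowledge=["A limitation-of-liability clause caps the damages a party may recover."],
    skill_demo="Clause: 'Liability shall not exceed fees paid.' -> limits liability.",
))
```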

OAgents: An Empirical Study of Building Effective Agents
He Zhu | Tianrui Qin | King Zhu | Heyuan Huang | Yeyi Guan | Jinxiang Xia | Hanhao Li | Yi Yao | Ningning Wang | Pai Liu | Tianhao Peng | Xin Gui | Li Xiaowan | Yuhui Liu | Xiangru Tang | Jian Yang | Ge Zhang | Xitong Gao | Yuchen Eleanor Jiang | Changwang Zhang | Jun Wang | Jiaheng Liu | Wangchunshu Zhou
Findings of the Association for Computational Linguistics: EMNLP 2025

Recently, Agentic AI has become an increasingly popular field of research. However, we argue that current practices in agent research fall far short of standard, rigorous scientific research, which makes it hard to conduct apples-to-apples comparisons among and against existing methods. As a result, it remains unclear how different design choices in an agent framework affect its effectiveness, and measuring progress in agent research remains very hard. In this work, we conduct a systematic empirical study on the GAIA benchmark to investigate the impact of popular design choices within key agent components in a fair and rigorous way. To begin with, we find that the lack of a standard evaluation protocol makes previous works, even open-sourced ones, irreproducible, and that the variance between different random runs is often non-negligible. We therefore first introduce a more robust evaluation protocol to make comparisons more stable. Our empirical study then reveals which components and designs, as well as which correlations between designs, are key to building effective agents, while others are redundant despite seemingly making sense. With the insights gained from our empirical study, we build and open-source OAgents, a new foundation agent framework that achieves state-of-the-art performance among open-source projects, providing a good starting point and guidelines for building effective agents. More importantly, OAgents supports various design choices for agent components in a modularized way, facilitating future scientific research on Agentic AI.
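The evaluation-protocol point is easy to make concrete: if variance between random runs is non-negligible, a single run is not a meaningful score. The sketch below, with a hypothetical run_agent_on_gaia stand-in, shows the minimal multi-seed reporting the abstract argues for; it is not the protocol OAgents actually ships.

```python
import statistics

def evaluate(run_agent_on_gaia, seeds=(0, 1, 2, 3, 4)) -> tuple[float, float]:
    """Run the full benchmark once per seed and report mean and spread,
    rather than quoting a single (possibly lucky) run."""
    scores = [run_agent_on_gaia(seed=s) for s in seeds]
    return statistics.mean(scores), statistics.stdev(scores)

# Dummy agent whose score drifts with the seed, to show the reporting format.
mean, spread = evaluate(lambda seed: 0.42 + 0.01 * (seed % 3))
print(f"GAIA accuracy: {mean:.3f} ± {spread:.3f}")
```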