WarriorCoder: Learning from Expert Battles to Augment Code Large Language Models
Huawen Feng | Pu Zhao | Qingfeng Sun | Can Xu | Fangkai Yang | Lu Wang | Qianli Ma | Qingwei Lin | Saravan Rajmohan | Dongmei Zhang | Qi Zhang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Despite recent progress achieved by code large language models (LLMs), their remarkable abilities are largely dependent on fine-tuning on high-quality data, posing challenges for data collection and annotation. To address this, current methods often design various data flywheels to collect complex code instructions, enabling models to handle more intricate tasks. However, these approaches typically rely on off-the-shelf datasets and data augmentation from a limited set of proprietary LLMs (e.g., Claude and GPT-4), which restricts the diversity of the constructed data and makes it prone to systemic biases. In this paper, we propose **WarriorCoder**, a novel paradigm that learns from expert battles to address these limitations. Specifically, we create an arena where leading expert code LLMs challenge each other, with evaluations conducted by impartial judges. This competitive framework generates novel training data from scratch, leveraging the strengths of all participants. Experimental results show that **WarriorCoder** achieves state-of-the-art performance compared to previous models of the same size, even without relying on proprietary LLMs.
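To make the arena concrete: the abstract describes expert code LLMs challenging one another under impartial judges, with winning responses becoming training data. The sketch below is an illustrative reconstruction of one such battle round, not the paper's actual pipeline; `query_model`, the model roles, and the prompt wording are hypothetical placeholders for real model calls.

```python
import itertools

# Hypothetical stand-in for calling an open expert code LLM; plug in a real client.
def query_model(model: str, prompt: str) -> str:
    raise NotImplementedError("connect to your own model endpoint here")

def battle_round(experts: list[str], judges: list[str], seed_topic: str) -> list[dict]:
    """One arena round: for each ordered pair of experts, the challenger poses a
    coding task, both experts answer it, and judge models vote on the stronger
    response. Winning responses become instruction-tuning pairs."""
    training_pairs = []
    for challenger, defender in itertools.permutations(experts, 2):
        instruction = query_model(challenger, f"Pose a hard coding task about: {seed_topic}")
        answers = {m: query_model(m, instruction) for m in (challenger, defender)}
        votes = {challenger: 0, defender: 0}
        for judge in judges:
            verdict = query_model(
                judge,
                f"Task: {instruction}\nA: {answers[challenger]}\nB: {answers[defender]}\n"
                "Reply with the letter of the better answer.",
            )
            winner = challenger if verdict.strip().upper().startswith("A") else defender
            votes[winner] += 1
        best = max(votes, key=votes.get)
        training_pairs.append({"instruction": instruction, "response": answers[best]})
    return training_pairs
```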
AXIS: Efficient Human-Agent-Computer Interaction with API-First LLM-Based Agents
Junting Lu | Zhiyang Zhang | Fangkai Yang | Jue Zhang | Lu Wang | Chao Du | Qingwei Lin | Saravan Rajmohan | Dongmei Zhang | Qi Zhang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Multimodal large language models (MLLMs) have enabled LLM-based agents to interact directly with application user interfaces (UIs), enhancing agents' performance in complex tasks. However, these agents often suffer from high latency and low reliability due to extensive sequential UI interactions. To address this issue, we propose AXIS, a novel LLM-based agent framework that prioritizes actions through application programming interfaces (APIs) over UI actions. The framework also facilitates the creation and expansion of APIs through automated exploration of applications. Our experiments on Microsoft Word demonstrate that AXIS reduces task completion time by 65%-70% and cognitive workload by 38%-53%, while maintaining an accuracy of 97%-98% compared to humans. Our work contributes a new human-agent-computer interaction (HACI) framework and explores a fresh UI design principle for application providers to turn applications into agents in the era of LLMs, paving the way towards an agent-centric operating system (Agent OS). The code and dataset will be available at https://aka.ms/haci_axis.
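The core idea, preferring a single API call over a long chain of UI actions, can be pictured as a simple dispatch rule. The sketch below is a minimal illustration under that assumption; `api_registry`, the toy substring matcher, and `ui_fallback` are hypothetical stand-ins (the paper's agent presumably uses an LLM to select actions, and additionally explores applications to grow the API set).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    kind: str                      # "api" or "ui"
    name: str
    run: Callable[[], None]

def choose_action(task_step: str,
                  api_registry: dict[str, Callable[[], None]],
                  ui_fallback: Callable[[str], None]) -> Action:
    """API-first dispatch: use a registered application API when one matches the
    step, and only fall back to slower, less reliable UI interaction otherwise."""
    for name, fn in api_registry.items():
        if name in task_step:      # toy matcher; a real agent would ask an LLM
            return Action("api", name, fn)
    return Action("ui", "ui_fallback", lambda: ui_fallback(task_step))
```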
Thread: A Logic-Based Data Organization Paradigm for How-To Question Answering with Retrieval Augmented Generation
Kaikai An | Fangkai Yang | Liqun Li | Junting Lu | Sitao Cheng | Shuzheng Si | Lu Wang | Pu Zhao | Lele Cao | Qingwei Lin | Saravan Rajmohan | Dongmei Zhang | Baobao Chang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Recent advances in retrieval-augmented generation (RAG) have substantially improved question-answering systems, particularly for factoid ‘5Ws’ questions. However, significant challenges remain when addressing ‘1H’ questions, i.e., how-to questions, which are integral to decision-making and require dynamic, step-by-step responses. The key limitation lies in the prevalent data organization paradigm, the chunk, which divides documents into fixed-size segments and disrupts the logical coherence and connections within the context. To address this, we propose THREAD, a novel data organization paradigm that enables systems to handle how-to questions more effectively. Specifically, we introduce a new knowledge granularity, the ‘logic unit’ (LU), where large language models transform documents into more structured and loosely interconnected LUs. Extensive experiments across both open-domain and industrial settings show that THREAD significantly outperforms existing paradigms, improving the success rate of handling how-to questions by 21% to 33%. Additionally, THREAD demonstrates high adaptability across diverse document formats, reducing retrieved information by up to 75% compared to chunk-based paradigms, and also generalizes better to ‘5Ws’ questions, such as multi-hop questions.
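To illustrate what a ‘logic unit’ might look like as a data structure: the sketch below models LUs as linked step records whose prerequisite and follow-up links preserve the procedural coherence that fixed-size chunking loses. The field names and the `expand` helper are hypothetical, not the paper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class LogicUnit:
    """One step of a how-to procedure: a self-contained body plus links to the
    units that logically precede and follow it."""
    uid: str
    header: str                                              # what this step accomplishes
    body: str                                                # the instructions themselves
    prerequisites: list[str] = field(default_factory=list)   # uids of prior steps
    next_steps: list[str] = field(default_factory=list)      # uids of follow-up steps

def expand(seed: LogicUnit, index: dict[str, LogicUnit]) -> list[LogicUnit]:
    """Follow links from a retrieved unit so the answer keeps its step-by-step
    coherence, instead of returning one isolated fixed-size chunk."""
    ordered = [index[u] for u in seed.prerequisites if u in index]
    ordered.append(seed)
    ordered += [index[u] for u in seed.next_steps if u in index]
    return ordered
```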
Token-level Proximal Policy Optimization for Query Generation
Yichen Ouyang | Lu Wang | Fangkai Yang | Pu Zhao | Chenghua Huang | Jianfeng Liu | Bochen Pang | Yaming Yang | Yuefeng Zhan | Hao Sun | Qingwei Lin | Saravan Rajmohan | Weiwei Deng | Dongmei Zhang | Feng Sun
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Query generation is a critical task for web search engines (e.g., Google, Bing) and recommendation systems. Recently, state-of-the-art query generation methods leverage Large Language Models (LLMs) for their strong capabilities in context understanding and text generation. However, they still face challenges in generating high-quality queries when inferring user intent from web search interaction history. In this paper, we propose Token-level Proximal Policy Optimization (TPPO), a novel approach designed to empower LLMs to perform better in query generation through fine-tuning. TPPO is based on the Reinforcement Learning from AI Feedback (RLAIF) paradigm, consisting of a token-level reward model and a token-level proximal policy optimization module that address the sparse reward challenge in traditional RLAIF frameworks. We conducted experiments on both an open-source dataset and an industrial dataset collected from a globally used search engine, demonstrating that TPPO significantly improves the performance of query generation for LLMs and outperforms existing competitors.
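The abstract's key contrast, per-token rewards versus one sparse sequence-level reward, can be shown in a generic clipped-PPO loss computed at token granularity. The sketch below is a standard token-level PPO formulation under assumed tensor shapes, not the paper's exact objective.

```python
import torch

def tppo_loss(logp_new, logp_old, token_rewards, values,
              clip_eps=0.2, gamma=1.0, lam=0.95):
    """Clipped PPO surrogate at token granularity for one generated query.

    All tensors have shape (T,) for a T-token generation. Unlike a single
    sequence-level reward, token_rewards gives every token its own signal,
    which is the densification idea the abstract describes.
    """
    # Generalized advantage estimation over per-token rewards,
    # bootstrapping a value of 0 past the final token.
    with torch.no_grad():  # advantages are constants in the policy loss
        T = token_rewards.shape[0]
        advantages = torch.zeros(T)
        running = 0.0
        for t in reversed(range(T)):
            next_value = values[t + 1] if t + 1 < T else 0.0
            delta = token_rewards[t] + gamma * next_value - values[t]
            running = delta + gamma * lam * running
            advantages[t] = running

    # Standard PPO clipped surrogate, averaged over tokens.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```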
AdaptFlow: Adaptive Workflow Optimization via Meta-Learning
Runchuan Zhu | Bowen Jiang | Lingrui Mei | Fangkai Yang | Lu Wang | Haoxiang Gao | Fengshuo Bai | Pu Zhao | Qingwei Lin | Saravan Rajmohan | Dongmei Zhang
Findings of the Association for Computational Linguistics: EMNLP 2025
Recent advances in large language models (LLMs) have sparked growing interest in agentic workflows—structured sequences of LLM invocations designed to solve complex tasks. However, existing approaches often rely on static templates or manually designed workflows, which limit adaptability to diverse tasks and hinder scalability. We propose AdaptFlow, a natural language-based meta-learning framework inspired by model-agnostic meta-learning (MAML). AdaptFlow uses a bi-level optimization process: the inner loop performs task-specific adaptation via LLM-generated feedback, while the outer loop consolidates these refinements into a shared, generalizable initialization. Evaluated across question answering, code generation, and mathematical reasoning benchmarks, AdaptFlow consistently outperforms both manually crafted and automatically searched baselines, achieving state-of-the-art results with strong generalization across tasks and models.
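The bi-level structure, inner-loop task adaptation and outer-loop consolidation, mirrors MAML but operates on natural-language workflows rather than weights. The sketch below shows that control flow only; `inner_update` and `outer_merge` are hypothetical placeholders for the paper's LLM-feedback refinement and consolidation steps.

```python
from typing import Callable, Sequence

def adaptflow(
    init_workflow: str,
    task_groups: Sequence[Sequence[str]],
    inner_update: Callable[[str, str], str],           # (workflow, task) -> refined workflow
    outer_merge: Callable[[str, Sequence[str]], str],  # (init, variants) -> new init
    epochs: int = 3,
) -> str:
    """Bi-level optimization over natural-language workflows, in the spirit of
    MAML: the inner loop refines the workflow per task via LLM-generated
    feedback, and the outer loop consolidates the refined variants into a
    shared, generalizable initialization."""
    workflow = init_workflow
    for _ in range(epochs):
        variants = []
        for tasks in task_groups:
            candidate = workflow
            for task in tasks:                  # inner loop: task-specific adaptation
                candidate = inner_update(candidate, task)
            variants.append(candidate)
        workflow = outer_merge(workflow, variants)  # outer loop: generalize
    return workflow
```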
ICL-Bandit: Relevance Labeling in Advertisement Recommendation Systems via LLM
Lu Wang | Chiming Duan | Pu Zhao | Fangkai Yang | Yong Shi | Xuefeng Luo | Bingjing Xu | Weiwei Deng | Qingwei Lin | Dongmei Zhang
Findings of the Association for Computational Linguistics: EMNLP 2025
Measuring the relevance between user queries and advertisements is a critical task for advertisement (ad) recommendation systems, such as Microsoft Bing Ads and Google Ads. Traditionally, this requires expert data labeling, which is both costly and time-consuming. Recent advances have explored using Large Language Models (LLMs) for labeling, but these models often lack domain-specific knowledge. In-context learning (ICL), which involves providing a few demonstrations, is a common practice to enhance LLM performance on domain-specific tasks. However, retrieving high-quality demonstrations in a vast exploration space remains challenging. In this paper, we introduce ICL-Bandit, a practical and effective approach that leverages ICL to enhance the query-ad relevance labeling capabilities of LLMs. We develop a novel bandit learning method to identify and provide superior demonstrations for ICL, thereby improving labeling performance. Experimental results demonstrate that ICL-Bandit achieves state-of-the-art performance compared to existing methods. Additionally, ICL-Bandit has been deployed in Company X, which serves billions of users worldwide, confirming its robustness and effectiveness.
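As a rough picture of bandit-driven demonstration selection: treat each candidate demonstration as an arm, and reward an arm when the LLM's relevance label with that demonstration in context agrees with a trusted label. The UCB1 sketch below is illustrative only; the paper's actual bandit formulation may differ.

```python
import math

class DemoBandit:
    """UCB1 over candidate ICL demonstrations: each arm is one demonstration,
    rewarded when the LLM's query-ad label (with that demo in context) matches
    a trusted label."""

    def __init__(self, demos: list[str]):
        self.demos = demos
        self.counts = [0] * len(demos)
        self.values = [0.0] * len(demos)
        self.total = 0

    def select(self) -> int:
        """Pick the arm with the highest upper confidence bound."""
        for i, c in enumerate(self.counts):
            if c == 0:
                return i                      # play every arm once first
        ucb = [
            v + math.sqrt(2 * math.log(self.total) / c)
            for v, c in zip(self.values, self.counts)
        ]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm: int, reward: float) -> None:
        """Incrementally update the arm's running mean reward."""
        self.counts[arm] += 1
        self.total += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```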