Zexu Sun
2025
KAPA: A Deliberative Agent Framework with Tree-Structured Knowledge Base for Multi-Domain User Intent Understanding
Jiakai Tang | Shiqi Shen | Zhipeng Wang | Gong Zhi | Xueyang Feng | Zexu Sun | Haoran Tan | Xu Chen
Findings of the Association for Computational Linguistics: ACL 2025
Dialogue assistants have become ubiquitous in modern applications, fundamentally reshaping human daily communication patterns and information access behaviors. In real-world conversational interactions, however, user queries are often volatile, ambiguous, and diverse, making it difficult to accurately and efficiently grasp the user's underlying intentions. To address this challenge, we propose a simple yet effective deliberative agent framework that leverages human thought processes to build high-level domain knowledge. To further achieve efficient knowledge accumulation and retrieval, we design a tree-structured knowledge base to store refined experience and data. Moreover, we construct a new benchmark, User-Intent-Understanding (UIU), which covers multi-domain, multi-tone, and sequential multi-turn personalized user queries. Extensive experiments demonstrate the effectiveness of our proposed method across multi-step evaluations.
2024
Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment
Yiju Guo | Ganqu Cui | Lifan Yuan | Ning Ding | Zexu Sun | Bowen Sun | Huimin Chen | Ruobing Xie | Jie Zhou | Yankai Lin | Zhiyuan Liu | Maosong Sun
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Alignment in artificial intelligence pursues consistency between model responses and human preferences and values. In practice, the multifaceted nature of human preferences inadvertently introduces what is known as the "alignment tax": a compromise where enhancements in alignment on one objective (e.g., harmlessness) can diminish performance on others (e.g., helpfulness). However, existing alignment techniques are mostly unidirectional, leading to suboptimal trade-offs and poor flexibility across objectives. To navigate this challenge, we argue for the importance of grounding LLMs in explicit preferences. We introduce controllable preference optimization (CPO), which explicitly specifies preference scores for different objectives, thereby guiding the model to generate responses that meet the requirements. Our experimental analysis reveals that the aligned models can provide responses matching various preferences among the "3H" (helpfulness, honesty, harmlessness) desiderata. Furthermore, by introducing diverse data and alignment goals, we surpass baseline methods in aligning with single objectives, thereby mitigating the alignment tax and achieving improvements in multi-objective alignment.