Zihao Li
2025
Token-level Preference Self-Alignment Optimization for Multi-style Outline Controllable Generation
Zihao Li | Xuekong Xu | Ziyao Chen | Lixin Zou | Ethanhjwu Ethanhjwu | Qiang Chen | Chenliang Li
Findings of the Association for Computational Linguistics: ACL 2025
Multi-style outline controllable generation is crucial for multiple applications, including document semantic structuring and retrieval-augmented generation. The great success of preference alignment approaches encourages their application in controllable generation tasks. However, these attempts encounter several limitations: (1) response pair requirements, (2) substantial computation costs, and (3) insufficient exploitation of fine-grained preference signals. To address these problems, we propose a token-level preference self-alignment optimization, named TKPO, for outline controllable generation. TKPO extends the Bradley-Terry model from pair-wise to list-wise comparison, which is further applied at the token level to exploit fine-grained preference signals. Unlike representative methods such as DPO, TKPO does not require response pairs; instead, we propose a controllable-attributes-driven method to construct reject samples for self-alignment. Additionally, TKPO optimizes only the base model, thereby avoiding additional memory usage and substantial computational costs. We curate two outline controllable generation datasets covering language style and level of detail. Extensive experiments demonstrate that TKPO outperforms DPO by up to 19.28% while requiring only 56.25% of its training time. We release the code and dataset resources at https://github.com/WHUIR/TKPO.
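The abstract does not give TKPO's objective in closed form, but the core idea it names (a list-wise Bradley-Terry comparison applied per token, with one accepted sample ranked above constructed rejects) corresponds to a Plackett-Luce likelihood. The sketch below is a minimal illustration of that idea, not the paper's implementation; the function name, tensor shapes, and score aggregation are all assumptions.

```python
import torch

def token_level_pl_loss(token_scores: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # Hypothetical sketch, not the TKPO code.
    # token_scores: (L, T) per-token scores for L responses ranked
    # best-first (index 0 = the accepted sample, the rest = constructed
    # reject samples); mask: (L, T) with 1.0 for real tokens, 0.0 for pad.
    L, _ = token_scores.shape
    nll = token_scores.new_zeros(())
    for k in range(L - 1):
        # Plackett-Luce step: the rank-k response should "win" against
        # every lower-ranked response at each token position.
        rest = token_scores[k:]                                   # (L-k, T)
        log_win = token_scores[k] - torch.logsumexp(rest, dim=0)  # (T,)
        nll = nll - (log_win * mask[k]).sum() / mask[k].sum().clamp(min=1.0)
    return nll / (L - 1)

# Toy usage: one accepted response ranked above three reject samples.
scores = torch.randn(4, 16, requires_grad=True)
loss = token_level_pl_loss(scores, torch.ones(4, 16))
loss.backward()
```

With a list length of two, each Plackett-Luce step reduces to the familiar pair-wise Bradley-Terry log-sigmoid term, which is consistent with the abstract's framing of list-wise comparison as a generalization of the pair-wise case.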
CharacterCraft: Bridging the Literature-Reality Dialogue Gap for Practical Role-Playing Agents
Xuyan Yin | Xinran Yang | Zihao Li | Lixin Zou | Chenliang Li
Findings of the Association for Computational Linguistics: EMNLP 2025
Recent advancements in large language models (LLMs) have given rise to role-playing agents (RPAs). The development of high-quality dialogue datasets is critical for advancing RPAs, but existing datasets have two main issues: (1) a mismatch between query distributions and real-world user language usage, and (2) the difficulty of ensuring that responses accurately reflect character traits. To address these issues, we propose CharacterCraft, a novel framework for practical RPAs comprising a tailored Chinese role-playing dataset and a robust evaluation method. First, we develop a specialized model for Chinese dialogue extraction, achieving state-of-the-art performance. Using this model, we extract a large amount of character dialogue from novels, ensuring high data quality (issue 2). To mitigate the literature-reality dialogue bias in the extracted dialogue (issue 1), we introduce an iterative augmentation-reconstruction method that revises queries to better align with common language usage. Additionally, we propose a context-aware memory retrieval module for fine-grained alignment with the character, and we introduce a reference-guided LLM-as-a-judge evaluation method that yields more reliable assessments by comparing model responses to source-material dialogues. Our automated pipeline produces a large-scale, high-quality Chinese role-playing dataset with 21,392 samples and 121,418 utterances. Experimental results demonstrate the effectiveness of our framework and reveal the limitations of existing RPAs when faced with diverse scenes. Our repository is at https://github.com/yin214/CharacterCraft.
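The abstract does not describe how the context-aware memory retrieval module is implemented. One plausible reading is a nearest-neighbor lookup over embedded character-dialogue snippets, where the query embedding already folds in recent conversation turns. The sketch below illustrates only that generic pattern; every name in it is hypothetical and none of it comes from the CharacterCraft repository.

```python
import numpy as np

def retrieve_memories(query_vec: np.ndarray,
                      memory_vecs: np.ndarray,
                      memories: list[str],
                      top_k: int = 3) -> list[str]:
    # Hypothetical sketch: rank stored character-dialogue snippets by
    # cosine similarity to the query embedding (assumed to already encode
    # recent conversation context, which makes the lookup "context-aware").
    q = query_vec / (np.linalg.norm(query_vec) + 1e-8)
    m = memory_vecs / (np.linalg.norm(memory_vecs, axis=1, keepdims=True) + 1e-8)
    sims = m @ q                        # cosine similarity per memory
    top = np.argsort(-sims)[:top_k]     # indices of the best matches
    return [memories[i] for i in top]
```

The retrieved snippets would then be placed in the RPA's prompt so its reply stays grounded in the character's source-material voice, which is the fine-grained alignment the abstract describes.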