Suhang Zheng
Also published as: SuHang Zheng
2025
GAPO: Learning Preferential Prompt through Generative Adversarial Policy Optimization
Zhouhong Gu | Xingzhou Chen | Xiaoran Shi | Tao Wang | Suhang Zheng | Tianyu Li | Hongwei Feng | Yanghua Xiao
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent advances in large language models have highlighted the critical need for precise control over model outputs through predefined constraints. While existing methods attempt to achieve this through either direct instruction-response synthesis or preferential response optimization, they often struggle with constraint understanding and adaptation. This limitation becomes particularly evident when handling fine-grained constraints, leading to either hallucination or brittle performance. We introduce Generative Adversarial Policy Optimization (GAPO), a novel framework that combines GAN-based training dynamics with an encoder-only reward model to progressively learn and adapt to increasingly complex constraints. GAPO leverages adversarial training to automatically generate training samples of varying difficulty while utilizing the encoder-only architecture to better capture prompt-response relationships. Extensive experiments demonstrate GAPO’s superior performance across multiple benchmarks, particularly in scenarios requiring fine-grained constraint handling, where it significantly outperforms existing methods like PPO, DPO, and KTO. Our results suggest that GAPO’s unique approach to preferential prompt learning offers a more robust and effective solution for controlling LLM outputs.
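The abstract describes an adversarial loop: a generator (the policy LLM) produces responses to constrained prompts, while an encoder-only reward model scores prompt-response pairs and is trained to separate preferred responses from generated ones. The toy sketch below is not the paper's implementation; all module sizes, the REINFORCE-style surrogate, and the synthetic data are assumptions chosen purely to make the loop runnable.

```python
# Minimal illustrative sketch of a GAPO-style adversarial preference loop.
# Everything here (dimensions, toy data, surrogate losses) is an assumption
# for illustration; the paper's generator is an LLM and its reward model
# is a real encoder-only architecture.
import torch
import torch.nn as nn

VOCAB, DIM, SEQ = 100, 32, 16

class EncoderRewardModel(nn.Module):
    """Encoder-only discriminator: scores a concatenated prompt-response pair."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, 1)

    def forward(self, tokens):                        # tokens: (B, 2*SEQ)
        h = self.encoder(self.embed(tokens))          # (B, 2*SEQ, DIM)
        return self.head(h.mean(dim=1)).squeeze(-1)   # scalar reward per pair

class ToyPolicy(nn.Module):
    """Stand-in generator: per-position distribution over output tokens."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.proj = nn.Linear(DIM, VOCAB)

    def forward(self, prompt):                        # (B, SEQ) -> (B, SEQ, VOCAB)
        return self.proj(self.embed(prompt))

policy, reward_model = ToyPolicy(), EncoderRewardModel()
opt_g = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(3):
    prompt = torch.randint(0, VOCAB, (8, SEQ))        # toy constrained prompts
    preferred = torch.randint(0, VOCAB, (8, SEQ))     # toy preferred responses

    # Generator step: sample a response and push its reward up (policy gradient).
    logits = policy(prompt)
    dist = torch.distributions.Categorical(logits=logits)
    response = dist.sample()                          # (B, SEQ)
    reward = reward_model(torch.cat([prompt, response], dim=1)).detach()
    log_prob = dist.log_prob(response).sum(dim=1)
    g_loss = -(reward * log_prob).mean()              # REINFORCE-style surrogate
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Discriminator step: rank preferred pairs above generated pairs.
    r_real = reward_model(torch.cat([prompt, preferred], dim=1))
    r_fake = reward_model(torch.cat([prompt, response.detach()], dim=1))
    d_loss = -torch.log(torch.sigmoid(r_real - r_fake) + 1e-8).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    print(f"step {step}: g_loss={g_loss.item():.3f} d_loss={d_loss.item():.3f}")
```

The design choice the abstract emphasizes is visible here: because the reward model is an encoder that reads prompt and response jointly, its score can reflect whether the response satisfies the prompt's constraints, and as the discriminator improves it effectively generates progressively harder training signal for the policy.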
2020
Zero-shot Text Classification via Reinforced Self-training
Zhiquan Ye | Yuxia Geng | Jiaoyan Chen | Jingmin Chen | Xiaoxiao Xu | SuHang Zheng | Feng Wang | Jun Zhang | Huajun Chen
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Zero-shot learning has been a tough problem since no labeled data is available for unseen classes during training, especially for unseen classes with low similarity to the seen ones. In this situation, transferring from seen classes to unseen classes is extremely hard. To tackle this problem, in this paper we propose a self-training based method to efficiently leverage unlabeled data. Traditional self-training methods select instances from unlabeled data with fixed heuristics, whose performance varies across datasets. We instead propose a reinforcement learning framework that learns the data selection strategy automatically and provides more reliable selection. Experimental results on both benchmarks and a real-world e-commerce dataset show that our approach significantly outperforms previous methods in zero-shot text classification.
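The key idea in the abstract is to replace a fixed selection heuristic (e.g., a confidence threshold) with a learned selection policy rewarded by downstream performance. The toy sketch below illustrates that loop, not the paper's system: the linear classifier, the two-feature selector, the synthetic data, and the reward (held-out accuracy gain) are all assumptions made for illustration.

```python
# Toy sketch of reinforced self-training: an RL selector decides which
# pseudo-labeled instances to add to training, rewarded by dev-accuracy gain.
# All models, features, and data here are illustrative stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM, N_UNLAB, N_DEV = 16, 64, 32

classifier = nn.Linear(DIM, 2)        # stand-in text classifier
selector = nn.Linear(2, 1)            # selection policy over (confidence, margin)
opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-2)
opt_s = torch.optim.Adam(selector.parameters(), lr=1e-2)

dev_x = torch.randn(N_DEV, DIM)
dev_y = (dev_x[:, 0] > 0).long()      # synthetic held-out labels

def dev_accuracy():
    with torch.no_grad():
        return (classifier(dev_x).argmax(1) == dev_y).float().mean().item()

prev_acc = dev_accuracy()
for epoch in range(3):
    unlab_x = torch.randn(N_UNLAB, DIM)
    with torch.no_grad():
        probs = classifier(unlab_x).softmax(dim=1)
        conf, pseudo_y = probs.max(dim=1)
        margin = (probs[:, 0] - probs[:, 1]).abs()
    feats = torch.stack([conf, margin], dim=1)

    # The selector stochastically keeps or drops each pseudo-labeled instance.
    keep_prob = torch.sigmoid(selector(feats)).squeeze(-1)
    dist = torch.distributions.Bernoulli(keep_prob)
    keep = dist.sample()

    # Self-training step: fit the classifier on the selected subset.
    if keep.sum() > 0:
        sel = keep.bool()
        loss = nn.functional.cross_entropy(classifier(unlab_x[sel]), pseudo_y[sel])
        opt_c.zero_grad(); loss.backward(); opt_c.step()

    # REINFORCE update: reward the selector with the change in dev accuracy.
    acc = dev_accuracy()
    reward = acc - prev_acc
    prev_acc = acc
    s_loss = -(reward * dist.log_prob(keep).sum())
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    print(f"epoch {epoch}: dev_acc={acc:.3f} reward={reward:+.3f}")
```

Because the reward depends on how the selected instances actually affect held-out performance, the selection strategy can adapt per dataset, which is the failure mode of fixed heuristics that the abstract highlights.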