2025
Towards Better Value Principles for Large Language Model Alignment: A Systematic Evaluation and Enhancement
Bingbing Xu | Jing Yao | Xiaoyuan Yi | Aishan Maoliniyazi | Xing Xie | Xiaofeng Meng
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
As Large Language Models (LLMs) advance, aligning them with human values is critical for their responsible development. Value principles serve as the foundation for clarifying alignment goals. Multiple sets of value principles have been proposed, such as HHH (helpful, honest, harmless) and instructions for data synthesis in reinforcement learning from AI feedback (RLAIF). However, most of them are heuristically crafted, without consideration of three primary challenges in practical LLM alignment: 1) Comprehensiveness to deal with diverse and even unforeseen scenarios in which LLMs could be applied; 2) Precision to provide LLMs with clear and actionable guidance in specific scenarios; and 3) Compatibility to avoid internal conflicts between principles. In this paper, we formalize quantitative metrics to evaluate value principles along these three desirable properties. Building on these metrics, we propose the Hierarchical Value Principle framework (HiVaP), which constructs a hierarchical principle set and retrieves principles tailored to each scenario in a cascading way, addressing the above challenges. Experimental results validate that the three metrics capture the effectiveness of value principles for LLM alignment, and that our HiVaP framework, which enhances these metrics, leads to superior alignment. Warning: This paper contains several toxic and offensive statements.