Abstract
This paper addresses the challenges of aligning large language models (LLMs) with human values via preference learning (PL), focusing on incomplete and corrupted data in preference datasets. We propose a novel method for robustly and completely recalibrating values within these datasets to enhance LLMs’ resilience against these issues. In particular, we devise a ranking algorithm with polynomial-time guarantees that robustifies several existing models, such as the classic Bradley–Terry–Luce (BTL) model and certain generalizations of it. To the best of our knowledge, this work is the first to propose an algorithm that provably recovers an ε-optimal ranking with high probability while tolerating up to O(n) perturbed pairwise comparison results per model response. Furthermore, we show robust recovery results in the partially observed setting. Our experiments confirm that our algorithms handle adversarial noise and unobserved comparisons well in LLM preference dataset settings. This work contributes to the development and scaling of more reliable and ethically aligned AI models by equipping the dataset curation pipeline with the ability to handle missing and maliciously manipulated inputs.
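As background (not part of the paper's abstract), the classic Bradley–Terry–Luce model referenced above posits that response i is preferred over response j with probability

$$\Pr(i \succ j) \;=\; \frac{w_i}{w_i + w_j} \;=\; \frac{e^{\theta_i}}{e^{\theta_i} + e^{\theta_j}},$$

where $w_i > 0$ (equivalently $\theta_i = \log w_i$) is the latent quality score of response $i$. The paper's robustified ranking procedure for corrupted and partially observed comparison matrices may use a different formulation and notation.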
- Anthology ID: 2024.dash-1.5
- Volume: Proceedings of the Fifth Workshop on Data Science with Human-in-the-Loop (DaSH 2024)
- Month: June
- Year: 2024
- Address: Mexico City, Mexico
- Editors: Eduard Dragut, Yunyao Li, Lucian Popa, Slobodan Vucetic, Shashank Srivastava
- Venues: DaSH | WS
- Publisher: Association for Computational Linguistics
- Pages: 31–39
- URL: https://aclanthology.org/2024.dash-1.5
- DOI: 10.18653/v1/2024.dash-1.5
- Cite (ACL): Son The Nguyen, Niranjan Uma Naresh, and Theja Tulabandhula. 2024. CURATRON: Complete and Robust Preference Data for Rigorous Alignment of Large Language Models. In Proceedings of the Fifth Workshop on Data Science with Human-in-the-Loop (DaSH 2024), pages 31–39, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal): CURATRON: Complete and Robust Preference Data for Rigorous Alignment of Large Language Models (Nguyen et al., DaSH-WS 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.dash-1.5.pdf