Inverse Reinforcement Learning Meets Large Language Model Alignment

Mihaela van der Schaar, Hao Sun


Abstract
In the era of Large Language Models (LLMs), alignment has emerged as a fundamental yet challenging problem in the pursuit of more reliable, controllable, and capable machine intelligence. The recent success of reasoning models and conversational AI systems has underscored the critical role of reinforcement learning (RL) in enhancing these systems, driving increased research interest at the intersection of RL and LLM alignment.

This tutorial will provide a comprehensive review of recent advances in LLM alignment through the lens of inverse reinforcement learning (IRL), emphasizing the distinctions between the RL techniques employed in LLM alignment and those used in conventional RL tasks. In particular, we highlight the necessity of constructing neural reward models from human data and discuss the formal and practical implications of this paradigm shift. The tutorial will begin with fundamental concepts in RL to provide a foundation for audience members unfamiliar with the field. We then examine recent advances in this research agenda, discussing key challenges and opportunities in conducting IRL for LLM alignment. Beyond methodological considerations, we explore practical aspects, including datasets, benchmarks, evaluation metrics, infrastructure, and computationally efficient training and inference techniques.

Finally, we draw insights from the literature on sparse-reward RL to identify open questions and potential research directions. By synthesizing findings from diverse studies, we aim to provide a structured and critical overview of the field, highlight unresolved challenges, and outline promising future directions for improving LLM alignment through RL and IRL techniques.
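The reward-modeling step highlighted in the abstract, learning a neural reward model from pairwise human preference data, is commonly instantiated with a Bradley-Terry objective. Below is a minimal illustrative sketch of that objective, not code from the tutorial itself: the function name and toy tensors are hypothetical, and it assumes a reward model has already produced scalar scores for each (chosen, rejected) response pair.

```python
# A minimal sketch of the Bradley-Terry objective used to fit a scalar
# reward model on pairwise human preferences. All names and the toy
# numbers below are illustrative placeholders, not code from the tutorial.
import torch
import torch.nn.functional as F

def bradley_terry_loss(chosen_rewards: torch.Tensor,
                       rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the Bradley-Terry preference model:
    P(chosen preferred over rejected) = sigmoid(r_chosen - r_rejected)."""
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage: scalar rewards a model assigned to three (chosen, rejected) pairs.
chosen = torch.tensor([1.2, 0.7, 0.3])
rejected = torch.tensor([0.4, 0.9, -0.1])
print(bradley_terry_loss(chosen, rejected))  # loss to minimize during training
```

Minimizing this loss pushes the reward model to score human-preferred responses above rejected ones, which is the sense in which reward modeling from human data acts as an inverse RL step preceding policy optimization.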
Anthology ID:
2025.acl-tutorials.1
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Yuki Arase, David Jurgens, Fei Xia
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-tutorials.1/
Cite (ACL):
Mihaela van der Schaar and Hao Sun. 2025. Inverse Reinforcement Learning Meets Large Language Model Alignment. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts), pages 1–1, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Inverse Reinforcement Learning Meets Large Language Model Alignment (van der Schaar & Sun, ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-tutorials.1.pdf