Matthieu Geist


2024

Contrastive Policy Gradient: Aligning LLMs on sequence-level scores in a supervised-friendly fashion
Yannis Flet-Berliac | Nathan Grinsztajn | Florian Strub | Eugene Choi | Bill Wu | Chris Cremer | Arash Ahmadian | Yash Chandak | Mohammad Gheshlaghi Azar | Olivier Pietquin | Matthieu Geist
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Reinforcement Learning (RL) has been used to finetune Large Language Models (LLMs) using a reward model trained from preference data, to better align with human judgment. The recently introduced direct alignment methods, which are often simpler, more stable, and computationally lighter, can more directly achieve this. However, these approaches cannot optimize arbitrary rewards, and preference-based rewards are not the only ones of interest for LLMs (e.g., unit tests for code generation or textual entailment for summarization, among others). RL-finetuning is usually done with a variation of policy gradient, which calls for on-policy or near-on-policy samples, requiring costly generations. We introduce *Contrastive Policy Gradient*, or CoPG, a simple and mathematically principled new RL algorithm that can estimate the optimal policy even from off-policy data. It can be seen as an off-policy policy gradient approach that does not rely on importance sampling techniques and highlights the importance of using (the right) state baseline. We show this approach to generalize the direct alignment method IPO (identity preference optimization) and classic policy gradient. We experiment with the proposed CoPG on a toy bandit problem to illustrate its properties, as well as for finetuning LLMs on a summarization task, using a learned reward function considered as ground truth for the purpose of the experiments.
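
To make the pairwise, baselined flavor of such an objective concrete, here is a minimal illustrative sketch in Python/PyTorch of an advantage-weighted policy-gradient loss over pairs of completions. This is not the paper's exact CoPG objective; the tensor names (`logp_a`, `reward_a`, etc.) are assumptions made for illustration only.

```python
# Illustrative sketch only: a generic contrastive, baselined policy-gradient
# loss on pairs of completions. NOT the exact CoPG objective from the paper;
# the tensor names below are assumptions for illustration.
import torch


def pairwise_pg_loss(logp_a: torch.Tensor,
                     logp_b: torch.Tensor,
                     reward_a: torch.Tensor,
                     reward_b: torch.Tensor) -> torch.Tensor:
    """Pairwise policy-gradient-style surrogate loss.

    Each completion's reward is baselined by its partner's reward, so the
    sketch needs no separate learned value function.
    """
    # Advantage of each completion relative to the other one in its pair.
    adv_a = reward_a - reward_b
    adv_b = reward_b - reward_a
    # REINFORCE-style surrogate: maximize advantage-weighted log-likelihood.
    return -(adv_a.detach() * logp_a + adv_b.detach() * logp_b).mean()
```

In this sketch, the other completion in the pair plays the role of the state baseline, which is one simple way to avoid a learned value network; the actual CoPG loss and its treatment of off-policy data are given in the paper.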

2023

Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback
Paul Roit | Johan Ferret | Lior Shani | Roee Aharoni | Geoffrey Cideron | Robert Dadashi | Matthieu Geist | Sertan Girgin | Leonard Hussenot | Orgad Keller | Nikola Momchev | Sabela Ramos Garea | Piotr Stanczyk | Nino Vieillard | Olivier Bachem | Gal Elidan | Avinatan Hassidim | Olivier Pietquin | Idan Szpektor
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Despite the seeming success of contemporary grounded text generation systems, they often generate text that is factually inconsistent with their input. This phenomenon is emphasized in tasks like summarization, in which the generated summaries should be corroborated by their source article. In this work, we leverage recent progress on textual entailment models to directly address this problem for abstractive summarization systems. We use reinforcement learning with reference-free, textual-entailment rewards to optimize for factual consistency and explore the ensuing trade-offs, as improved consistency may come at the cost of less informative or more extractive summaries. Our results, according to both automatic metrics and human evaluation, show that our method considerably improves the faithfulness, salience, and conciseness of the generated summaries.
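
As a hedged sketch of how a reference-free entailment reward of this kind could be computed in practice: the NLI checkpoint (`roberta-large-mnli`) and the premise/hypothesis framing below are assumptions for illustration, not necessarily what the paper used.

```python
# Illustrative sketch only: scoring a summary with a reference-free textual
# entailment reward, in the spirit described above. The checkpoint name and
# premise/hypothesis framing are assumptions, not the paper's exact setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()


def entailment_reward(source: str, summary: str) -> float:
    """Probability that the source document entails the summary,
    usable as a scalar reward during RL finetuning."""
    inputs = tokenizer(source, summary, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze(0)
    # Label order for this checkpoint: 0=contradiction, 1=neutral, 2=entailment.
    return probs[2].item()
```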

2013

Model-free POMDP optimisation of tutoring systems with echo-state networks
Lucie Daubigney | Matthieu Geist | Olivier Pietquin
Proceedings of the SIGDIAL 2013 Conference

2012

Optimisation d’un tuteur intelligent à partir d’un jeu de données fixé (Optimization of a tutoring system from a fixed set of data) [in French]
Lucie Daubigney | Matthieu Geist | Olivier Pietquin
Proceedings of the Joint Conference JEP-TALN-RECITAL 2012, volume 1: JEP

2010

Sparse Approximate Dynamic Programming for Dialog Management
Senthilkumar Chandramohan | Matthieu Geist | Olivier Pietquin
Proceedings of the SIGDIAL 2010 Conference