2025
Meta-Rewarding Language Models: Self-Improving Alignment with LLM-as-a-Meta-Judge
Tianhao Wu | Weizhe Yuan | Olga Golovneva | Jing Xu | Yuandong Tian | Jiantao Jiao | Jason E Weston | Sainbayar Sukhbaatar
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large Language Models (LLMs) are rapidly surpassing human knowledge in many domains. While improving these models traditionally relies on costly human data, recent self-rewarding mechanisms have shown that LLMs can improve by judging their own responses instead of relying on human labelers. However, existing methods have primarily focused on improving model responses rather than judgment capabilities, resulting in rapid saturation during iterative training. To address this issue, we introduce a novel Meta-Rewarding step to the self-improvement process, where the model judges its own judgments and uses that feedback to refine its judgment skills. Surprisingly, this unsupervised approach improves the model’s ability to judge and follow instructions, as demonstrated by a win rate improvement of Llama-3-8B-Instruct from 22.9% to 39.4% on AlpacaEval 2, and 20.6% to 29.1% on Arena-Hard. These results strongly suggest the potential for self-improving models without human supervision.
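To make the training loop concrete, below is a minimal sketch of one Meta-Rewarding data-construction round. The helpers generate, judge, and meta_judge stand for hypothetical prompted calls to the same model acting as actor, judge, and meta-judge; this is an illustration, not the authors' implementation.

```python
# A minimal sketch of one Meta-Rewarding round (illustrative only).
# generate(), judge(), and meta_judge() are hypothetical prompted calls
# to the same LLM playing actor, judge, and meta-judge.

def meta_rewarding_round(generate, judge, meta_judge, prompts,
                         n_responses=4, n_judgments=3):
    actor_pairs, judge_pairs = [], []
    for prompt in prompts:
        # 1. Actor: sample several candidate responses.
        responses = [generate(prompt) for _ in range(n_responses)]

        # 2. Judge: the model scores each of its own responses several times;
        #    judge() is assumed to return a (judgment_text, numeric_score) pair.
        judgments = [[judge(prompt, r) for _ in range(n_judgments)] for r in responses]
        scores = [sum(s for _, s in js) / len(js) for js in judgments]

        # Response preference pair: highest- vs. lowest-scored response.
        order = sorted(range(len(responses)), key=lambda i: scores[i])
        if scores[order[-1]] > scores[order[0]]:
            actor_pairs.append((prompt, responses[order[-1]], responses[order[0]]))

        # 3. Meta-judge: the model ranks its own judgments of each response,
        #    yielding preference pairs that train the judging skill itself.
        for response, js in zip(responses, judgments):
            ranked = meta_judge(prompt, response, [text for text, _ in js])  # best-first
            judge_pairs.append((prompt, response, ranked[0], ranked[-1]))

    return actor_pairs, judge_pairs  # both sets feed a DPO-style preference update
```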
Following Length Constraints in Instructions
Weizhe Yuan | Ilia Kulikov | Ping Yu | Kyunghyun Cho | Sainbayar Sukhbaatar | Jason E Weston | Jing Xu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Aligned instruction following models can better fulfill user requests than their unaligned counterparts. However, it has been shown that there is a length bias in the evaluation of such models, and that training algorithms tend to exploit this bias by learning longer responses. In this work we show how to train models that can be controlled at inference time with instructions containing desired length constraints. Such models are superior in length-instructed evaluations, outperforming standard instruction following models such as GPT-4, Llama 3 and Mixtral.
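As a small illustration of what a length-instructed evaluation looks like in practice, the sketch below appends a length constraint to a query and checks compliance; the instruction wording and word-based counting are assumptions, not the paper's exact protocol.

```python
def length_instructed_prompt(query: str, max_words: int) -> str:
    # The wording of the length instruction here is an assumption, not the paper's template.
    return f"{query}\n\nAnswer the above in at most {max_words} words."

def violates_length_constraint(response: str, max_words: int) -> bool:
    return len(response.split()) > max_words

prompt = length_instructed_prompt("Explain why the sky is blue.", 50)
# A length-instructed evaluation rewards answer quality only among responses
# for which violates_length_constraint(...) is False.
```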
Step-KTO: Optimizing Mathematical Reasoning through Stepwise Binary Feedback
Yen-Ting Lin | Di Jin | Tengyu Xu | Tianhao Wu | Sainbayar Sukhbaatar | Chen Zhu | Yun He | Yun-Nung Chen | Jason E Weston | Yuandong Tian | Arash Rahnama | Sinong Wang | Hao Ma | Han Fang
Proceedings of The 3rd Workshop on Mathematical Natural Language Processing (MathNLP 2025)
Large language models (LLMs) have recently demonstrated remarkable success in mathematical reasoning. Despite progress in methods like chain-of-thought prompting and self-consistency sampling, these advances often focus on final correctness without ensuring that the underlying reasoning process is coherent and reliable. This paper introduces Step-KTO, a training framework that combines process-level and outcome-level binary feedback to guide LLMs toward more trustworthy reasoning trajectories. By providing binary evaluations for both the intermediate reasoning steps and the final answer, Step-KTO encourages the model to adhere to logical progressions rather than relying on superficial shortcuts. Our experiments on challenging mathematical benchmarks show that Step-KTO significantly improves both final answer accuracy and the quality of intermediate reasoning steps. For example, on the MATH-500 dataset, Step-KTO achieves a notable improvement in Pass@1 accuracy over strong baselines. These results highlight the promise of integrating stepwise process feedback into LLM training, paving the way toward more interpretable and dependable reasoning capabilities.
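As a rough illustration of the kind of data such a framework consumes, the sketch below pairs per-step binary labels with an outcome label and collapses them into one scalar; the weighting scheme and the KTO-style loss that would consume this signal are assumptions, not the paper's objective.

```python
# Sketch of combining process-level and outcome-level binary feedback
# for one reasoning trace (illustrative; not the exact Step-KTO objective).

from dataclasses import dataclass
from typing import List

@dataclass
class ReasoningTrace:
    problem: str
    steps: List[str]          # intermediate reasoning steps
    step_labels: List[bool]   # binary feedback per step (e.g., from a process verifier)
    answer: str
    answer_correct: bool      # binary outcome feedback on the final answer

def binary_feedback_signal(trace: ReasoningTrace, step_weight: float = 0.5) -> float:
    """Collapse stepwise and final feedback into a single scalar in [0, 1]."""
    step_score = sum(trace.step_labels) / max(len(trace.step_labels), 1)
    outcome_score = 1.0 if trace.answer_correct else 0.0
    return step_weight * step_score + (1.0 - step_weight) * outcome_score
```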
2023
The CRINGE Loss: Learning what language not to model
Leonard Adolphs | Tianyu Gao | Jing Xu | Kurt Shuster | Sainbayar Sukhbaatar | Jason Weston
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Standard language model training employs gold human documents or human-human interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amounts of positive training data, issues remain that can be alleviated with relatively small amounts of negative data – examples of what the model should not do. In this work, we propose a novel procedure to train with such data called the “CRINGE” loss (ContRastive Iterative Negative GEneration). We show the effectiveness of this approach across three different experiments on the tasks of safe generation, contradiction avoidance, and open-domain dialogue. Our models outperform multiple strong baselines and are conceptually simple, easy to train and implement.
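The CRINGE loss contrasts each token of a negative sequence against a "positive" token sampled from the model's own top predictions. A minimal PyTorch sketch of that token-level contrast follows; it omits details such as masking the case where the sampled token equals the negative token and the iterative retraining loop, so treat it as an illustration rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def cringe_token_loss(logits, neg_tokens, k=5):
    """Contrastive loss for tokens of a negative sequence (simplified sketch).

    logits:     [seq_len, vocab] model scores at each position of the negative sequence
    neg_tokens: [seq_len] the tokens the model should NOT generate
    """
    # Candidate "positive" tokens come from the model's own top-k predictions.
    topk_scores, topk_ids = logits.topk(k, dim=-1)
    probs = F.softmax(topk_scores, dim=-1)
    sampled = torch.multinomial(probs, 1)                       # [seq_len, 1]
    pos_scores = topk_scores.gather(-1, sampled).squeeze(-1)    # [seq_len]
    neg_scores = logits.gather(-1, neg_tokens.unsqueeze(-1)).squeeze(-1)
    # Push the sampled positive token above the negative token at every position.
    pair = torch.stack([pos_scores, neg_scores], dim=-1)        # [seq_len, 2]
    return F.cross_entropy(pair, torch.zeros(len(neg_tokens), dtype=torch.long))
```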
2022
Director: Generator-Classifiers For Supervised Language Modeling
Kushal Arora | Kurt Shuster | Sainbayar Sukhbaatar | Jason Weston
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Current language models achieve low perplexity but their resulting generations still suffer from toxic responses, repetitiveness, and contradictions. The standard language modeling setup fails to address these issues. In this paper, we introduce a new architecture, Director, that consists of a unified generator-classifier with both a language modeling and a classification head for each output token. Training is conducted jointly using both standard language modeling data, and data labeled with desirable and undesirable sequences. Experiments in several settings show that the model has competitive training and decoding speed compared to standard language models while yielding superior results, avoiding undesirable behaviors while maintaining generation quality. It also outperforms existing model guiding approaches in terms of both accuracy and efficiency. Our code is made publicly available.
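A rough sketch of the generator-classifier idea is below, assuming a shared decoder trunk whose hidden states feed both a language-modeling head and a per-token classification head; the gamma-weighted combination at decoding time is a simplification, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectorHead(nn.Module):
    """Sketch of a unified generator-classifier: one shared decoder trunk feeds
    both heads (illustrative; dimensions and combination rule are assumptions)."""

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.lm_head = nn.Linear(hidden_size, vocab_size)   # next-token prediction
        self.cls_head = nn.Linear(hidden_size, vocab_size)  # desirable vs. undesirable per candidate token

    def forward(self, hidden_states: torch.Tensor, gamma: float = 1.0):
        lm_logprobs = torch.log_softmax(self.lm_head(hidden_states), dim=-1)
        # Log-probability that generating each candidate token stays desirable.
        cls_logprobs = F.logsigmoid(self.cls_head(hidden_states))
        # Decoding mixes both heads; gamma controls the classifier's influence.
        return lm_logprobs + gamma * cls_logprobs
```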
2019
Adaptive Attention Span in Transformers
Sainbayar Sukhbaatar | Edouard Grave | Piotr Bojanowski | Armand Joulin
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers, while maintaining control over their memory footprint and computational time. We show the effectiveness of our approach on the task of character-level language modeling, where we achieve state-of-the-art performance on text8 and enwiki8 by using a maximum context of 8k characters.
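The mechanism can be pictured as a soft ramp mask over attention distances whose width is learned. A simplified PyTorch sketch follows; a single span parameter is shared here for brevity, whereas the paper learns one per attention head and also penalizes large spans in the loss, both of which are omitted.

```python
import torch
import torch.nn as nn

class AdaptiveSpanMask(nn.Module):
    """Soft mask over attention distances with a learnable span (simplified sketch)."""

    def __init__(self, max_span: int, ramp: int = 32):
        super().__init__()
        self.ramp = ramp
        # z is the learnable span; the paper uses one per head, one here for brevity.
        self.z = nn.Parameter(torch.tensor(float(max_span) / 2))

    def forward(self, attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: [..., span], last dim indexes distance to the attended past token.
        distance = torch.arange(attn_weights.size(-1), device=attn_weights.device)
        # Ramp mask: 1 within the span, linearly decaying to 0 over `ramp` positions.
        mask = torch.clamp((self.ramp + self.z - distance) / self.ramp, 0.0, 1.0)
        masked = attn_weights * mask
        # Re-normalize so the masked attention weights still sum to one.
        return masked / masked.sum(dim=-1, keepdim=True).clamp_min(1e-8)
```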
Training Hybrid Language Models by Marginalizing over Segmentations
Edouard Grave | Sainbayar Sukhbaatar | Piotr Bojanowski | Armand Joulin
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
In this paper, we study the problem of hybrid language modeling, that is, using models which can predict both characters and larger units such as character n-grams or words. With such models, multiple potential segmentations usually exist for a given string, for example one using words and one using characters only. Thus, the probability of a string is the sum of the probabilities of all its possible segmentations. Here, we show how to marginalize over the segmentations efficiently, in order to compute the true probability of a sequence. We apply our technique to three datasets, comprising seven languages, showing improvements over a strong character-level language model.
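The marginalization can be computed with a simple forward dynamic program over prefix positions. A sketch follows, assuming a stand-in scoring function segment_logprob rather than the paper's actual model; with a bounded maximum segment length, the cost stays linear in sequence length up to that constant.

```python
import math

def marginal_log_prob(seq, segment_logprob, max_len):
    """Sum over all segmentations of `seq` with a dynamic program (sketch).

    segment_logprob(prefix, segment) is an assumed interface returning the
    log-probability of emitting `segment` as one unit (character, n-gram, or
    word) after having generated `prefix`.
    """
    def logsumexp(xs):
        m = max(xs)
        return m + math.log(sum(math.exp(x - m) for x in xs))

    n = len(seq)
    # alpha[i] = log-probability of prefix seq[:i], summed over its segmentations.
    alpha = [0.0] + [-math.inf] * n
    for i in range(1, n + 1):
        candidates = [alpha[j] + segment_logprob(seq[:j], seq[j:i])
                      for j in range(max(0, i - max_len), i)]
        alpha[i] = logsumexp(candidates)
    return alpha[n]
```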