FRACTAL: Fine-Grained Scoring from Aggregate Text Labels
Yukti Makhija | Priyanka Agrawal | Rishi Saket | Aravindan Raghuveer
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Fine-tuning of LLMs using RLHF/RLAIF has been shown to be a critical step in improving the performance of LLMs on complex generation tasks. These methods typically use response-level human or model feedback for alignment. Recent works indicate that finer-grained sentence- or span-level labels provide more accurate and interpretable feedback for LLM optimization. In this work, we propose FRACTAL, a suite of models to disaggregate response-level labels into sentence-level (pseudo-)labels through Multiple Instance Learning (MIL) and Learning from Label Proportions (LLP) formulations, novel usage of prior information, and maximum likelihood calibration. We perform close to 2000 experiments across 6 datasets and 4 tasks, showing that FRACTAL can reach up to 93% of the performance of the fully supervised baseline while requiring only around 10% of the gold labels. Furthermore, in a downstream evaluation, employing step-level pseudo-scores in RLHF for a math reasoning task leads to a 5% absolute improvement in performance. Our work is the first to develop techniques that convert response-level feedback into sentence-level scores by leveraging sentence-level prior information, along with comprehensive evaluations on multiple tasks as well as end-to-end fine-tuning evaluations.
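The core idea in the abstract, scoring individual sentences while supervising only at the response (bag) level, can be illustrated with a minimal MIL sketch. This is not FRACTAL's actual model: it assumes a toy linear scorer, mean pooling as the bag aggregator, and a squared bag-level loss, whereas the paper combines MIL/LLP formulations with sentence-level priors and maximum likelihood calibration.

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def make_bag(n_sent, d, w_true):
    """One 'response': a bag of sentence vectors with a single bag label."""
    X = rng.normal(size=(n_sent, d))
    inst = (X @ w_true > 0).astype(float)   # hidden sentence-level labels
    return X, inst, float(inst.max())       # bag label = OR over sentences


d, n_bags = 8, 200
w_true = rng.normal(size=d)
bags = [make_bag(int(rng.integers(3, 8)), d, w_true) for _ in range(n_bags)]

# Train an instance-level scorer from bag labels only: pool sentence scores
# into a bag score (mean pooling here) and regress it onto the bag label.
w = np.zeros(d)
lr = 0.5
for _ in range(300):
    grad = np.zeros(d)
    for X, _, y in bags:
        p = sigmoid(X @ w)        # sentence-level scores
        q = p.mean()              # bag-level prediction
        # d/dw of (q - y)^2, chain rule through mean pooling and the sigmoid
        grad += 2.0 * (q - y) * ((p * (1 - p)) @ X) / len(p)
    w -= lr * grad / n_bags

# The learned sentence scores act as fine-grained pseudo-labels.
inst_true = np.concatenate([inst for _, inst, _ in bags])
inst_pred = np.concatenate([sigmoid(X @ w) for X, _, _ in bags])
print("sentence pseudo-label accuracy:",
      round(float(((inst_pred > 0.5) == inst_true).mean()), 3))
```

Mean pooling corresponds to an LLP-style view in which the bag prediction is the average instance score; swapping in max pooling would instead encode the classic MIL assumption that a single positive sentence makes the whole response positive.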