Supriyo Ghosh
2025
TACO-RL: Task Aware Prompt Compression Optimization with Reinforcement Learning
Shivam Shandilya | Menglin Xia | Supriyo Ghosh | Huiqiang Jiang | Jue Zhang | Qianhui Wu | Victor Rühle | Saravan Rajmohan
Findings of the Association for Computational Linguistics: ACL 2025
The increasing prevalence of large language models (LLMs) such as GPT-4 in various applications has led to a surge in the size of prompts required for optimal performance, creating challenges in computational efficiency. Prompt compression aims to reduce inference cost by minimizing input tokens without compromising task performance. However, existing prompt compression techniques either rely on sub-optimal metrics such as information entropy or model it as a task-agnostic token classification problem that fails to capture task-specific information. To address these issues, we propose a novel and efficient reinforcement learning (RL) based task-aware prompt compression method. To meet low-latency requirements, we leverage an existing Transformer encoder-based token classification model while guiding the learning process with task-specific reward signals using the lightweight REINFORCE algorithm. We evaluate our method on three diverse and challenging tasks: text summarization, question answering, and code summarization. We demonstrate that our RL-guided compression method improves task performance by 8%-189% across these three scenarios over state-of-the-art compression techniques while satisfying the same compression rate and latency requirements.
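As a rough illustration of the approach sketched in the abstract, the snippet below shows how a Transformer encoder token classifier could be fine-tuned with a task-specific reward via REINFORCE. This is a minimal sketch, not the authors' released implementation: the encoder choice and the `task_reward` function (e.g., a downstream metric computed on an LLM's output from the compressed prompt) are assumptions introduced here for illustration.

```python
# Minimal REINFORCE-guided prompt-compression sketch (not the authors' code).
# Assumes a Transformer encoder that scores each token with keep/drop logits and a
# black-box `task_reward` callable (hypothetical) that evaluates the downstream
# task output produced from the compressed prompt.
import torch
import torch.nn as nn
from transformers import AutoModel


class TokenCompressor(nn.Module):
    def __init__(self, encoder_name="xlm-roberta-large"):  # encoder choice is an assumption
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 2)  # keep / drop logits

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.head(hidden)  # (batch, seq_len, 2)


def reinforce_step(model, optimizer, batch, task_reward, baseline=0.0):
    """One REINFORCE update: sample keep/drop actions per token, score the
    compressed prompt with a task-specific reward, and increase the
    log-probability of action sequences that beat the baseline."""
    logits = model(batch["input_ids"], batch["attention_mask"])
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()                                   # 1 = keep token, 0 = drop token
    log_prob = (dist.log_prob(actions) * batch["attention_mask"]).sum(dim=-1)

    rewards = task_reward(batch, actions)                     # hypothetical, e.g. ROUGE of LLM output
    advantage = rewards - baseline                            # simple baseline to reduce variance
    loss = -(advantage.detach() * log_prob).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), rewards.mean().item()
```

Keeping the compressor as a plain token classifier means inference latency stays the same as entropy- or classification-based compressors; only training adds the reward-driven policy-gradient loop.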
CARMO: Dynamic Criteria Generation for Context Aware Reward Modelling
Taneesh Gupta | Shivam Shandilya | Xuchao Zhang | Rahul Madhavan | Supriyo Ghosh | Chetan Bansal | Huaxiu Yao | Saravan Rajmohan
Findings of the Association for Computational Linguistics: ACL 2025
Reward modeling in large language models is known to be susceptible to reward hacking, causing models to latch onto superficial features such as the tendency to generate lists or unnecessarily long responses. In RLHF, and more generally during post-training, flawed reward signals often lead to outputs that optimize for these spurious correlates instead of genuine quality or correctness. We propose **Carmo (Context-Aware Reward Modeling)**, a novel approach that first generates dynamic, context-relevant criteria to ground the reward model before producing reward scores. Unlike prior methods that use static rubrics, Carmo leverages powerful LLMs to adaptively create evaluation criteria, e.g., logical consistency, clarity, and depth, tailored to the user query. Our theoretical analysis shows that such criteria generation can mitigate reward hacking. We further demonstrate how Carmo can be distilled into smaller models, thereby lowering the computational cost of alignment. We establish new state-of-the-art performance for generative models in the zero-shot setting, with a 2.1% improvement on RewardBench. Furthermore, alignment on the Carmo-curated preference dataset achieves **22.5% LC-WR and 21.1% WR on Mistral-Base (7B)**. We release our datasets at [huggingface/CARMO](https://huggingface.co/datasets/Multi-preference-Optimization/CARMO-UltraFeedback).
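The sketch below illustrates the two-stage idea described in the abstract (generate query-specific criteria, then score a response against them). It is a minimal illustration, not the released CARMO pipeline; `call_llm` is a hypothetical helper standing in for any instruction-tuned LLM call, and the prompts and 1-10 scale are assumptions.

```python
# Sketch of context-aware criteria generation followed by criterion-grounded scoring.
# `call_llm` is a hypothetical callable: it takes a prompt string and returns the
# model's text completion.
from typing import Callable, List


def generate_criteria(call_llm: Callable[[str], str], query: str, n: int = 5) -> List[str]:
    """Ask the LLM for evaluation criteria tailored to this specific query."""
    prompt = (
        f"List {n} concise evaluation criteria for judging a response to the "
        f"following user query, one criterion per line.\n\nQuery: {query}"
    )
    return [line.strip("- ").strip() for line in call_llm(prompt).splitlines() if line.strip()]


def score_response(call_llm: Callable[[str], str], query: str, response: str,
                   criteria: List[str]) -> float:
    """Score the response against each generated criterion and average the ratings."""
    scores = []
    for criterion in criteria:
        prompt = (
            f"Rate the response on a 1-10 scale for the criterion '{criterion}'. "
            f"Reply with a single integer.\n\nQuery: {query}\n\nResponse: {response}"
        )
        try:
            scores.append(float(call_llm(prompt).strip()))
        except ValueError:
            continue  # skip unparseable ratings
    return sum(scores) / len(scores) if scores else 0.0
```

Because the criteria are regenerated per query rather than fixed in a static rubric, a response cannot systematically exploit one superficial feature (e.g., length or list formatting) across all inputs, which is the intuition behind the reward-hacking mitigation claimed in the paper.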