Sravan Babu Bodapati


2025

Accelerated Test-Time Scaling with Model-Free Speculative Sampling
Woomin Song | Saket Dingliwal | Sai Muralidhar Jayanthi | Bhavana Ganesh | Jinwoo Shin | Aram Galstyan | Sravan Babu Bodapati
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Language models have demonstrated remarkable capabilities in reasoning tasks through test-time scaling techniques like best-of-N sampling and tree search. However, these approaches often demand substantial computational resources, creating a critical trade-off between performance and efficiency. We introduce STAND (STochastic Adaptive N-gram Drafting), a novel model-free speculative decoding approach that exploits the inherent redundancy in reasoning trajectories to achieve significant acceleration without compromising accuracy. Our analysis shows that reasoning paths frequently reuse similar reasoning patterns, enabling efficient model-free token prediction without requiring separate draft models. By introducing stochastic drafting and preserving probabilistic information through a memory-efficient logit-based N-gram module, combined with optimized Gumbel-Top-K sampling and data-driven tree construction, STAND significantly improves token acceptance rates. Extensive evaluations across multiple models and reasoning tasks (AIME-2024, GPQA-Diamond, and LiveCodeBench) demonstrate that STAND reduces inference latency by 60-65% compared to standard autoregressive decoding while maintaining accuracy. Furthermore, STAND consistently outperforms state-of-the-art speculative decoding methods across diverse inference patterns, including single-trajectory decoding, batch decoding, and test-time tree search. As a model-free approach, STAND can be applied to any existing language model without additional training, making it a powerful plug-and-play solution for accelerating language model reasoning.
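The abstract's core mechanism, a logit-based N-gram table queried with Gumbel-Top-K sampling, can be sketched in a few lines of Python. This is a minimal illustrative sketch rather than the authors' implementation: the class name NGramDrafter, the additive logit accumulation, and the fixed draft width k are assumptions made for clarity.

```python
# Minimal sketch of stochastic N-gram drafting with Gumbel-Top-K sampling,
# assuming a toy logit-based N-gram table (not the paper's exact design).
import numpy as np
from collections import defaultdict

class NGramDrafter:
    def __init__(self, vocab_size, n=3, k=4):
        self.n, self.k = n, k
        # Map an (n-1)-token context to an accumulated logit vector,
        # preserving probabilistic information rather than raw counts.
        self.table = defaultdict(lambda: np.zeros(vocab_size))

    def update(self, tokens, logits):
        # Record the model's logits observed at each (n-1)-gram context.
        # tokens: list of token ids; logits: array [seq_len, vocab_size].
        for i in range(self.n - 1, len(tokens)):
            ctx = tuple(tokens[i - self.n + 1:i])
            self.table[ctx] += logits[i]

    def draft(self, context):
        # Propose k candidate next tokens by perturbing the stored logits
        # with Gumbel noise and taking the top-k indices, which is
        # equivalent to sampling k tokens without replacement.
        ctx = tuple(context[-(self.n - 1):])
        if ctx not in self.table:
            return []
        scores = self.table[ctx]
        gumbel = -np.log(-np.log(np.random.uniform(size=scores.shape)))
        return np.argsort(scores + gumbel)[::-1][:self.k].tolist()
```

Because drafting only consults this table, no separate draft model runs at inference time; the stochastic perturbation lets repeated drafts explore different plausible continuations instead of always proposing the single most frequent one.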

Wanda++: Pruning Large Language Models via Regional Gradients
Yifan Yang | Kai Zhen | Bhavana Ganesh | Aram Galstyan | Goeric Huybrechts | Markus Müller | Jonas M. Kübler | Rupak Vignesh Swaminathan | Athanasios Mouchtaris | Sravan Babu Bodapati | Nathan Susanj | Zheng Zhang | Jack FitzGerald | Abhishek Kumar
Findings of the Association for Computational Linguistics: ACL 2025

Large Language Model (LLM) pruning seeks to remove unimportant weights to speed up inference with minimal accuracy impact. However, existing methods often suffer from accuracy degradation without full-model sparsity-aware fine-tuning. This paper presents Wanda++, a novel pruning framework that outperforms state-of-the-art methods by utilizing decoder-block-level regional gradients. Specifically, Wanda++ is the first to improve the pruning score with regional gradients, and it proposes an efficient regional optimization method to minimize pruning-induced discrepancies between dense and sparse decoder outputs. Notably, Wanda++ improves perplexity by up to 32% over Wanda on language modeling and generalizes effectively to downstream tasks. Moreover, despite updating weights through regional optimization, Wanda++ remains orthogonal to sparsity-aware fine-tuning, further reducing perplexity when combined with LoRA. Our approach is lightweight, pruning a 7B LLaMA model in under 10 minutes on a single H100 GPU.
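To make the pruning-score idea concrete, here is a minimal sketch of a Wanda-style importance score augmented with a regional gradient term, followed by 2:4 semi-structured masking. The weighting alpha, the function names, and the exact way the gradient enters the score are assumptions for illustration, not the paper's precise formulation.

```python
# Sketch of a Wanda-style pruning score with an added regional-gradient
# term; `alpha` and the score combination are illustrative assumptions.
import torch

def wandapp_score(weight, act_norm, regional_grad, alpha=1.0):
    # weight:        [out, in] linear-layer weight matrix
    # act_norm:      [in] per-input-channel L2 norm of calibration activations
    # regional_grad: [out, in] gradient of a decoder-block reconstruction
    #                loss w.r.t. `weight` (the "regional" gradient)
    base = weight.abs() * act_norm.unsqueeze(0)          # Wanda score
    return base + alpha * regional_grad.abs() * weight.abs()

def prune_2_4(weight, score):
    # Apply 2:4 semi-structured sparsity: keep the 2 highest-scoring
    # weights in every group of 4 along the input dimension
    # (assumes the input dimension is divisible by 4).
    out, inp = weight.shape
    groups = score.view(out, inp // 4, 4)
    keep = groups.topk(2, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(-1, keep, True)
    return weight * mask.view(out, inp)
```

As the abstract describes, the regional gradient would come from backpropagating a per-decoder-block loss on a small calibration set, so no full-model backward pass is needed.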

2024

ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models
Aparna Elangovan | Ling Liu | Lei Xu | Sravan Babu Bodapati | Dan Roth
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In this position paper, we argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking that draws on insights from disciplines such as user-experience research and human behavioral psychology to ensure that experimental designs and results are reliable. The conclusions from these evaluations must therefore consider factors such as usability, aesthetics, and cognitive biases. We highlight how cognitive biases can lead evaluators to conflate fluency with truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert scales. Furthermore, evaluations should differentiate the capabilities and weaknesses of increasingly powerful large language models, which requires effective test sets. Scalability of human evaluation is also crucial to wider adoption. Hence, to design an effective human evaluation system in the age of generative NLP, we propose the ConSiDERS-The-Human evaluation framework, consisting of six pillars: Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.

2019

Multi Sense Embeddings from Topic Models
Shobhit Jain | Sravan Babu Bodapati | Ramesh Nallapati | Anima Anandkumar
Proceedings of the 3rd International Conference on Natural Language and Speech Processing