Nishanth Sridhar Nakshatri
2025
Constrained Decoding with Speculative Lookaheads
Nishanth Sridhar Nakshatri | Shamik Roy | Rajarshi Das | Suthee Chaidaroon | Leonid Boytsov | Rashmi Gangadharaiah
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Constrained decoding with lookahead heuristics (CDLH) is a highly effective method for aligning LLM generations to human preferences. However, the extensive lookahead roll-out operations for each generated token make CDLH prohibitively expensive, resulting in low adoption in practice. In contrast, common decoding strategies such as greedy decoding are extremely efficient, but achieve very low constraint satisfaction. We propose constrained decoding with speculative lookaheads (CDSL), a technique that significantly improves upon the inference efficiency of CDLH without the drastic performance reduction seen with greedy decoding. CDSL is motivated by the recently proposed idea of speculative decoding, which uses a much smaller draft LLM for generation and a larger target LLM for verification. In CDSL, the draft model generates lookaheads which are verified by a combination of the target LLM and task-specific reward functions. This process accelerates decoding by reducing the computational burden while maintaining strong performance. We evaluate CDSL on two constrained decoding tasks with three LLM families and achieve 2.2x to 12.15x speedup over CDLH without significant performance reduction.
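The abstract outlines the CDSL loop at a high level: a small draft model speculates a short lookahead, the larger target model verifies it, and a task-specific reward function checks constraint satisfaction. The minimal sketch below illustrates that loop; the interfaces `draft_step`, `target_step`, `target_logprob`, and `reward_fn`, along with the acceptance thresholds, are hypothetical placeholders, and this is not the paper's actual implementation.

```python
# Illustrative sketch of a CDSL-style decoding loop, based only on the abstract above.
# draft_step / target_step / target_logprob / reward_fn are hypothetical stand-ins for
# the draft LLM, the target LLM, and a task-specific reward function.

from typing import Callable, List


def cdsl_decode(
    prompt: List[int],
    draft_step: Callable[[List[int]], int],             # small draft LLM: greedy next token
    target_step: Callable[[List[int]], int],             # large target LLM: greedy next token (fallback)
    target_logprob: Callable[[List[int], int], float],   # target log-prob of a proposed token
    reward_fn: Callable[[List[int]], float],              # constraint score of a candidate sequence
    lookahead_len: int = 4,
    accept_logprob: float = -5.0,
    reward_threshold: float = 0.5,
    max_new_tokens: int = 64,
    eos_id: int = 0,
) -> List[int]:
    """Extend `prompt` by speculating cheap draft lookaheads and keeping only the
    prefix that both the target LLM and the reward function accept."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new_tokens and (not seq or seq[-1] != eos_id):
        # 1) The draft model speculates a short lookahead continuation.
        lookahead: List[int] = []
        for _ in range(lookahead_len):
            tok = draft_step(seq + lookahead)
            lookahead.append(tok)
            if tok == eos_id:
                break

        # 2) Speculative verification: keep the longest prefix of the lookahead
        #    that the target model still considers plausible.
        accepted: List[int] = []
        for tok in lookahead:
            if target_logprob(seq + accepted, tok) < accept_logprob:
                break
            accepted.append(tok)

        # 3) Constraint check: commit the verified prefix only if the reward
        #    function approves the resulting continuation; otherwise fall back
        #    to a single target-model token so decoding always progresses.
        if accepted and reward_fn(seq + accepted) >= reward_threshold:
            seq.extend(accepted)
        else:
            seq.append(target_step(seq))
    return seq
```

Compared with plain speculative decoding, the extra step in this sketch is the reward check in step 3: a verified lookahead is kept only if the constraint function also approves it, which is how the method trades a few extra scoring calls for constraint satisfaction.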
2022
Towards Few-Shot Identification of Morality Frames using In-Context Learning
Shamik Roy | Nishanth Sridhar Nakshatri | Dan Goldwasser
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
Data scarcity is a common problem in NLP, especially when the annotation pertains to nuanced socio-linguistic concepts that require specialized knowledge. As a result, few-shot identification of these concepts is desirable. Few-shot in-context learning using pre-trained Large Language Models (LLMs) has recently been applied successfully in many NLP tasks. In this paper, we study few-shot identification of a psycho-linguistic concept, Morality Frames (Roy et al., 2021), using LLMs. Morality frames are a representation framework that provides a holistic view of the moral sentiment expressed in text, identifying the relevant moral foundation (Haidt and Graham, 2007) and, at a finer level of granularity, the moral sentiment expressed towards the entities mentioned in the text. Previous studies relied on human annotation to identify morality frames in text, which is expensive. In this paper, we propose prompting-based approaches using pre-trained Large Language Models for identification of morality frames, relying only on few-shot exemplars. We compare our models' performance with few-shot RoBERTa and find promising results.
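The abstract describes a prompting-based, few-shot in-context setup: a handful of labeled exemplars are concatenated with an unlabeled query and passed to an LLM. The sketch below shows what such a setup can look like in its simplest form; the exemplar format, label strings, and the `generate` callable are illustrative assumptions, not the authors' actual prompts or models.

```python
# Illustrative few-shot in-context prompting sketch; the prompt wording and the
# `generate` callable are hypothetical placeholders, not the paper's prompts.

from typing import Callable, List, Tuple


def build_morality_frame_prompt(
    exemplars: List[Tuple[str, str]],   # (text, gold morality-frame label) pairs
    query_text: str,
) -> str:
    """Concatenate labeled exemplars followed by the unlabeled query, so the
    LLM can infer the morality frame in-context."""
    lines = ["Identify the moral foundation and the sentiment toward each entity."]
    for text, label in exemplars:
        lines.append(f"Text: {text}\nMorality frame: {label}")
    lines.append(f"Text: {query_text}\nMorality frame:")
    return "\n\n".join(lines)


def predict_morality_frame(
    generate: Callable[[str], str],      # any text-in, text-out LLM interface
    exemplars: List[Tuple[str, str]],
    query_text: str,
) -> str:
    prompt = build_morality_frame_prompt(exemplars, query_text)
    return generate(prompt).strip()
```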