Istabrak Abbes


2025

Small Encoders Can Rival Large Decoders in Detecting Groundedness
Istabrak Abbes | Gabriele Prato | Quentin Fournier | Fernando Rodriguez | Alaa Boukhary | Adam Elwood | Sarath Chandar
Findings of the Association for Computational Linguistics: ACL 2025

Augmenting large language models (LLMs) with external context significantly improves their performance on natural language processing (NLP) tasks. However, LLMs struggle to answer queries reliably when the provided context lacks the necessary information, often resorting to ungrounded speculation or internal knowledge. Groundedness – generating responses strictly supported by the context – is essential for ensuring factual consistency and trustworthiness. This study focuses on detecting whether a given query is grounded in a document provided in context before the LLM's costly answer generation. Such a detection mechanism can significantly reduce both inference time and resource consumption. We show that lightweight, task-specific encoder models such as RoBERTa and NomicBERT, fine-tuned on curated datasets, can achieve accuracy comparable to that of state-of-the-art LLMs such as Llama3 8B and GPT4o in groundedness detection, while reducing inference latency by orders of magnitude.
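The detection mechanism the abstract describes amounts to binary sequence classification over (query, document) pairs, run as a cheap gate before LLM generation. Below is a minimal sketch of that idea using Hugging Face Transformers; the `roberta-base` checkpoint and the label mapping (1 = grounded) are assumptions for illustration, not the paper's released artifacts.

```python
# Hypothetical sketch: gate expensive LLM generation behind a lightweight
# encoder-based groundedness classifier, as the paper proposes.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint; the paper fine-tunes task-specific encoders
# (e.g., RoBERTa, NomicBERT) on curated groundedness datasets.
MODEL_NAME = "roberta-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def is_grounded(query: str, document: str) -> bool:
    """Predict whether the query is answerable from the document.

    Encodes the pair as a single sequence and returns True when the
    classifier predicts label 1 (grounded). The label convention is an
    assumption made for this sketch.
    """
    inputs = tokenizer(query, document, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).item() == 1

# Only invoke the costly LLM when the query is grounded in the context.
query = "When did the bridge open?"
context = "The Golden Gate Bridge opened to traffic in 1937."
if is_grounded(query, context):
    pass  # call the LLM to generate a context-supported answer
else:
    print("Query is not grounded in the provided context.")
```

Because the encoder forward pass is orders of magnitude cheaper than autoregressive decoding, filtering ungrounded queries this way saves both latency and compute before any answer is generated.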