Alexander Radovic


2025

Learning Auxiliary Tasks Improves Reference-Free Hallucination Detection in Open-Domain Long-Form Generation
Chengwei Qin | Wenxuan Zhou | Karthik Abinav Sankararaman | Nanshu Wang | Tengyu Xu | Alexander Radovic | Eryk Helenowski | Arya Talebzadeh | Aditya Tayade | Sinong Wang | Shafiq Joty | Han Fang | Hao Ma
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Hallucination, the generation of factually incorrect information, remains a significant challenge for large language models (LLMs), especially in open-domain long-form generation. Existing approaches for detecting hallucination in long-form tasks either focus on limited domains or rely heavily on external fact-checking tools, which may not always be available. In this work, we systematically investigate reference-free hallucination detection in open-domain long-form responses. Our findings reveal that internal states (e.g., the model's output probability and entropy) alone are insufficient for reliably (i.e., better than random guessing) distinguishing between factual and hallucinated content. To enhance detection, we explore various existing approaches, including prompting-based methods, probing, and fine-tuning, with fine-tuning proving the most effective. To further improve accuracy, we introduce a new paradigm, named RATE-FT, that augments fine-tuning with an auxiliary task for the model to learn jointly with the main task of hallucination detection. Through extensive experiments and analysis across a variety of model families and datasets, we demonstrate the effectiveness and generalizability of our method, e.g., +3% over general fine-tuning methods on LongFact.
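
The abstract does not spell out RATE-FT's training objective, but joint learning with an auxiliary task is commonly implemented as a weighted sum of per-task losses over a shared model. Below is a minimal sketch of that general pattern, not the paper's method: the choice of "gpt2" as a stand-in model, the question-answering auxiliary task, the prompt wordings, and the weighting coefficient of 0.5 are all illustrative assumptions.

```python
# Sketch: fine-tuning step that combines a main hallucination-detection
# loss with an auxiliary-task loss. All task formats here are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def lm_loss(prompt: str, target: str) -> torch.Tensor:
    """Causal-LM loss on the target tokens only, given the prompt.
    (Tokenization at the prompt/target boundary is approximate.)"""
    enc = tokenizer(prompt + target, return_tensors="pt")
    labels = enc["input_ids"].clone()
    prompt_len = len(tokenizer(prompt)["input_ids"])
    labels[:, :prompt_len] = -100  # mask prompt tokens out of the loss
    return model(**enc, labels=labels).loss

# One training step: main task plus an auxiliary task on related content.
claim = "Marie Curie won two Nobel Prizes."
main_loss = lm_loss(f"Claim: {claim}\nFactual (Yes/No)? ", "Yes")
aux_loss = lm_loss("How many Nobel Prizes did Marie Curie win? ", "Two")
(main_loss + 0.5 * aux_loss).backward()  # lambda = 0.5 is illustrative
```

In this pattern, the auxiliary loss shapes the same shared parameters as the detection loss, which is one plausible reading of how an auxiliary task could improve the main classifier; the paper's actual auxiliary task and loss weighting may differ.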

Dynamic Strategy Planning for Efficient Question Answering with Large Language Models
Tanmay Parekh | Pradyot Prakash | Alexander Radovic | Akshay Shekher | Denis Savenkov
Findings of the Association for Computational Linguistics: NAACL 2025

Research has shown the effectiveness of reasoning (e.g., Chain-of-Thought), planning (e.g., SelfAsk), and retrieval-augmented generation strategies in improving the performance of Large Language Models (LLMs) on tasks such as question answering. However, using a single fixed strategy to answer all kinds of questions is suboptimal in performance and inefficient in terms of generated tokens and retrievals. In our work, we propose a novel technique, DyPlan, to induce a dynamic strategy selection process in LLMs for cost-effective question answering. DyPlan incorporates an initial decision step that selects the most suitable strategy conditioned on the input question and guides the LLM's response generation accordingly. We extend DyPlan to DyPlan-verify, adding an internal verification and correction process to further enrich the generated answer. Experiments on three prominent multi-hop question answering (MHQA) datasets show that DyPlan improves model performance by 7-13% while reducing cost by 11-32% relative to the best baseline model.
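
The abstract gives enough of DyPlan's control flow to sketch: a decision step picks a strategy conditioned on the question, the model answers with that strategy, and DyPlan-verify adds a verification-and-correction pass. The sketch below is a hedged illustration only: the strategy names, prompt wordings, and the `llm` callable are assumptions, and the paper induces the decision step inside the LLM itself rather than via an external wrapper.

```python
# Sketch: dynamic strategy selection for QA in the spirit of DyPlan.
from typing import Callable

PROMPTS = {
    "direct": "Answer the question directly.\nQ: {q}\nA:",
    "cot": "Think step by step, then answer.\nQ: {q}\nReasoning:",
    "selfask": "Decompose into sub-questions, answer each with retrieval, "
               "then give the final answer.\nQ: {q}\n",
}

def dyplan_answer(q: str, llm: Callable[[str], str], verify: bool = True) -> str:
    # Decision step: choose a strategy conditioned on the input question,
    # preferring the cheapest one expected to suffice.
    choice = llm(
        "Pick the cheapest sufficient strategy (direct/cot/selfask) "
        f"for this question; reply with the name only.\nQ: {q}"
    ).strip().lower()
    prompt = PROMPTS.get(choice, PROMPTS["cot"])  # fall back to CoT
    answer = llm(prompt.format(q=q))
    if verify:  # DyPlan-verify: internal verification and correction
        answer = llm(
            f"Q: {q}\nProposed answer: {answer}\n"
            "If the answer is wrong, output a corrected answer; "
            "otherwise repeat it."
        )
    return answer

# Usage with any text-in/text-out client; a deterministic stub for testing:
stub = lambda p: "direct" if "Pick" in p else "William Shakespeare"
print(dyplan_answer("Who wrote Hamlet?", llm=stub))
```

Routing cheap questions to the direct strategy is what saves tokens and retrievals here, which matches the cost-reduction claim in the abstract; the exact strategy inventory and decision criteria are the paper's, not this sketch's.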