2025
An Automatic Method to Estimate Correctness of RAG
Chi Zhang | Vivek V. Datla | Aditya Shrivastava | Alfy Samuel | Zhiqi Huang | Anoop Kumar | Daben Liu
Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
In sectors where data quality is critical, such as finance and healthcare, it is crucial to have confidence not only in the outputs generated by retrieval-augmented generation (RAG) models but also in the process the model follows while arriving at the output. Existing methods, such as hallucination detection and input-output entailment measurements, fail to capture the model’s internal state during answer generation. This paper introduces a novel approach to predict the correctness of the generated answer by modeling the model’s uncertainty under quantified perturbations of the input. Extensive experiments across multiple large language models (LLMs) demonstrate that our approach quantifies RAG robustness by aligning predictions with ground truth with an average Mean Squared Error (MSE) of 0.002, while offering flexibility for diverse qualitative metrics.
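A minimal sketch of the general idea, assuming a HuggingFace causal LM and sentence dropout as the quantified perturbation; the model name, perturbation scheme, and robustness score below are illustrative placeholders, not the paper's exact method:

```python
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder choice of LLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

def answer_logprob(question: str, context: str, answer: str) -> float:
    """Average log-probability the model assigns to `answer` given the prompt."""
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    answer_ids = tokenizer(
        " " + answer, return_tensors="pt", add_special_tokens=False
    ).input_ids.to(model.device)
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Score only the answer tokens (the logit at position i predicts token i+1).
    logprobs = torch.log_softmax(logits[0, prompt_ids.shape[1] - 1 : -1], dim=-1)
    token_lp = logprobs.gather(1, answer_ids[0].unsqueeze(1)).squeeze(1)
    return token_lp.mean().item()

def perturb(context: str, drop_rate: float, rng: np.random.Generator) -> str:
    """Quantified perturbation: randomly drop a fraction of context sentences."""
    sentences = context.split(". ")
    kept = [s for s in sentences if rng.random() > drop_rate]
    return ". ".join(kept) if kept else context

def robustness_score(question: str, context: str, answer: str,
                     n: int = 8, drop_rate: float = 0.2) -> float:
    """Variance of the answer score under perturbation; lower variance is read
    as higher predicted correctness (illustrative proxy only)."""
    rng = np.random.default_rng(0)
    scores = [answer_logprob(question, perturb(context, drop_rate, rng), answer)
              for _ in range(n)]
    return float(np.var(scores))
```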
TruthTorchLM: A Comprehensive Library for Predicting Truthfulness in LLM Outputs
Duygu Nur Yaldiz | Yavuz Faruk Bakman | Sungmin Kang | Alperen Öziş | Hayrettin Eren Yildiz | Mitash Ashish Shah | Zhiqi Huang | Anoop Kumar | Alfy Samuel | Daben Liu | Sai Praneeth Karimireddy | Salman Avestimehr
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Generative Large Language Models (LLMs) inevitably produce untruthful responses. Accurately predicting the truthfulness of these outputs is critical, especially in high-stakes settings. To accelerate research in this domain and make truthfulness prediction methods more accessible, we introduce TruthTorchLM, an open-source, comprehensive Python library featuring over 30 truthfulness prediction methods, which we refer to as Truth Methods. Unlike existing toolkits such as Guardrails, which focuses solely on document-grounded verification, or LM-Polygraph, which is limited to uncertainty-based methods, TruthTorchLM offers a broad and extensible collection of techniques. These methods span diverse trade-offs in computational cost, access level (e.g., black-box vs. white-box), grounding document requirements, and supervision type (self-supervised or supervised). TruthTorchLM is seamlessly compatible with both HuggingFace and LiteLLM, enabling support for locally hosted and API-based models. It also provides a unified interface for generation, evaluation, calibration, and long-form truthfulness prediction, along with a flexible framework for extending the library with new methods. We conduct an evaluation of representative truth methods on three datasets: TriviaQA, GSM8K, and FactScore-Bio.
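As a loose illustration of one family of such methods (sampling-based self-consistency), and explicitly not the TruthTorchLM API, a black-box truth score can be sketched as:

```python
from collections import Counter
from typing import Callable

def consistency_truth_score(generate_fn: Callable[[str], str],
                            question: str, n_samples: int = 10) -> float:
    """Sample several answers and return the majority-agreement rate,
    a cheap black-box proxy for truthfulness (illustrative only)."""
    answers = [generate_fn(question).strip().lower() for _ in range(n_samples)]
    _, count = Counter(answers).most_common(1)[0]
    return count / n_samples
```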
Harmonizing Diverse Models: A Layer-wise Merging Strategy for Consistent Generation
Xujun Peng | Anoop Kumar | Jingyu Wu | Parker Glenn | Daben Liu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Retrieval-Augmented Generation (RAG) systems leverage Large Language Models (LLMs) to generate accurate and reliable responses that are grounded in retrieved context. However, LLMs often generate inconsistent outputs for semantically equivalent inputs, a problem exacerbated by limited consistency-focused data and the limitations of existing fine-tuning methods for improving consistency. We propose a new approach combining systematic synthetic data generation, triplet loss for better embeddings, and a novel layer-wise model merging approach. Using consistency-aware weights derived from intermediate-layer activations, our method effectively integrates knowledge from specialized models. Experimental results show that our merged model significantly enhances output consistency, achieving approximately 47.5% improvement in response similarity over the baseline, thus offering a practical solution for increasing the reliability of an industrial RAG system.
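A minimal sketch of layer-wise interpolation between two model state dicts, where the per-layer weights (here passed in as `layer_weights`, a hypothetical input) would in practice be derived from consistency signals on intermediate-layer activations as described above:

```python
from typing import Dict
import torch

def merge_layerwise(base_sd: Dict[str, torch.Tensor],
                    expert_sd: Dict[str, torch.Tensor],
                    layer_weights: Dict[int, float]) -> Dict[str, torch.Tensor]:
    """Interpolate parameters layer by layer: w_l * expert + (1 - w_l) * base."""
    merged = {}
    for name, base_param in base_sd.items():
        # Infer the transformer layer index from names like "model.layers.12.mlp...".
        layer_id = next((int(tok) for tok in name.split(".") if tok.isdigit()), None)
        w = layer_weights.get(layer_id, 0.5)  # fall back to an even blend
        merged[name] = w * expert_sd[name] + (1.0 - w) * base_param
    return merged
```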
Readability Reconsidered: A Cross-Dataset Analysis of Reference-Free Metrics
Catarina Belem | Parker Glenn | Alfy Samuel | Anoop Kumar | Daben Liu
Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025)
Automatic readability assessment plays a key role in ensuring effective communication between humans and language models. Despite significant progress, the field is hindered by inconsistent definitions of readability and by measurements that rely on surface-level text properties. In this work, we investigate the factors shaping human perceptions of readability through the analysis of 1.2k judgments, finding that beyond surface-level cues, information content and topic strongly shape text comprehensibility. Furthermore, we evaluate 15 popular readability metrics across 5 datasets, contrasting them with 5 more nuanced model-based metrics. Our results show that four model-based metrics consistently place among the top 4 in rank correlations with human judgments, while the best-performing traditional metric achieves an average rank of 7.8. These findings highlight a mismatch between current readability metrics and human perceptions, pointing to model-based approaches as a more promising direction.
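The evaluation protocol described above amounts to rank-correlating metric scores with human judgments; a toy sketch with a traditional surface-level metric (the texts and judgment values are made up for illustration):

```python
import textstat
from scipy.stats import spearmanr

texts = [
    "The cat sat on the mat.",
    "Photosynthesis converts light energy into chemical energy.",
    "Quantum decoherence entails the suppression of interference terms.",
]
human_scores = [4.8, 3.6, 2.1]  # hypothetical mean readability judgments (higher = easier)

metric_scores = [textstat.flesch_reading_ease(t) for t in texts]
rho, p = spearmanr(metric_scores, human_scores)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
```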
Confidence-Based Response Abstinence: Improving LLM Trustworthiness via Activation-Based Uncertainty Estimation
Zhiqi Huang | Vivek Datla | Chenyang Zhu | Alfy Samuel | Daben Liu | Anoop Kumar | Ritesh Soni
Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025)
We propose a method for confidence estimation in retrieval-augmented generation (RAG) systems that aligns closely with the correctness of large language model (LLM) outputs. Confidence estimation is especially critical in high-stakes domains such as finance and healthcare, where the cost of an incorrect answer outweighs that of not answering the question. Our approach extends prior uncertainty quantification methods by leveraging raw feed-forward network (FFN) activations as auto-regressive signals, avoiding the information loss inherent in token logits and probabilities after projection and softmax normalization. We model confidence prediction as a sequence classification task, and regularize training with a Huber loss term to improve robustness against noisy supervision. Applied in a real-world financial industry customer-support setting with complex knowledge bases, our method outperforms strong baselines and maintains high accuracy under strict latency constraints. Experiments on the Llama 3.1 8B model show that using activations from only the 16th layer preserves accuracy while reducing response latency. Our results demonstrate that activation-based confidence modeling offers a scalable, architecture-aware path toward trustworthy RAG deployment.
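A minimal sketch of the activation-based formulation, assuming access to intermediate-layer hidden states; the pooling scheme, head architecture, and dummy data are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class ActivationConfidenceHead(nn.Module):
    """Scores answer correctness from intermediate-layer activations."""
    def __init__(self, hidden_dim: int = 4096):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(hidden_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, activations: torch.Tensor) -> torch.Tensor:
        # activations: [batch, seq_len, hidden_dim], e.g. taken from layer 16
        pooled = activations.mean(dim=1)          # average over the token dimension
        return torch.sigmoid(self.scorer(pooled)).squeeze(-1)

head = ActivationConfidenceHead()
criterion = nn.HuberLoss()                        # robust to noisy correctness labels
activations = torch.randn(8, 32, 4096)            # dummy layer-16 activations
labels = torch.randint(0, 2, (8,)).float()        # noisy correctness supervision
loss = criterion(head(activations), labels)
loss.backward()
```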