Kushan Mitra
2025
FactLens: Benchmarking Fine-Grained Fact Verification
Kushan Mitra | Dan Zhang | Sajjadur Rahman | Estevam Hruschka
Findings of the Association for Computational Linguistics: ACL 2025
Large Language Models (LLMs) have shown impressive capability in language generation and understanding, but their tendency to hallucinate and produce factually incorrect information remains a key limitation. To verify LLM-generated content and claims from other sources, traditional verification approaches often rely on holistic models that assign a single factuality label to complex claims, potentially obscuring nuanced errors. In this paper, we advocate for a shift towards fine-grained verification, where complex claims are broken down into smaller sub-claims for individual verification, allowing for more precise identification of inaccuracies, improved transparency, and reduced ambiguity in evidence retrieval. However, generating sub-claims poses challenges, such as maintaining context and ensuring semantic equivalence with respect to the original claim. We introduce FactLens, a benchmark for evaluating fine-grained fact verification, with metrics and automated evaluators of sub-claim quality. The benchmark data is manually curated to ensure high-quality ground truth. Our results show alignment between automated FactLens evaluators and human judgments, and we discuss the impact of sub-claim characteristics on the overall verification performance.
2024
MEGAnno+: A Human-LLM Collaborative Annotation System
Hannah Kim | Kushan Mitra | Rafael Li Chen | Sajjadur Rahman | Dan Zhang
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations
Large language models (LLMs) can label data faster and cheaper than humans for various NLP tasks. Despite their prowess, LLMs may fall short in understanding complex, sociocultural, or domain-specific context, potentially leading to incorrect annotations. Therefore, we advocate a collaborative approach where humans and LLMs work together to produce reliable and high-quality labels. We present MEGAnno+, a human-LLM collaborative annotation system that offers effective LLM agent and annotation management, convenient and robust LLM annotation, and exploratory verification of LLM labels by humans.
Characterizing Large Language Models as Rationalizers of Knowledge-intensive Tasks
Aditi Mishra | Sajjadur Rahman | Kushan Mitra | Hannah Kim | Estevam Hruschka
Findings of the Association for Computational Linguistics: ACL 2024
Large language models (LLMs) are proficient at generating fluent text with minimal task-specific supervision. However, their ability to generate rationales for knowledge-intensive tasks (KITs) remains under-explored. Generating rationales for KIT solutions, such as commonsense multiple-choice QA, requires external knowledge to support predictions and refute alternate options. In this work, we consider the task of generating retrieval-augmented rationalization of KIT model predictions via external knowledge guidance within a few-shot setting. Surprisingly, crowd-workers preferred LLM-generated rationales over existing crowd-sourced rationales, generated in a similar knowledge-guided setting, on aspects such as factuality, sufficiency, and convincingness. However, fine-grained evaluation of such rationales highlights the need for further improvements in conciseness, novelty, and domain invariance. Additionally, through an expert-sourced study evaluating the reliability of the rationales, we demonstrate that humans' trust in LLM-generated rationales erodes when the rationales are communicated faithfully, i.e., without taking model prediction accuracy into account. We find that even instrumenting simple guardrails can be effective for reliable rationalization.