AttributionBench: How Hard is Automatic Attribution Evaluation?

Yifei Li, Xiang Yue, Zeyi Liao, Huan Sun


Abstract
Modern generative search engines enhance the reliability of large language model (LLM) responses by providing cited evidence. However, evaluating an answer’s attribution, i.e., whether every claim within the generated response is fully supported by its cited evidence, remains an open problem. This verification, traditionally dependent on costly human evaluation, underscores the urgent need for automatic attribution evaluation methods. To bridge the gap left by the absence of standardized benchmarks for these methods, we present AttributionBench, a comprehensive benchmark compiled from various existing attribution datasets. Our extensive experiments on AttributionBench reveal the challenges of automatic attribution evaluation, even for state-of-the-art LLMs. Specifically, our findings show that even a fine-tuned GPT-3.5 achieves only around 80% macro-F1 under a binary classification formulation. A detailed analysis of more than 300 error cases indicates that a majority of failures stem from the model’s inability to process nuanced information and from the discrepancy between the information available to the model and that available to human annotators.
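
The abstract frames attribution evaluation as a binary classification task scored with macro-F1. The sketch below is not the authors' code; the example pairs and the token-overlap baseline are illustrative assumptions. It only shows how such a setup could be scored: each (claim, cited evidence) pair carries a binary "attributable" label, a toy verifier makes predictions, and macro-F1 averages the per-class F1 scores.

```python
# Minimal sketch of a binary attribution-evaluation setup (illustrative only,
# not AttributionBench's actual data or the paper's models).
from sklearn.metrics import f1_score

# Hypothetical (claim, evidence, label) examples; 1 = attributable, 0 = not attributable.
examples = [
    {"claim": "The Eiffel Tower is in Paris.",
     "evidence": "The Eiffel Tower is a landmark in Paris, France.",
     "label": 1},
    {"claim": "The Eiffel Tower was built in 1900.",
     "evidence": "The Eiffel Tower was completed in 1889.",
     "label": 0},
]

def naive_verifier(claim: str, evidence: str) -> int:
    """Toy baseline: predict 'attributable' if most claim tokens appear in the evidence.
    It misses the date mismatch in the second example, illustrating why surface
    overlap is insufficient for nuanced attribution judgments."""
    claim_tokens = set(claim.lower().split())
    evidence_tokens = set(evidence.lower().split())
    overlap = len(claim_tokens & evidence_tokens) / max(len(claim_tokens), 1)
    return int(overlap > 0.6)

gold = [ex["label"] for ex in examples]
pred = [naive_verifier(ex["claim"], ex["evidence"]) for ex in examples]

# Macro-F1 weights both classes equally, matching the metric reported in the abstract.
print(f1_score(gold, pred, average="macro"))
```
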
Anthology ID:
2024.findings-acl.886
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14919–14935
URL:
https://aclanthology.org/2024.findings-acl.886
DOI:
10.18653/v1/2024.findings-acl.886
Cite (ACL):
Yifei Li, Xiang Yue, Zeyi Liao, and Huan Sun. 2024. AttributionBench: How Hard is Automatic Attribution Evaluation?. In Findings of the Association for Computational Linguistics: ACL 2024, pages 14919–14935, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
AttributionBench: How Hard is Automatic Attribution Evaluation? (Li et al., Findings 2024)
PDF:
https://preview.aclanthology.org/autopr/2024.findings-acl.886.pdf