Source Attribution for Large Language Models

Vipula Rawte, Koustava Goswami, Puneet Mathur, Nedim Lipka


Abstract
As Large Language Models (LLMs) become more widely used for tasks like document summarization, question answering, and information extraction, improving their trustworthiness and interpretability has become increasingly important. One key strategy for achieving this is attribution, a process that tracks the sources of generated responses. This tutorial will explore various attribution techniques, including model-driven attribution, post-retrieval answering, and post-generation attribution. We will also discuss the challenges involved in implementing these approaches and examine advanced topics such as model-based attribution for complex cases, table attribution, multimodal attribution, and multilingual attribution.
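As a rough illustration of one of the ideas named above, the sketch below shows a minimal, hypothetical form of post-generation attribution: each sentence of an already-generated answer is matched to the candidate source passage it overlaps with most. The bag-of-words cosine scoring, function names, and example texts are assumptions for illustration only, not a method described in the tutorial; practical systems typically rely on retrievers, NLI models, or LLM-based judges instead.

```python
# Minimal sketch of post-generation attribution (illustrative assumption):
# attribute each generated sentence to the source passage with the highest
# bag-of-words cosine similarity.
import math
import re
from collections import Counter


def _vectorize(text: str) -> Counter:
    """Lowercased bag-of-words term counts."""
    return Counter(re.findall(r"\w+", text.lower()))


def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def attribute(answer: str, sources: list[str]) -> list[tuple[str, int, float]]:
    """For each sentence in the answer, return (sentence, best source index, score)."""
    source_vecs = [_vectorize(s) for s in sources]
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        if not sentence:
            continue
        scores = [_cosine(_vectorize(sentence), sv) for sv in source_vecs]
        best = max(range(len(scores)), key=scores.__getitem__)
        results.append((sentence, best, scores[best]))
    return results


if __name__ == "__main__":
    sources = [
        "The report was published in 2023 by the climate research group.",
        "Sales grew by 12 percent in the second quarter.",
    ]
    answer = "Sales rose 12 percent in Q2. The underlying report came out in 2023."
    for sentence, idx, score in attribute(answer, sources):
        print(f"[source {idx}, score {score:.2f}] {sentence}")
```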
Anthology ID:
2025.ijcnlp-tutorials.1
Volume:
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: Tutorial Abstract
Month:
December
Year:
2025
Address:
Mumbai, India
Editors:
Benjamin Heinzerling, Lun-Wei Ku
Venue:
IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
1–5
URL:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.ijcnlp-tutorials.1/
Cite (ACL):
Vipula Rawte, Koustava Goswami, Puneet Mathur, and Nedim Lipka. 2025. Source Attribution for Large Language Models. In Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: Tutorial Abstract, pages 1–5, Mumbai, India. Association for Computational Linguistics.
Cite (Informal):
Source Attribution for Large Language Models (Rawte et al., IJCNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.ijcnlp-tutorials.1.pdf