Transparentize the Internal and External Knowledge Utilization in LLMs with Trustworthy Citation

Jiajun Shen, Tong Zhou, Yubo Chen, Delai Qiu, Shengping Liu, Kang Liu, Jun Zhao


Abstract
While hallucinations of large language models can be alleviated through retrieval-augmented generation and citation generation, how the model utilizes its internal knowledge remains opaque, and the trustworthiness of its generated answers remains questionable. In this work, we introduce the Context-Prior Augmented Citation Generation task, which requires models to generate citations that draw on both external and internal knowledge while providing trustworthy references, with five evaluation metrics covering three aspects: answer helpfulness, citation faithfulness, and trustworthiness. We introduce RAEL, a paradigm for this task, and also design INTRALIGN, an integrated method comprising customized data generation and an alignment algorithm. Our experimental results show that our method achieves better cross-scenario performance than other baselines. Our extended experiments further reveal that retrieval quality, question types, and model knowledge have a considerable influence on the trustworthiness of citation generation.
Anthology ID: 2025.findings-acl.919
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues: Findings | WS
Publisher: Association for Computational Linguistics
Pages: 17858–17877
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.919/
Cite (ACL):
Jiajun Shen, Tong Zhou, Yubo Chen, Delai Qiu, Shengping Liu, Kang Liu, and Jun Zhao. 2025. Transparentize the Internal and External Knowledge Utilization in LLMs with Trustworthy Citation. In Findings of the Association for Computational Linguistics: ACL 2025, pages 17858–17877, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Transparentize the Internal and External Knowledge Utilization in LLMs with Trustworthy Citation (Shen et al., Findings 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.919.pdf