Who Remembers What? Tracing Information Fidelity in Human-AI Chains

Suvojit Acharjee, Utathya Aich, Diptarka Mandal, Asfak Ali


Abstract
In many real-world settings like journalism, law, medicine, and science communication, information is passed from one person or system to another through multiple rounds of summarization or rewriting. This process, known as multi-hop information transfer, also happens increasingly in workflows involving large language models (LLMs). But while summarization models and factuality metrics have improved, we still don’t fully understand how meaning and factual accuracy hold up across long chains of transformations, especially when both humans and LLMs are involved. In this paper, we take a fresh look at this problem by combining insights from cognitive science (Bartlett’s serial reproduction) and information theory (Shannon’s noisy-channel model). We build a new dataset of 700 five-step transmission chains that include human-only, LLM-only, mixed human-LLM, and cross-LLM settings across a wide range of source texts. To track how meaning degrades, we introduce three new metrics: Information Degradation Rate (IDR) for semantic drift, Meaning Preservation Entropy (MPE) for uncertainty in factual content, and Cascaded Hallucination Propagation Index (CHPI) for how hallucinations accumulate over time. Our findings reveal that hybrid chains behave asymmetrically. When a human summary is refined by a language model, the final output tends to preserve meaning well, suggesting that models can improve upon human-written summaries. The code and data will be available at: https://github.com/transtrace6/TransTrace.git.
Anthology ID:
2025.ijcnlp-long.146
Volume:
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Month:
December
Year:
2025
Address:
Mumbai, India
Editors:
Kentaro Inui, Sakriani Sakti, Haofen Wang, Derek F. Wong, Pushpak Bhattacharyya, Biplab Banerjee, Asif Ekbal, Tanmoy Chakraborty, Dhirendra Pratap Singh
Venues:
IJCNLP | AACL
Publisher:
The Asian Federation of Natural Language Processing and The Association for Computational Linguistics
Pages:
2718–2726
URL:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.ijcnlp-long.146/
Cite (ACL):
Suvojit Acharjee, Utathya Aich, Diptarka Mandal, and Asfak Ali. 2025. Who Remembers What? Tracing Information Fidelity in Human-AI Chains. In Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, pages 2718–2726, Mumbai, India. The Asian Federation of Natural Language Processing and The Association for Computational Linguistics.
Cite (Informal):
Who Remembers What? Tracing Information Fidelity in Human-AI Chains (Acharjee et al., IJCNLP-AACL 2025)
PDF:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.ijcnlp-long.146.pdf