CMA-R: Causal Mediation Analysis for Explaining Rumour Detection

Lin Tian, Xiuzhen Zhang, Jey Han Lau


Abstract
We apply causal mediation analysis to explain the decision-making process of neural models for rumour detection on Twitter. Interventions at the input and network level reveal the causal impact of tweets and words on the model output. We find that our approach, CMA-R – Causal Mediation Analysis for Rumour detection – identifies salient tweets that explain model predictions and shows strong agreement with human judgements on the critical tweets determining the truthfulness of stories. CMA-R can further highlight causally impactful words in these salient tweets, providing another layer of interpretability and transparency for these black-box rumour detection systems. Code is available at: https://github.com/ltian678/cma-r.
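To illustrate the kind of input-level intervention the abstract describes, the sketch below estimates a tweet's causal effect by masking it in the thread and comparing classifier scores before and after. This is a minimal, hypothetical example, not the authors' implementation: the `tweet_effect` helper, the `rumour_score` callable, and the toy scorer are all assumed stand-ins.

```python
# Hypothetical sketch of an input-level intervention for rumour detection.
# The scorer is a stand-in, not the model from the paper.
from typing import Callable, List


def tweet_effect(thread: List[str],
                 index: int,
                 rumour_score: Callable[[List[str]], float],
                 mask_token: str = "[MASK]") -> float:
    """Estimate the effect of the tweet at `index` by replacing it with a
    neutral mask (the intervention) and measuring the change in the score."""
    original = rumour_score(thread)
    intervened = list(thread)
    intervened[index] = mask_token
    return original - rumour_score(intervened)  # larger |effect| = more salient


if __name__ == "__main__":
    # Toy scorer purely for illustration: counts debunking vs confirming cues.
    def toy_score(thread: List[str]) -> float:
        text = " ".join(thread).lower()
        return text.count("fake") - text.count("confirmed")

    thread = [
        "Breaking: bridge collapse downtown",
        "This is fake, the photo is from 2015",
        "Stay safe everyone",
    ]
    effects = [tweet_effect(thread, i, toy_score) for i in range(len(thread))]
    print(effects)  # the second tweet dominates this (toy) prediction
```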
Anthology ID:
2024.findings-eacl.116
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1667–1675
URL:
https://aclanthology.org/2024.findings-eacl.116
Cite (ACL):
Lin Tian, Xiuzhen Zhang, and Jey Han Lau. 2024. CMA-R: Causal Mediation Analysis for Explaining Rumour Detection. In Findings of the Association for Computational Linguistics: EACL 2024, pages 1667–1675, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
CMA-R: Causal Mediation Analysis for Explaining Rumour Detection (Tian et al., Findings 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2024.findings-eacl.116.pdf