On the Transferability of Causal Knowledge for Language Models

Gourab Dey, Yash Kumar Lal


Abstract
Language understanding includes identifying logical connections between events in a discourse, such as news and instructional text. We study the transferability of causal knowledge across these two domains by analyzing the extent to which understanding preconditions in narratives such as news articles can help models reason about cooking recipes, and vice versa. Our experiments show that using instructions to pretrain small models on one domain before similarly finetuning them on the other yields a slight improvement over finetuning alone. We also find that finetuning the models on a mix of both types of data is better (~3-7%) for understanding causal relations in instructional text. While these improvements do not translate to larger or already instruction-tuned models, our analysis highlights the aspects of a plan that are better captured through the interoperability of causal knowledge.
Anthology ID:
2025.wnu-1.3
Volume:
Proceedings of the 7th Workshop on Narrative Understanding
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Elizabeth Clark, Yash Kumar Lal, Snigdha Chaturvedi, Mohit Iyyer, Anneliese Brei, Ashutosh Modi, Khyathi Raghavi Chandu
Venues:
WNU | WS
Publisher:
Association for Computational Linguistics
Pages:
8–14
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.wnu-1.3/
Cite (ACL):
Gourab Dey and Yash Kumar Lal. 2025. On the Transferability of Causal Knowledge for Language Models. In Proceedings of the 7th Workshop on Narrative Understanding, pages 8–14, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
On the Transferability of Causal Knowledge for Language Models (Dey & Lal, WNU 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.wnu-1.3.pdf