Can Input Attributions Explain Inductive Reasoning in In-Context Learning?

Mengyu Ye, Tatsuki Kuribayashi, Goro Kobayashi, Jun Suzuki


Abstract
Interpreting the internal processes of neural models has long been a challenge, and it remains relevant in the era of large language models (LLMs) and in-context learning (ICL); for example, ICL raises the new question of which of the few-shot examples contributed to identifying and solving the task. To investigate this, we design synthetic diagnostic tasks of inductive reasoning, inspired by generalization tests in linguistics: most in-context examples are ambiguous with respect to their underlying rule, and one critical example disambiguates the demonstrated task. The question is whether conventional input attribution (IA) methods can track such a reasoning process, i.e., identify the influential example, in ICL. Our experiments yield several practical findings; for example, a certain simple IA method works best, and the larger the model, the harder it generally is to interpret the ICL with gradient-based IA methods.
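Background note (not part of the paper): a common gradient-based input attribution method is gradient-times-input, which scores each input feature by the product of the model output's gradient with respect to that feature and the feature's value. A minimal sketch with a toy linear scorer, where the gradient can be written by hand (all names are illustrative, not the paper's method):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": a logistic-regression scorer f(x) = sigmoid(w . x).
# For this model, d(logit)/dx_i = w_i, so the gradient-times-input
# attribution of feature i is simply w_i * x_i.
def grad_times_input(w, x):
    return w * x  # elementwise gradient of the logit times the input

w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 0.1])
attr = grad_times_input(w, x)          # [0.5, -1.0, 0.2]
top = int(np.argmax(np.abs(attr)))     # feature deemed most influential
```

In the paper's setting, the analogous question is whether such attributions, computed over the few-shot examples in the prompt, assign the highest score to the critical disambiguating example.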
Anthology ID:
2025.findings-acl.1092
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings | WS
Publisher:
Association for Computational Linguistics
Pages:
21199–21225
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1092/
Cite (ACL):
Mengyu Ye, Tatsuki Kuribayashi, Goro Kobayashi, and Jun Suzuki. 2025. Can Input Attributions Explain Inductive Reasoning in In-Context Learning?. In Findings of the Association for Computational Linguistics: ACL 2025, pages 21199–21225, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Can Input Attributions Explain Inductive Reasoning in In-Context Learning? (Ye et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1092.pdf