Induction Heads as an Essential Mechanism for Pattern Matching in In-context Learning

Joy Crosbie, Ekaterina Shutova


Abstract
Large language models (LLMs) have shown a remarkable ability to learn and perform complex tasks through in-context learning (ICL). However, a comprehensive understanding of the internal mechanisms behind ICL is still lacking. This paper explores the role of induction heads in a few-shot ICL setting. We analyse two state-of-the-art models, Llama-3-8B and InternLM2-20B, on abstract pattern recognition and NLP tasks. Our results show that even a minimal ablation of induction heads decreases ICL performance by up to ~32% on abstract pattern recognition tasks, bringing it close to random. For NLP tasks, this ablation substantially reduces the model's ability to benefit from examples, bringing few-shot ICL performance close to that of zero-shot prompts. We further use attention knockout to disable specific induction patterns, and present fine-grained evidence for the role that the induction mechanism plays in ICL.
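The abstract refers to ablating individual induction heads. As a rough illustration of how such a head ablation can be performed (this is not the authors' implementation), the sketch below zeroes one attention head's contribution in a HuggingFace Llama-style model by registering a forward pre-hook on the attention output projection; the checkpoint name and the layer/head indices are placeholder assumptions, not heads identified in the paper.

```python
# Minimal sketch: ablate a single attention head by zeroing its slice of the
# concatenated head outputs before the attention output projection (o_proj).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B"  # assumed checkpoint name
LAYER, HEAD = 10, 4                        # hypothetical induction-head location

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

head_dim = model.config.hidden_size // model.config.num_attention_heads

def zero_head(module, args):
    # args[0]: concatenated per-head outputs, shape (batch, seq, hidden_size)
    hidden = args[0].clone()
    hidden[..., HEAD * head_dim:(HEAD + 1) * head_dim] = 0.0
    return (hidden,)

# o_proj maps the concatenated head outputs back into the residual stream,
# so zeroing one slice of its input removes exactly that head's contribution.
handle = model.model.layers[LAYER].self_attn.o_proj.register_forward_pre_hook(zero_head)

inputs = tokenizer("A B A B A", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # forward pass with the chosen head ablated

handle.remove()  # restore the original model behaviour
```

In practice one would identify induction heads first (e.g. by their prefix-matching attention pattern on repeated sequences) and then apply such hooks to the selected layer/head pairs while evaluating on the ICL tasks.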
Anthology ID: 2025.findings-naacl.283
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 5034–5096
URL: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.283/
Cite (ACL): Joy Crosbie and Ekaterina Shutova. 2025. Induction Heads as an Essential Mechanism for Pattern Matching in In-context Learning. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 5034–5096, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): Induction Heads as an Essential Mechanism for Pattern Matching in In-context Learning (Crosbie & Shutova, Findings 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.283.pdf