What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning

Jane Pan, Tianyu Gao, Howard Chen, Danqi Chen


Abstract
Large language models (LLMs) exploit in-context learning (ICL) to solve tasks with only a few demonstrations, but its mechanisms are not yet well understood. Some works suggest that LLMs only recall concepts already learned during pre-training, while others hint that ICL performs implicit learning over demonstrations. We characterize two ways through which ICL leverages demonstrations. Task recognition (TR) captures the extent to which LLMs can recognize a task through demonstrations (even without ground-truth labels) and apply their pre-trained priors, whereas task learning (TL) is the ability to capture new input-label mappings unseen in pre-training. Using a wide range of classification datasets and three LLM families (GPT-3, LLaMA, and OPT), we design controlled experiments to disentangle the roles of TR and TL in ICL. We show that (1) models can achieve non-trivial performance with only TR, and TR does not further improve with larger models or more demonstrations; (2) LLMs acquire TL as the model scales, and TL's performance consistently improves with more demonstrations in context. Our findings reveal two distinct forces behind ICL, and we advocate for distinguishing between them in future ICL research due to their different natures.
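The controlled experiments mentioned in the abstract hinge on manipulating the labels in the demonstrations: keeping gold labels preserves both TR and TL signals, while randomizing labels removes the input-label mapping so that any remaining performance is attributable to task recognition alone. Below is a minimal, hypothetical sketch of such prompt construction; the `Input:`/`Label:` template, function names, and the optional label remapping are illustrative assumptions, not the paper's exact setup.

```python
import random

def build_icl_prompt(demos, query, shuffle_labels=False, label_map=None, seed=0):
    """Build an in-context learning prompt from (text, label) demonstrations.

    shuffle_labels: if True, replace each gold label with one drawn uniformly
        at random from the label set, removing the input-label mapping
        (so performance reflects task recognition, not task learning).
    label_map: optional dict remapping gold labels to other tokens,
        a hypothetical stand-in for remapped/abstract label settings.
    """
    rng = random.Random(seed)
    label_set = sorted({y for _, y in demos})
    lines = []
    for text, label in demos:
        if shuffle_labels:
            label = rng.choice(label_set)  # break the input-label mapping
        if label_map:
            label = label_map[label]
        lines.append(f"Input: {text}\nLabel: {label}")
    # The query is appended with an empty label slot for the model to fill.
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)
```

Comparing model accuracy between prompts built with `shuffle_labels=False` and `shuffle_labels=True` (across model sizes and demonstration counts) is one way to operationalize the TR/TL comparison the abstract describes.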
Anthology ID:
2023.findings-acl.527
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8298–8319
URL:
https://aclanthology.org/2023.findings-acl.527
DOI:
10.18653/v1/2023.findings-acl.527
Cite (ACL):
Jane Pan, Tianyu Gao, Howard Chen, and Danqi Chen. 2023. What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8298–8319, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning (Pan et al., Findings 2023)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2023.findings-acl.527.pdf