Abstract
In-context learning (ICL) in Large Language Models (LLMs) has emerged as a powerful new learning paradigm. However, its underlying mechanism is still not well understood. In particular, it is challenging to map it to the “standard” machine learning framework, where one uses a training set S to find a best-fitting function f(x) in some hypothesis class. Here we make progress on this problem by showing that the functions learned by ICL often have a very simple structure: they correspond to the transformer LLM whose only inputs are the query x and a single “task vector” calculated from the training set. Thus, ICL can be seen as compressing S into a single task vector θ(S) and then using this task vector to modulate the transformer to produce the output. We support the above claim via comprehensive experiments across a range of models and tasks.
- Anthology ID:
- 2023.findings-emnlp.624
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2023
- Month:
- December
- Year:
- 2023
- Address:
- Singapore
- Editors:
- Houda Bouamor, Juan Pino, Kalika Bali
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 9318–9333
- URL:
- https://aclanthology.org/2023.findings-emnlp.624
- DOI:
- 10.18653/v1/2023.findings-emnlp.624
- Cite (ACL):
- Roee Hendel, Mor Geva, and Amir Globerson. 2023. In-Context Learning Creates Task Vectors. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9318–9333, Singapore. Association for Computational Linguistics.
- Cite (Informal):
- In-Context Learning Creates Task Vectors (Hendel et al., Findings 2023)
- PDF:
- https://aclanthology.org/2023.findings-emnlp.624.pdf