To What Extent Can In-Context Learning Solve Unseen Tasks?
Ryoma Shinto, Masashi Takeshita, Rafal Rzepka, Toshihiko Itoh
Abstract
While Large Language Models (LLMs) are known for their In-Context Learning (ICL) capabilities, there is no consensus on the underlying mechanisms. A key point of debate is whether ICL allows models to adapt to unseen tasks without parameter updates; that is, whether they can extrapolate. In this study, we address this question by constructing an arithmetic dataset based on the bivariate linear function z = ax + by to train a model and quantitatively evaluate its interpolation and extrapolation abilities through ICL. Our results show that while extrapolation was not achieved within our experimental design, tasks that were partially learned could be solved. We also found that the model acquires internal representations that can distinguish unseen tasks, and that greater task diversity in the training dataset improves ICL capabilities.
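The evaluation setup described in the abstract can be illustrated with a small sketch: few-shot prompts are built from demonstrations of a fixed z = ax + by task, and coefficient pairs (a, b) are split into seen (training), held-out in-range (interpolation), and out-of-range (extrapolation) tasks. The Python below is a minimal sketch under those assumptions; the prompt format, coefficient ranges, and function names are illustrative and not taken from the paper.

```python
import random


def make_icl_prompt(a, b, n_examples=4, value_range=(0, 9), seed=None):
    """Build a few-shot ICL prompt for the task z = a*x + b*y.

    Each demonstration is a line "x y -> z"; the final query line omits z.
    The "x y -> z" format is an illustrative assumption, not the paper's
    actual prompt template.
    """
    rng = random.Random(seed)
    lines = []
    for _ in range(n_examples):
        x, y = rng.randint(*value_range), rng.randint(*value_range)
        lines.append(f"{x} {y} -> {a * x + b * y}")
    qx, qy = rng.randint(*value_range), rng.randint(*value_range)
    lines.append(f"{qx} {qy} ->")
    return "\n".join(lines), a * qx + b * qy  # prompt text and gold answer


# A "task" is a coefficient pair (a, b). In this sketch, coefficients 1-4
# form the training range; the paper's actual ranges may differ.
in_range = [(a, b) for a in range(1, 5) for b in range(1, 5)]
interp_tasks = [(2, 3), (4, 1)]                   # held out, but inside the range
train_tasks = [t for t in in_range if t not in interp_tasks]
extrap_tasks = [(7, 9), (10, 2)]                  # coefficients outside the range

prompt, gold = make_icl_prompt(a=7, b=9, seed=0)  # an extrapolation task
print(prompt)
print("gold:", gold)  # compare against the model's completion of the last line
```

Under this framing, interpolation means an unseen (a, b) combination whose coefficients lie within the training range, while extrapolation means coefficients outside that range, mirroring the distinction the abstract draws.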
- Anthology ID: 2025.ijcnlp-srw.23
- Volume: The 14th International Joint Conference on Natural Language Processing and The 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
- Month: December
- Year: 2025
- Address: Mumbai, India
- Editors: Santosh T.y.s.s, Shuichiro Shimizu, Yifan Gong
- Venue: IJCNLP
- Publisher: Association for Computational Linguistics
- Pages: 277–288
- URL: https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.ijcnlp-srw.23/
- Cite (ACL): Ryoma Shinto, Masashi Takeshita, Rafal Rzepka, and Toshihiko Itoh. 2025. To What Extent Can In-Context Learning Solve Unseen Tasks?. In The 14th International Joint Conference on Natural Language Processing and The 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, pages 277–288, Mumbai, India. Association for Computational Linguistics.
- Cite (Informal): To What Extent Can In-Context Learning Solve Unseen Tasks? (Shinto et al., IJCNLP 2025)
- PDF: https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.ijcnlp-srw.23.pdf