Exploring Explanations Improves the Robustness of In-Context Learning

Ukyo Honda, Tatsushi Oka


Abstract
In-context learning (ICL) has emerged as a successful paradigm for leveraging large language models (LLMs). However, it often struggles to generalize beyond the distribution of the provided demonstrations. A recent advancement in enhancing robustness is ICL with explanations (X-ICL), which improves prediction reliability by guiding LLMs to understand and articulate the reasoning behind correct labels. Building on this approach, we introduce an advanced framework that extends X-ICL by systematically exploring explanations for all possible labels (X2-ICL), thereby enabling more comprehensive and robust decision-making. Experimental results on multiple natural language understanding datasets validate the effectiveness of X2-ICL, demonstrating significantly improved robustness to out-of-distribution data compared to the existing ICL approaches.
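The sketch below illustrates one plausible reading of the abstract's core idea: where X-ICL elicits an explanation only for the presumed-correct label, X2-ICL elicits an explanation for every candidate label before the model commits to an answer. The prompt wording, the NLI-style label set, and the `generate` callable are illustrative assumptions, not the authors' exact protocol; see the PDF for the actual method.

```python
# Hypothetical sketch of X2-ICL prompting as described in the abstract:
# ask for an explanation under *each* candidate label, then ask the model
# which explanation is best supported. All prompt templates here are
# assumptions for illustration, not the paper's verbatim prompts.
from typing import Callable, Sequence


def build_demo(premise: str, hypothesis: str, label: str, explanation: str) -> str:
    """Format one in-context demonstration together with its explanation."""
    return (
        f"Premise: {premise}\nHypothesis: {hypothesis}\n"
        f"Label: {label}\nExplanation: {explanation}"
    )


def x2_icl_prompt(demos: Sequence[str], premise: str, hypothesis: str,
                  labels: Sequence[str]) -> str:
    """Build a prompt that explores an explanation for every candidate label."""
    parts = list(demos)
    parts.append(f"Premise: {premise}\nHypothesis: {hypothesis}")
    for label in labels:
        parts.append(f"Explain why the label could be '{label}':")
    parts.append("Considering the explanations above, which label is best supported? Answer:")
    return "\n\n".join(parts)


def predict(generate: Callable[[str], str], demos: Sequence[str],
            premise: str, hypothesis: str, labels: Sequence[str]) -> str:
    """`generate` is any text-completion function (e.g., a wrapped LLM API call)."""
    answer = generate(x2_icl_prompt(demos, premise, hypothesis, labels)).strip().lower()
    # Fall back to the first label if the model's answer matches no candidate.
    return next((label for label in labels if label in answer), labels[0])


if __name__ == "__main__":
    # Smoke test with a dummy generator; a real run would call an LLM here.
    dummy = lambda prompt: "entailment"
    demo = build_demo("A man plays guitar.", "A person makes music.",
                      "entailment", "Playing guitar is a way of making music.")
    print(predict(dummy, [demo], "A dog runs.", "An animal moves.",
                  ["entailment", "neutral", "contradiction"]))
```

Under these assumptions, the key design choice is that the model articulates evidence for every label before deciding, rather than rationalizing a single label, which is what the abstract credits for the improved out-of-distribution robustness.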
Anthology ID:
2025.acl-long.1155
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
23693–23714
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1155/
Cite (ACL):
Ukyo Honda and Tatsushi Oka. 2025. Exploring Explanations Improves the Robustness of In-Context Learning. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 23693–23714, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Exploring Explanations Improves the Robustness of In-Context Learning (Honda & Oka, ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1155.pdf