Abstract
In-context learning has been extensively validated in large language models. However, in-context example selection, a crucial ingredient of this approach, still lacks systematic and in-depth study of both its mechanism and its selection strategy. In this paper, we propose a data-compression approach to selecting in-context examples. We introduce a two-stage method that effectively chooses relevant examples while retaining sufficient information about the training dataset within the in-context examples. Our method yields a significant average improvement of 5.90% across five real-world datasets with four language models.
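The paper's actual two-stage procedure is detailed in the PDF linked below; as a rough, non-authoritative illustration of the general idea of compression-based example selection, the sketch below ranks a candidate pool by normalized compression distance (NCD) computed with gzip and keeps the closest examples. The `ncd` and `select_examples` helpers and the toy example pool are assumptions for illustration only, not the authors' algorithm.

```python
import gzip

def ncd(a: str, b: str) -> float:
    # Normalized compression distance: smaller values mean the two
    # strings share more structure that the compressor can exploit.
    ca = len(gzip.compress(a.encode("utf-8")))
    cb = len(gzip.compress(b.encode("utf-8")))
    cab = len(gzip.compress((a + " " + b).encode("utf-8")))
    return (cab - min(ca, cb)) / max(ca, cb)

def select_examples(query: str, pool: list[str], k: int = 4) -> list[str]:
    # Rank candidate in-context examples by compression affinity with
    # the query and keep the k closest as the prompt's demonstrations.
    return sorted(pool, key=lambda ex: ncd(query, ex))[:k]

if __name__ == "__main__":
    pool = [
        "The movie was a delight from start to finish. -> positive",
        "Terrible acting and a dull plot. -> negative",
        "The stock market rallied on strong earnings today. -> neutral",
    ]
    print(select_examples("A charming film with great performances.", pool, k=2))
```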
- Anthology ID: 2024.findings-acl.50
- Volume: Findings of the Association for Computational Linguistics ACL 2024
- Month: August
- Year: 2024
- Address: Bangkok, Thailand and virtual meeting
- Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 871–877
- URL: https://aclanthology.org/2024.findings-acl.50
- Cite (ACL): ZhongXiang Sun, Kepu Zhang, Haoyu Wang, Xiao Zhang, and Jun Xu. 2024. Effective In-Context Example Selection through Data Compression. In Findings of the Association for Computational Linguistics ACL 2024, pages 871–877, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
- Cite (Informal): Effective In-Context Example Selection through Data Compression (Sun et al., Findings 2024)
- PDF: https://preview.aclanthology.org/ingest-bitext-workshop/2024.findings-acl.50.pdf