Yancheng Yuan
2025
Enhancing Input-Label Mapping in In-Context Learning with Contrastive Decoding
Keqin Peng | Liang Ding | Yuanxin Ouyang | Meng Fang | Yancheng Yuan | Dacheng Tao
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Large language models (LLMs) excel at a range of tasks through in-context learning (ICL), where only a few task examples guide their predictions. However, prior research highlights that LLMs often overlook input-label mapping information in ICL, relying more on their pre-trained knowledge. To address this issue, we introduce In-Context Contrastive Decoding (ICCD), a novel method that emphasizes input-label mapping by contrasting the output distributions between positive and negative in-context examples. Experiments on 7 natural language understanding (NLU) tasks show that ICCD brings consistent and significant improvements (up to +1.8 on average) across 6 different scales of LLMs without requiring additional training. Our approach is also versatile, enhancing performance under various demonstration selection methods and demonstrating broad applicability and effectiveness. The code and scripts are released at https://github.com/Romainpkq/CD_ICL.
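A minimal sketch of the contrastive-decoding idea the abstract describes, assuming a HuggingFace-style causal LM. The combination rule `(1 + alpha) * pos - alpha * neg`, the `alpha` weight, and the toy sentiment prompts are illustrative assumptions, not the paper's published implementation:

```python
# Sketch: contrast next-token distributions from positive vs. negative
# in-context examples to emphasize the input-label mapping.
# Assumptions: gpt2 as a stand-in model; alpha and the contrast rule are
# illustrative, not taken from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def next_token_logits(prompt: str) -> torch.Tensor:
    """Logits over the vocabulary for the token following `prompt`."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids).logits[0, -1]

def iccd_logits(pos_prompt: str, neg_prompt: str, alpha: float = 0.5) -> torch.Tensor:
    # Positive prompt: demonstrations with correct input-label pairs.
    # Negative prompt: the same inputs with wrong labels, so subtracting
    # its distribution down-weights predictions the model would make
    # regardless of the demonstrated input-label mapping.
    pos = next_token_logits(pos_prompt)
    neg = next_token_logits(neg_prompt)
    return (1 + alpha) * pos - alpha * neg

pos = "Review: great movie! Sentiment: positive\nReview: awful plot. Sentiment:"
neg = "Review: great movie! Sentiment: negative\nReview: awful plot. Sentiment:"
print(tok.decode(iccd_logits(pos, neg).argmax()))
```

Because both prompts share the test input and differ only in the demonstration labels, the contrast isolates what the labels contribute, which is the input-label mapping signal the method aims to amplify.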
2024
Revisiting Demonstration Selection Strategies in In-Context Learning
Keqin Peng | Liang Ding | Yancheng Yuan | Xuebo Liu | Min Zhang | Yuanxin Ouyang | Dacheng Tao
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) have shown an impressive ability to perform a wide range of tasks using in-context learning (ICL), where a few examples are used to describe a task to the model. However, the performance of ICL varies significantly with the choice of demonstrations, and previous research has usually focused on the data aspect while ignoring the model's effect. In this work, we first revisit the factors contributing to this variance from the model aspect and find that the demonstration choice is both data- and model-dependent. We further propose a conjecture that a demonstration's performance positively correlates with its contribution to the model's understanding of the test samples, and accordingly propose a data- and model-dependent demonstration selection method, TopK + ConE. Empirically, our method yields consistent improvements in both language understanding and generation tasks across different model scales. Further analyses confirm that, besides its generality and stability under different circumstances, our method provides a unified explanation for the effectiveness of previous methods. Code is publicly available at https://github.com/Romainpkq/revisit_demon_selection_in_ICL.
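A hedged sketch of what a two-stage, data- and model-dependent selection loop in this spirit could look like, assuming the model-dependent stage scores a candidate by how much conditioning on it lowers the model's negative log-likelihood on the test input. The helpers (`embed`, `nll_of_test`), the candidate format, and the exact scoring rule are assumptions for illustration, not the paper's exact procedure:

```python
# Sketch: two-stage demonstration selection.
# Stage 1 (TopK): data-dependent filter by embedding similarity.
# Stage 2 (ConE-style): model-dependent re-ranking by how much each
# candidate reduces the model's uncertainty on the test input.
# `embed` is an assumed caller-supplied sentence-embedding function
# returning a 1-D torch tensor.
import torch
import torch.nn.functional as F

def nll_of_test(model, tok, context: str, test_input: str) -> float:
    """Average negative log-likelihood of `test_input` tokens given `context`."""
    full = tok(context + test_input, return_tensors="pt").input_ids
    ctx_len = tok(context, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(full).logits
    # Score only the test-input tokens, each predicted from its prefix.
    targets = full[0, ctx_len:]
    preds = logits[0, ctx_len - 1:-1]
    return F.cross_entropy(preds, targets).item()

def topk_cone_select(model, tok, candidates, test_input, embed, k=16, m=4):
    # Stage 1: keep the k candidates closest to the test input in
    # embedding space (data-dependent).
    sims = [(c, F.cosine_similarity(embed(c), embed(test_input), dim=0))
            for c in candidates]
    shortlist = [c for c, _ in sorted(sims, key=lambda x: -x[1])[:k]]
    # Stage 2: keep the m demonstrations whose presence most lowers the
    # model's loss on the test input (model-dependent).
    scored = [(c, nll_of_test(model, tok, c + "\n", test_input))
              for c in shortlist]
    return [c for c, _ in sorted(scored, key=lambda x: x[1])[:m]]
```

The two stages mirror the abstract's finding that demonstration choice is both data- and model-dependent: similarity retrieval handles the data side cheaply, and the likelihood-based re-ranking ties the final choice to how much each demonstration helps this particular model understand this particular test sample.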
Co-authors
- Liang Ding 2
- Yuanxin Ouyang 2
- Keqin Peng 2
- Dacheng Tao 2
- Meng Fang 1
Venues
- ACL 2