Dual Debiasing for Noisy In-Context Learning for Text Generation

Siqi Liang, Sumyeong Ahn, Paramveer Dhillon, Jiayu Zhou


Abstract
In-context learning (ICL) relies heavily on high-quality demonstrations drawn from large annotated corpora. Existing approaches detect noisy annotations by ranking local perplexities, presuming that noisy samples yield higher perplexities than their clean counterparts. However, this assumption breaks down when the noise ratio is high and many demonstrations are flawed. We re-examine the perplexity-based paradigm for text generation under noisy annotations, highlighting two sources of bias in perplexity: the annotation itself and the domain-specific knowledge inherent in large language models (LLMs). To overcome these biases, we introduce a dual-debiasing framework that uses synthesized neighbors to explicitly correct perplexity estimates, yielding a robust Sample Cleanliness Score. This metric uncovers absolute sample cleanliness regardless of the overall corpus noise level. Extensive experiments demonstrate our method’s superior noise-detection capabilities and show that its final ICL performance is comparable to that of a fully clean demonstration corpus. Moreover, our approach remains robust even when noise ratios are extremely high.
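To make the general idea concrete, below is a minimal Python sketch of perplexity-based cleanliness scoring with a neighbor-based debiasing correction. It is an illustration only, assuming a small off-the-shelf causal LM via Hugging Face `transformers`; the function names, the ratio-style score, and the neighbor-averaging step are hypothetical choices, not the paper's actual Sample Cleanliness Score or implementation.

```python
# Hedged sketch: score an annotation's cleanliness by comparing its conditional
# perplexity against the average perplexity of synthesized neighbor annotations.
# The neighbor-synthesis step itself is assumed to happen elsewhere.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

@torch.no_grad()
def perplexity(prompt: str, target: str) -> float:
    """Perplexity of `target` conditioned on `prompt` under the LM."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + target, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # ignore prompt tokens in the loss
    loss = model(full_ids, labels=labels).loss  # mean NLL over target tokens
    return math.exp(loss.item())

def cleanliness_score(x: str, y: str, neighbor_ys: list[str]) -> float:
    """Hypothetical debiased score: higher means y looks cleaner.

    Dividing by the mean perplexity of neighbor annotations normalizes away
    the two biases the abstract names: how easy the annotation itself is to
    predict, and the LM's domain-specific priors over annotations for x.
    """
    ppl_y = perplexity(x, y)
    ppl_neighbors = sum(perplexity(x, ny) for ny in neighbor_ys) / len(neighbor_ys)
    return ppl_neighbors / ppl_y  # > 1 when y is more predictable than its neighbors
```

Because the score compares each annotation to alternatives for the same input rather than to other corpus samples, it does not depend on the overall noise ratio, which is the property the abstract highlights for high-noise regimes.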
Anthology ID:
2025.findings-acl.665
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12855–12868
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.665/
Cite (ACL):
Siqi Liang, Sumyeong Ahn, Paramveer Dhillon, and Jiayu Zhou. 2025. Dual Debiasing for Noisy In-Context Learning for Text Generation. In Findings of the Association for Computational Linguistics: ACL 2025, pages 12855–12868, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Dual Debiasing for Noisy In-Context Learning for Text Generation (Liang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.665.pdf