Distinguishability Calibration to In-Context Learning
Hongjing Li, Hanqi Yan, Yanran Li, Li Qian, Yulan He, Lin Gui
Abstract
Recent years have witnessed increasing interest in prompt-based learning, in which models can be trained on only a few annotated instances, making them suitable for low-resource settings. This is even more challenging in fine-grained classification, as pre-trained language models tend to generate similar output embeddings, which are difficult for a prompt-based classifier to discriminate. In this work, we alleviate this information diffusion issue by proposing a calibration method based on a transformation that rotates the embedding features into a new metric space, where we adapt the ratio of each dimension to a uniform distribution to guarantee the distinguishability of the learned embeddings. Furthermore, we take advantage of hyperbolic embeddings to capture the relations between dimensions via a coarse-to-fine metric learning strategy, enhancing interpretability. Extensive experiments on three datasets under various settings demonstrate the effectiveness of our approach.
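The calibration idea in the abstract can be pictured with a minimal sketch: rotate the embeddings with an orthogonal transformation, then map each dimension onto a uniform distribution so that collapsed embeddings spread apart. The Python sketch below is one plausible reading of that description, not the authors' released code; the random rotation, the rank-based (empirical-CDF) uniformization, and the `calibrate_embeddings` helper are all illustrative assumptions.

```python
# Hypothetical sketch of distinguishability calibration, assuming a random
# orthogonal rotation plus a per-dimension rank transform. The paper
# presumably *learns* the rotation; here it is sampled for brevity.
import numpy as np

def calibrate_embeddings(X: np.ndarray, seed: int = 0) -> np.ndarray:
    """Rotate embeddings X of shape (n, d), then map each dimension to (0, 1)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Orthogonal rotation via QR decomposition of a Gaussian matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    Z = X @ Q
    # Empirical-CDF (rank) transform per dimension: each coordinate becomes
    # roughly uniform on (0, 1), separating embeddings the encoder collapsed.
    ranks = Z.argsort(axis=0).argsort(axis=0)
    return (ranks + 0.5) / n

# Tightly clustered embeddings become well separated after calibration.
X = 1.0 + 0.01 * np.random.default_rng(1).standard_normal((8, 4))
print(calibrate_embeddings(X).round(3))
```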
- Anthology ID:
- 2023.findings-eacl.102
- Volume:
- Findings of the Association for Computational Linguistics: EACL 2023
- Month:
- May
- Year:
- 2023
- Address:
- Dubrovnik, Croatia
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1385–1397
- URL:
- https://aclanthology.org/2023.findings-eacl.102
- Cite (ACL):
- Hongjing Li, Hanqi Yan, Yanran Li, Li Qian, Yulan He, and Lin Gui. 2023. Distinguishability Calibration to In-Context Learning. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1385–1397, Dubrovnik, Croatia. Association for Computational Linguistics.
- Cite (Informal):
- Distinguishability Calibration to In-Context Learning (Li et al., Findings 2023)
- PDF:
- https://aclanthology.org/2023.findings-eacl.102.pdf