kNN-CM: A Non-parametric Inference-Phase Adaptation of Parametric Text Classifiers
Rishabh Bhardwaj, Yingting Li, Navonil Majumder, Bo Cheng, Soujanya Poria
Abstract
Semi-parametric models exhibit the properties of both parametric and non-parametric modeling and have been shown to be effective in the next-word prediction language modeling task. However, there is a lack of studies on the text-discriminating properties of such models. We propose an inference-phase approach—k-Nearest Neighbor Classification Model (kNN-CM)—that enhances the capacity of a pre-trained parametric text classifier by incorporating a simple neighborhood search through the representation space of (memorized) training samples. The final class prediction of kNN-CM is based on the convex combination of probabilities obtained from kNN search and prediction of the classifier. Our experiments show consistent performance improvements on eight SuperGLUE tasks, three adversarial natural language inference (ANLI) datasets, 11 question-answering (QA) datasets, and two sentiment classification datasets.
- Anthology ID: 2023.findings-emnlp.903
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 13546–13557
- URL: https://aclanthology.org/2023.findings-emnlp.903
- DOI: 10.18653/v1/2023.findings-emnlp.903
- Cite (ACL): Rishabh Bhardwaj, Yingting Li, Navonil Majumder, Bo Cheng, and Soujanya Poria. 2023. kNN-CM: A Non-parametric Inference-Phase Adaptation of Parametric Text Classifiers. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13546–13557, Singapore. Association for Computational Linguistics.
- Cite (Informal): kNN-CM: A Non-parametric Inference-Phase Adaptation of Parametric Text Classifiers (Bhardwaj et al., Findings 2023)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2023.findings-emnlp.903.pdf
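The abstract's inference step—a neighborhood search over memorized training representations combined with the classifier's prediction via a convex combination—can be sketched as below. This is a minimal illustration, not the paper's implementation: the distance metric, the softmax-over-negative-distances weighting, the `temperature`, and the mixing weight `lam` are all assumptions made for the sketch.

```python
import math

def knn_probs(query, keys, labels, num_classes, k=8, temperature=1.0):
    """Class distribution from a kNN search over memorized training
    representations (sketch; the paper's exact distance and weighting
    choices may differ)."""
    dists = [math.dist(query, key) for key in keys]
    nearest = sorted(range(len(keys)), key=lambda i: dists[i])[:k]
    # Softmax over negative distances of the k nearest neighbors,
    # so closer neighbors contribute more probability mass.
    weights = [math.exp(-dists[i] / temperature) for i in nearest]
    total = sum(weights)
    probs = [0.0] * num_classes
    for w, i in zip(weights, nearest):
        probs[labels[i]] += w / total
    return probs

def knn_cm_predict(p_classifier, p_knn, lam=0.5):
    """Final kNN-CM prediction: convex combination of the classifier's
    class distribution and the kNN class distribution."""
    return [lam * pk + (1.0 - lam) * pc
            for pk, pc in zip(p_knn, p_classifier)]
```

In practice the `keys` would be hidden representations of the training set extracted once from the pre-trained classifier, and `lam` would be tuned on held-out data; since the whole adaptation happens at inference time, no classifier parameters are updated.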