When in Doubt: Improving Classification Performance with Alternating Normalization

Menglin Jia, Austin Reiter, Ser-Nam Lim, Yoav Artzi, Claire Cardie


Abstract
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification. CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution using the predicted class distributions of high-confidence validation examples. CAN is easily applicable to any probabilistic classifier, with minimal computational overhead. We analyze the properties of CAN using simulated experiments, and empirically demonstrate its effectiveness across a diverse set of classification tasks.
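
The abstract summarizes CAN only at a high level. Below is a minimal sketch of the alternating-normalization idea it describes, offered as one plausible Sinkhorn-style reading of the abstract rather than the authors' exact procedure; consult the linked KMnP/can repository for the official implementation. The function name can_adjust, the sharpening exponent alpha, and the iteration count iters are illustrative assumptions, not names from the paper.

```python
import numpy as np

def can_adjust(b, A, prior, alpha=1.0, iters=3):
    """Illustrative sketch of Classification with Alternating Normalization.

    b     : (d,)   predicted class distribution of one low-confidence example
    A     : (n, d) predicted class distributions of high-confidence
                   validation examples (each row sums to 1)
    prior : (d,)   class prior estimated from the validation set
    alpha, iters : hypothetical hyperparameters (sharpening exponent and
                   number of alternating-normalization steps)
    """
    # Stack the uncertain example beneath the confident ones and sharpen.
    L = np.vstack([A, b[None, :]]) ** alpha
    for _ in range(iters):
        # Column step: rescale each class column so its mass matches the prior.
        L = L / L.sum(axis=0, keepdims=True) * prior[None, :]
        # Row step: renormalize each row back into a probability distribution.
        L = L / L.sum(axis=1, keepdims=True)
    # The last row is the readjusted distribution for the uncertain example.
    return L[-1]

# Toy usage: an ambiguous 3-class prediction adjusted against 50
# confident validation predictions drawn from a peaked Dirichlet.
rng = np.random.default_rng(0)
A = rng.dirichlet(np.ones(3) * 0.2, size=50)
b = np.array([0.40, 0.35, 0.25])
print(can_adjust(b, A, prior=np.ones(3) / 3))
```

In this sketch, the column step pulls probability mass toward the validation-set class prior, while the row step keeps every example a valid distribution; alternating the two nudges a low-confidence prediction toward classes that the confident examples leave under-claimed.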
Anthology ID:
2021.findings-emnlp.148
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1716–1723
URL:
https://aclanthology.org/2021.findings-emnlp.148
DOI:
10.18653/v1/2021.findings-emnlp.148
Cite (ACL):
Menglin Jia, Austin Reiter, Ser-Nam Lim, Yoav Artzi, and Claire Cardie. 2021. When in Doubt: Improving Classification Performance with Alternating Normalization. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1716–1723, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
When in Doubt: Improving Classification Performance with Alternating Normalization (Jia et al., Findings 2021)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2021.findings-emnlp.148.pdf
Video:
https://preview.aclanthology.org/ingest-2024-clasp/2021.findings-emnlp.148.mp4
Code:
KMnP/can
Data:
DialogRE