Understanding Hard Negatives in Noise Contrastive Estimation

Wenzheng Zhang, Karl Stratos

Abstract
The choice of negative examples is important in noise contrastive estimation. Recent works find that hard negatives—highest-scoring incorrect examples under the model—are effective in practice, but they are used without a formal justification. We develop analytical tools to understand the role of hard negatives. Specifically, we view the contrastive loss as a biased estimator of the gradient of the cross-entropy loss, and show both theoretically and empirically that setting the negative distribution to be the model distribution results in bias reduction. We also derive a general form of the score function that unifies various architectures used in text retrieval. By combining hard negatives with appropriate score functions, we obtain strong results on the challenging task of zero-shot entity linking.
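A minimal PyTorch sketch of the hard-negative contrastive loss the abstract describes (an illustration, not the authors' released code; the function name, the number of negatives, and the single-query setup are assumptions). Given the model's scores for all candidates, the negatives are the highest-scoring incorrect candidates, and the loss is a cross-entropy over the gold candidate plus those hard negatives:

import torch
import torch.nn.functional as F

def hard_negative_nce_loss(scores, positive_idx, num_negatives=7):
    # scores: 1-D tensor of model scores for all candidates of one query.
    # positive_idx: index of the gold candidate.
    # num_negatives: K, an illustrative hyperparameter.
    masked = scores.clone()
    masked[positive_idx] = float("-inf")  # exclude the gold from negatives
    # Hard negatives: highest-scoring incorrect candidates under the model.
    neg_idx = masked.topk(num_negatives).indices
    # Contrastive set = gold + hard negatives, with the gold at position 0.
    logits = torch.cat([scores[positive_idx : positive_idx + 1], scores[neg_idx]])
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(logits.unsqueeze(0), target)

Selecting negatives from the model's current top-scoring errors keeps the negative distribution close to the model distribution, which is the setting the paper shows reduces the bias of this estimator relative to the full cross-entropy gradient.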
Anthology ID:
2021.naacl-main.86
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1090–1101
URL:
https://aclanthology.org/2021.naacl-main.86
DOI:
10.18653/v1/2021.naacl-main.86
Cite (ACL):
Wenzheng Zhang and Karl Stratos. 2021. Understanding Hard Negatives in Noise Contrastive Estimation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1090–1101, Online. Association for Computational Linguistics.
Cite (Informal):
Understanding Hard Negatives in Noise Contrastive Estimation (Zhang & Stratos, NAACL 2021)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2021.naacl-main.86.pdf
Video:
https://preview.aclanthology.org/ingestion-script-update/2021.naacl-main.86.mp4
Code
WenzhengZhang/hard-nce-el
Data
AIDA CoNLL-YAGO, KILT, ZESHEL