Varsha Suresh


2021

Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification
Varsha Suresh | Desmond Ong
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Fine-grained classification involves dealing with datasets that have a larger number of classes with subtle differences between them. Guiding the model to focus on the dimensions that differentiate these commonly confusable classes is key to improving performance on fine-grained tasks. In this work, we analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks: emotion classification and sentiment analysis. We adaptively embed class relationships into a contrastive objective function to weigh positives and negatives differently, in particular weighting closely confusable negatives more heavily than less similar negative examples. We find that Label-aware Contrastive Loss outperforms previous contrastive methods in the presence of a larger number of classes and/or more confusable classes, and helps models produce output distributions that are more differentiated.
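
The abstract states the key idea (weighting confusable negatives more heavily in a contrastive objective) without giving the exact formulation. As a rough illustrative sketch only, the following PyTorch snippet re-weights the negatives of a supervised contrastive loss using a class-confusability matrix; the function name, the `class_sim` matrix, and the temperature are assumptions made for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def label_aware_contrastive_loss(z, labels, class_sim, temperature=0.1):
    """Illustrative sketch (not the paper's code) of a contrastive loss
    whose negatives are weighted by class confusability.

    z:         (B, d) encoder embeddings for the batch
    labels:    (B,) integer class labels
    class_sim: (C, C) assumed confusability scores; higher values
               up-weight negative pairs drawn from those class pairs
    """
    z = F.normalize(z, dim=1)
    sim = (z @ z.t()) / temperature                    # (B, B) pairwise similarities
    B = z.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=z.device)

    # positives: other batch items with the same label
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye

    # per-pair weight w[i, j] = class_sim[labels[i], labels[j]];
    # positives keep weight 1 so only negatives are re-weighted
    w = class_sim[labels.unsqueeze(1), labels.unsqueeze(0)]
    w = torch.where(pos_mask, torch.ones_like(w), w)

    # weighted softmax denominator: confusable negatives contribute more
    exp_sim = torch.exp(sim) * w * (~eye).float()
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)

    # average log-probability over each anchor's positives
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_count
    return loss.mean()

# Usage with random placeholder data:
z = torch.randn(8, 16, requires_grad=True)   # batch of encoder outputs
labels = torch.randint(0, 4, (8,))           # 4 classes
class_sim = torch.rand(4, 4)                 # e.g. derived from class-embedding similarity
label_aware_contrastive_loss(z, labels, class_sim).backward()
```

Placing the weights inside the softmax denominator means highly confusable negative classes exert a stronger repulsive force on the anchor's embedding, which matches the abstract's stated goal of differentiating commonly confused classes.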