EICO: Improving Few-Shot Text Classification via Explicit and Implicit Consistency Regularization

Lei Zhao, Cheng Yao


Abstract
While prompt-based fine-tuning methods have advanced few-shot natural language understanding tasks, self-training methods are also being explored. This work revisits consistency regularization in self-training and presents the explicit and implicit consistency regularization enhanced language model (EICO). By employing both explicit and implicit consistency regularization, EICO advances the performance of prompt-based few-shot text classification. For implicit consistency regularization, we generate pseudo-labels from the weakly-augmented view and use them as targets for predictions from the strongly-augmented view. For explicit consistency regularization, we minimize the difference between the prediction on the augmented view and the prediction on the original view. We conducted extensive experiments on six text classification datasets and found that, with sixteen labeled examples, EICO achieves competitive performance compared to existing self-training few-shot learning methods.
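The abstract describes two consistency losses: an implicit, FixMatch-style loss (pseudo-labels from a weakly-augmented view supervise the strongly-augmented view) and an explicit loss (predictions on the augmented and original views are pulled together). The following PyTorch sketch illustrates that combination under stated assumptions; the model interface, the augmentation inputs, the confidence threshold tau, and the use of KL divergence for the explicit term are illustrative choices, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def eico_consistency_losses(model, x_orig, x_weak, x_strong, tau=0.9):
    # Implicit consistency (FixMatch-style): pseudo-labels are generated
    # from the weakly-augmented view and used as targets for the
    # strongly-augmented view. `tau` filters low-confidence pseudo-labels
    # (an assumption; the paper may use a different selection rule).
    with torch.no_grad():
        probs_weak = F.softmax(model(x_weak), dim=-1)
        conf, pseudo = probs_weak.max(dim=-1)
        mask = (conf >= tau).float()  # keep only confident pseudo-labels
    logits_strong = model(x_strong)
    implicit = (F.cross_entropy(logits_strong, pseudo, reduction="none") * mask).mean()

    # Explicit consistency: minimize the difference between the prediction
    # on the augmented view and the prediction on the original view.
    # KL divergence is one possible distance; the original view is detached
    # so it acts as the target distribution.
    logits_orig = model(x_orig)
    explicit = F.kl_div(
        F.log_softmax(logits_strong, dim=-1),
        F.softmax(logits_orig.detach(), dim=-1),
        reduction="batchmean",
    )
    return implicit, explicit

In a training loop, these two terms would typically be weighted and added to the supervised loss on the labeled examples; the weighting scheme here is left open, as the abstract does not specify it.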
Anthology ID:
2022.findings-acl.283
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3582–3587
URL:
https://aclanthology.org/2022.findings-acl.283
DOI:
10.18653/v1/2022.findings-acl.283
Cite (ACL):
Lei Zhao and Cheng Yao. 2022. EICO: Improving Few-Shot Text Classification via Explicit and Implicit Consistency Regularization. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3582–3587, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
EICO: Improving Few-Shot Text Classification via Explicit and Implicit Consistency Regularization (Zhao & Yao, Findings 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.findings-acl.283.pdf
Data
MPQA Opinion Corpus
SST