FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding

Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Chonghua Liao, Li Jian, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, Zhilin Yang


Abstract
The few-shot natural language understanding (NLU) task has attracted much recent attention. However, prior methods have been evaluated under a disparate set of protocols, which hinders fair comparison and measuring the progress of the field. To address this issue, we introduce an evaluation framework that improves previous evaluation procedures in three key aspects, i.e., test performance, dev-test correlation, and stability. Under this new evaluation framework, we re-evaluate several state-of-the-art few-shot methods for NLU tasks. Our framework reveals new insights: (1) both the absolute performance and relative gap of the methods were not accurately estimated in prior literature; (2) no single method dominates most tasks with consistent performance; (3) improvements of some methods diminish with a larger pretrained model; and (4) gains from different methods are often complementary and the best combined model performs close to a strong fully-supervised baseline. We open-source our toolkit, FewNLU, that implements our evaluation framework along with a number of state-of-the-art methods.
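To make the three evaluation aspects named in the abstract concrete, below is a minimal, illustrative sketch. It is not the FewNLU toolkit's API; the data-split strategy, field names, and toy scores are assumptions for illustration only. Given per-run records of development and test scores across several data splits and hyperparameter configurations, it reports (1) the Spearman dev-test correlation, (2) test performance of the dev-selected configuration, and (3) stability as the standard deviation of test scores across splits.

# Illustrative sketch only (not the FewNLU toolkit API). Hypothetical records
# and configuration names; requires scipy.
from statistics import mean, stdev
from scipy.stats import spearmanr

# One record per (hyperparameter config, data split), each holding a dev-set
# score and the corresponding test-set score.
runs = [
    {"config": "lr=1e-5", "split": 0, "dev": 0.71, "test": 0.69},
    {"config": "lr=1e-5", "split": 1, "dev": 0.73, "test": 0.70},
    {"config": "lr=2e-5", "split": 0, "dev": 0.75, "test": 0.72},
    {"config": "lr=2e-5", "split": 1, "dev": 0.74, "test": 0.73},
]

# (1) Dev-test correlation: does a better dev score predict a better test score?
dev_scores = [r["dev"] for r in runs]
test_scores = [r["test"] for r in runs]
corr, _ = spearmanr(dev_scores, test_scores)

# (2) Test performance: select the config with the best mean dev score,
# then report its mean test score across splits.
configs = {c: [r for r in runs if r["config"] == c] for c in {r["config"] for r in runs}}
best_config = max(configs, key=lambda c: mean(r["dev"] for r in configs[c]))
best_tests = [r["test"] for r in configs[best_config]]

# (3) Stability: standard deviation of the test score across splits.
print(f"dev-test Spearman correlation: {corr:.3f}")
print(f"best config: {best_config}, mean test: {mean(best_tests):.3f}, "
      f"std (stability): {stdev(best_tests):.3f}")

Spearman rank correlation is used in this sketch because what matters for few-shot model selection is whether the development set ranks configurations in the same order as the test set, not the absolute score gap.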
Anthology ID:
2022.acl-long.38
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
501–516
URL:
https://aclanthology.org/2022.acl-long.38
DOI:
10.18653/v1/2022.acl-long.38
Cite (ACL):
Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Chonghua Liao, Li Jian, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, and Zhilin Yang. 2022. FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 501–516, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding (Zheng et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.38.pdf
Software:
 2022.acl-long.38.software.zip
Code:
 THUDM/FewNLU
Data:
FewGLUE_64_labeled, BoolQ, COPA, FewGlue, GLUE, MultiRC, SuperGLUE, WSC