Abstract
Even though large pre-trained multilingual models (e.g. mBERT, XLM-R) have led to significant performance gains on a wide range of cross-lingual NLP tasks, success on many downstream tasks still relies on the availability of sufficient annotated data. Traditional fine-tuning of pre-trained models using only a few target samples can cause over-fitting. This can be quite limiting, as most languages in the world are under-resourced. In this work, we investigate cross-lingual adaptation using a simple nearest-neighbor few-shot (<15 samples) inference technique for classification tasks. We experiment using a total of 16 distinct languages across two NLP tasks: XNLI and PAWS-X. Our approach consistently improves over traditional fine-tuning using only a handful of labeled samples in target locales. We also demonstrate its generalization capability across tasks.
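The core idea is nearest-neighbour inference over encoder representations: embed the handful of labelled target-language examples with a multilingual encoder, then assign each query the label of its closest support example. Below is a minimal sketch of this idea, not the authors' exact method; the encoder name and support set are illustrative assumptions (the paper builds on mBERT / XLM-R representations).

```python
# Illustrative nearest-neighbour few-shot inference (a sketch, not the
# paper's exact setup). A hypothetical multilingual sentence encoder from
# sentence-transformers stands in for mBERT / XLM-R representations.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical few-shot support set (<15 samples) in the target language.
# For sentence-pair tasks such as XNLI or PAWS-X, premise and hypothesis
# could be concatenated into a single string before encoding.
support_texts = [
    "Der Film war großartig.",    # label: positive
    "Der Film war schrecklich.",  # label: negative
]
support_labels = ["positive", "negative"]


def nn_predict(queries, support_texts, support_labels):
    """Assign each query the label of its cosine-nearest support example."""
    s = encoder.encode(support_texts, normalize_embeddings=True)
    q = encoder.encode(queries, normalize_embeddings=True)
    sims = q @ s.T                 # cosine similarity (unit-norm vectors)
    nearest = sims.argmax(axis=1)  # index of the closest support example
    return [support_labels[i] for i in nearest]


print(nn_predict(["Ein wunderbarer Film!"], support_texts, support_labels))
# -> ['positive']
```

Because inference reduces to a similarity lookup against a tiny support set, no gradient updates are needed on the target language, which is what avoids the over-fitting risk of fine-tuning on a few samples.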
- Anthology ID: 2021.emnlp-main.131
- Volume: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2021
- Address: Online and Punta Cana, Dominican Republic
- Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 1745–1753
- URL: https://aclanthology.org/2021.emnlp-main.131
- DOI: 10.18653/v1/2021.emnlp-main.131
- Cite (ACL): M Saiful Bari, Batool Haider, and Saab Mansour. 2021. Nearest Neighbour Few-Shot Learning for Cross-lingual Classification. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1745–1753, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal): Nearest Neighbour Few-Shot Learning for Cross-lingual Classification (Bari et al., EMNLP 2021)
- PDF: https://preview.aclanthology.org/proper-vol2-ingestion/2021.emnlp-main.131.pdf
- Data: XNLI