Beyond Fine-tuning: Few-Sample Sentence Embedding Transfer

Siddhant Garg, Rohit Kumar Sharma, Yingyu Liang

Abstract
Fine-tuning (FT) pre-trained sentence embedding models on small datasets has been shown to have limitations. In this paper we show that concatenating the embeddings from the pre-trained model with those from a simple sentence embedding model trained only on the target data can outperform FT on few-sample tasks. To this end, a linear classifier is trained on the combined embeddings, either by freezing the embedding model weights or by training the classifier and embedding models end-to-end. We evaluate on seven small datasets from NLP tasks and show that our approach with end-to-end training outperforms FT with negligible computational overhead. Further, we show that sophisticated combination techniques like CCA and KCCA do not work as well in practice as concatenation. We provide theoretical analysis to explain this empirical observation.
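As a concrete illustration of the concatenation scheme the abstract describes, here is a minimal PyTorch sketch. It is not the authors' released code: the SimpleEncoder bag-of-embeddings model, the ConcatClassifier wrapper, and all dimensions are illustrative assumptions, and the pre-trained sentence embedding is assumed to be precomputed by a frozen encoder (e.g., a BERT [CLS] vector).

```python
# Hypothetical sketch of "concatenate pre-trained and target-trained embeddings,
# then train a linear classifier" -- names and dimensions are assumptions.
import torch
import torch.nn as nn

class SimpleEncoder(nn.Module):
    """Toy sentence encoder trained from scratch on the target data:
    averages learned word embeddings over the sentence."""
    def __init__(self, vocab_size: int, dim: int):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # default mode="mean"

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.emb(token_ids)  # (batch, dim)

class ConcatClassifier(nn.Module):
    """Linear classifier over [pretrained ; simple] concatenated embeddings."""
    def __init__(self, pretrained_dim: int, simple_dim: int,
                 num_classes: int, vocab_size: int):
        super().__init__()
        self.simple = SimpleEncoder(vocab_size, simple_dim)
        self.clf = nn.Linear(pretrained_dim + simple_dim, num_classes)

    def forward(self, pretrained_emb: torch.Tensor,
                token_ids: torch.Tensor) -> torch.Tensor:
        z = torch.cat([pretrained_emb, self.simple(token_ids)], dim=-1)
        return self.clf(z)

# Usage with random stand-in data. The pre-trained embeddings are treated as
# fixed inputs here (the frozen variant); the paper's end-to-end variant
# would also backpropagate into the embedding models.
model = ConcatClassifier(pretrained_dim=768, simple_dim=100,
                         num_classes=2, vocab_size=5000)
pretrained_emb = torch.randn(8, 768)           # frozen encoder output, 8 sentences
token_ids = torch.randint(0, 5000, (8, 20))    # 20 token ids per sentence
logits = model(pretrained_emb, token_ids)      # (8, 2)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
loss.backward()  # updates only the simple encoder and the linear classifier
```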
Anthology ID:
2020.aacl-main.47
Volume:
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing
Month:
December
Year:
2020
Address:
Suzhou, China
Editors:
Kam-Fai Wong, Kevin Knight, Hua Wu
Venue:
AACL
Publisher:
Association for Computational Linguistics
Pages:
460–469
URL:
https://aclanthology.org/2020.aacl-main.47
Cite (ACL):
Siddhant Garg, Rohit Kumar Sharma, and Yingyu Liang. 2020. Beyond Fine-tuning: Few-Sample Sentence Embedding Transfer. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 460–469, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Beyond Fine-tuning: Few-Sample Sentence Embedding Transfer (Garg et al., AACL 2020)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2020.aacl-main.47.pdf
Data
GLUE
MPQA Opinion Corpus