Probing Out-of-Distribution Robustness of Language Models with Parameter-Efficient Transfer Learning

Hyunsoo Cho, Choonghyun Park, Junyeob Kim, Hyuhng Joon Kim, Kang Min Yoo, Sang-goo Lee


Abstract
As the size of pre-trained language models (PLMs) continues to grow, numerous parameter-efficient transfer learning (PETL) methods have been proposed to compensate for the high cost of fine-tuning. While large PLMs and PETL methods achieve impressive results on a wide range of benchmarks, it remains unclear whether they can effectively handle distributionally shifted inputs. In this study, we systematically explore how the ability to detect out-of-distribution (OOD) samples changes as the size of the PLM grows or the transfer method is altered. Specifically, we evaluate various PETL techniques, including fine-tuning, Adapter, LoRA, and prefix-tuning, on language models of different scales.
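As an illustration of the kind of setup the abstract describes, the sketch below shows how one of the evaluated PETL methods (LoRA) can be attached to a PLM using the Hugging Face PEFT library, together with a maximum-softmax-probability (MSP) score, a common OOD-detection baseline. This is a minimal, hypothetical example for orientation only: the model name, LoRA hyperparameters, and MSP scoring are assumptions, not the authors' actual implementation or the scoring function used in the paper.

```python
# Minimal sketch (not the authors' code): wrap a PLM with LoRA via the
# Hugging Face PEFT library and score inputs with maximum softmax
# probability (MSP), one common OOD-detection baseline.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "roberta-base"  # stand-in; the paper compares PLMs of several scales
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# LoRA: freeze the PLM and train only low-rank update matrices.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,               # rank of the low-rank update (illustrative value)
    lora_alpha=16,
    lora_dropout=0.1,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

@torch.no_grad()
def msp_score(texts):
    """Higher score = more in-distribution under the MSP baseline."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    logits = model(**batch).logits
    return torch.softmax(logits, dim=-1).max(dim=-1).values

# Inputs whose MSP falls below a validation-chosen threshold are flagged as OOD.
print(msp_score(["an in-distribution style review", "completely unrelated text"]))
```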
Anthology ID:
2023.starsem-1.21
Volume:
Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Alexis Palmer, Jose Camacho-Collados
Venue:
*SEM
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
225–235
URL:
https://aclanthology.org/2023.starsem-1.21
DOI:
10.18653/v1/2023.starsem-1.21
Cite (ACL):
Hyunsoo Cho, Choonghyun Park, Junyeob Kim, Hyuhng Joon Kim, Kang Min Yoo, and Sang-goo Lee. 2023. Probing Out-of-Distribution Robustness of Language Models with Parameter-Efficient Transfer Learning. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), pages 225–235, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Probing Out-of-Distribution Robustness of Language Models with Parameter-Efficient Transfer Learning (Cho et al., *SEM 2023)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2023.starsem-1.21.pdf