oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes

Daniel Campos, Alexandre Marques, Mark Kurtz, Cheng Xiang Zhai


Anthology ID:
2023.sustainlp-1.3
Volume:
Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)
Month:
July
Year:
2023
Address:
Toronto, Canada (Hybrid)
Venue:
sustainlp
Publisher:
Association for Computational Linguistics
Pages:
39–58
URL:
https://aclanthology.org/2023.sustainlp-1.3
Cite (ACL):
Daniel Campos, Alexandre Marques, Mark Kurtz, and Cheng Xiang Zhai. 2023. oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes. In Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP), pages 39–58, Toronto, Canada (Hybrid). Association for Computational Linguistics.
Cite (Informal):
oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes (Campos et al., sustainlp 2023)
PDF:
https://preview.aclanthology.org/paclic-22-ingestion/2023.sustainlp-1.3.pdf