Abstract
We introduce BitFit, a sparse fine-tuning method in which only the bias terms of the model (or a subset of them) are modified. We show that with small-to-medium training data, applying BitFit to pre-trained BERT models is competitive with (and sometimes better than) fine-tuning the entire model. For larger data, the method is competitive with other sparse fine-tuning methods. Besides their practical utility, these findings are relevant to the question of understanding the commonly used process of fine-tuning: they support the hypothesis that fine-tuning is mainly about exposing knowledge induced by language-modeling training, rather than about learning new task-specific linguistic knowledge.
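The core idea is simple enough to sketch in a few lines. Below is a minimal, illustrative PyTorch snippet, assuming the Hugging Face `transformers` library and a `bert-base-uncased` checkpoint (neither is prescribed by this page): it freezes every pre-trained weight except the bias terms, keeping the newly added classification head trainable as well. The exact subset of biases trained in the paper's experiments may differ.

```python
# Minimal BitFit-style setup (sketch; assumes Hugging Face transformers).
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # checkpoint/labels chosen for illustration
)

for name, param in model.named_parameters():
    # Keep bias terms trainable; the task-specific classification head
    # ("classifier.*" in BertForSequenceClassification) is new, so it is
    # commonly left trainable too. Everything else is frozen.
    param.requires_grad = "bias" in name or name.startswith("classifier")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable} / {total} ({100 * trainable / total:.2f}%)")

# The optimizer only receives the unfrozen (bias + head) parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

With this setup, the trainable-parameter count drops to well under one percent of the full model, which is the source of the method's parameter efficiency.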
- Anthology ID: 2022.acl-short.1
- Volume: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 1–9
- URL: https://aclanthology.org/2022.acl-short.1
- DOI: 10.18653/v1/2022.acl-short.1
- Cite (ACL): Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022. BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1–9, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models (Ben Zaken et al., ACL 2022)
- PDF: https://preview.aclanthology.org/auto-file-uploads/2022.acl-short.1.pdf
- Code: benzakenelad/BitFit + additional community code
- Data: CoLA, GLUE, MRPC, QNLI, SQuAD, SST