Do you have the right scissors? Tailoring Pre-trained Language Models via Monte-Carlo Methods

Ning Miao, Yuxuan Song, Hao Zhou, Lei Li


Abstract
A common approach is to pre-train a language model on a large corpus and then fine-tune it on task-specific data. In practice, we observe that fine-tuning a pre-trained model on a small dataset may lead to over- and/or under-estimation problems, where the model assigns too much probability mass to some regions of the data distribution and too little to others. In this paper, we propose MC-Tailor, a novel method that alleviates this issue in text generation tasks by truncating and transferring probability mass from over-estimated regions to under-estimated ones. Experiments on a variety of text generation datasets show that MC-Tailor consistently and significantly outperforms the standard fine-tuning approach.
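The "truncate and transfer" idea from the abstract can be illustrated with a small rejection-sampling sketch. The Python snippet below is illustrative only, not the released MC-Tailor implementation: the density_ratio function is a hypothetical stand-in given in closed form for a toy two-token distribution, whereas in practice such a ratio would have to be estimated from data.

# Illustrative sketch: rejection sampling that truncates acceptance
# probabilities so that over-estimated samples are thinned out and
# their probability mass is transferred to the rest.
import random

def tailor_sample(sample_from_model, density_ratio, cap):
    """Draw one sample from the tailored distribution.

    A draw x is accepted with probability min(density_ratio(x), cap) / cap,
    so accepted samples follow a distribution proportional to
    p_model(x) * min(density_ratio(x), cap): over-estimated regions
    (ratio < 1) lose mass, which shifts to under-estimated ones.
    """
    while True:
        x = sample_from_model()
        if random.random() < min(density_ratio(x), cap) / cap:
            return x

# Toy example: the model over-estimates "a" (0.8 vs. a true 0.5).
model_p = {"a": 0.8, "b": 0.2}
true_p = {"a": 0.5, "b": 0.5}

def sample_from_model():
    return "a" if random.random() < model_p["a"] else "b"

def density_ratio(x):
    return true_p[x] / model_p[x]  # p_true / p_model, known here only for the toy case

counts = {"a": 0, "b": 0}
for _ in range(10_000):
    counts[tailor_sample(sample_from_model, density_ratio, cap=2.5)] += 1
print(counts)  # roughly 50/50, matching the true distribution

With cap = 2.5, the tailored weights are 0.8 × 0.625 = 0.5 for "a" and 0.2 × 2.5 = 0.5 for "b", so the sampler recovers the true 50/50 split exactly in this toy case.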
Anthology ID:
2020.acl-main.314
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3436–3441
URL:
https://aclanthology.org/2020.acl-main.314
DOI:
10.18653/v1/2020.acl-main.314
Cite (ACL):
Ning Miao, Yuxuan Song, Hao Zhou, and Lei Li. 2020. Do you have the right scissors? Tailoring Pre-trained Language Models via Monte-Carlo Methods. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3436–3441, Online. Association for Computational Linguistics.
Cite (Informal):
Do you have the right scissors? Tailoring Pre-trained Language Models via Monte-Carlo Methods (Miao et al., ACL 2020)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2020.acl-main.314.pdf
Video:
http://slideslive.com/38928919
Code:
NingMiao/MC-tailor
Data:
DailyDialog