Deriving Language Models from Masked Language Models

Lucas Torroba Hennigen, Yoon Kim


Abstract
Masked language models (MLMs) do not explicitly define a distribution over language, i.e., they are not language models per se. However, recent work has implicitly treated them as such for the purposes of generation and scoring. This paper studies methods for deriving explicit joint distributions from MLMs, focusing on distributions over two tokens, which makes it possible to calculate exact distributional properties. We find that an approach based on identifying joints whose conditionals are closest to those of the MLM works well and outperforms existing Markov random field-based approaches. We further find that this derived model’s conditionals can even occasionally outperform the original MLM’s conditionals.
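To make the abstract's core idea concrete, below is a minimal, hypothetical sketch of "conditional matching" over two token positions: given a (toy) MLM's pairwise conditionals, search for a joint distribution whose implied conditionals are as close as possible to the MLM's. The KL-based objective, the gradient-descent parameterization, and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: derive a joint q(x1, x2) whose conditionals approximate
# given MLM conditionals. Toy vocabulary, random stand-in conditionals.
import torch
import torch.nn.functional as F

V = 5  # toy vocabulary size

# Stand-ins for the MLM's conditionals:
#   cond_1_given_2[j] = p(x1 | x2 = j), cond_2_given_1[i] = p(x2 | x1 = i)
cond_1_given_2 = torch.softmax(torch.randn(V, V), dim=-1)
cond_2_given_1 = torch.softmax(torch.randn(V, V), dim=-1)

# Parameterize the joint q(x1, x2) with unnormalized logits over all V*V pairs.
logits = torch.zeros(V, V, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(500):
    q = torch.softmax(logits.view(-1), dim=0).view(V, V)   # joint over (x1, x2)
    # Conditionals implied by the joint.
    q_1_given_2 = (q / q.sum(dim=0, keepdim=True)).t()     # row j: q(x1 | x2 = j)
    q_2_given_1 = q / q.sum(dim=1, keepdim=True)            # row i: q(x2 | x1 = i)
    # Match the joint's conditionals to the MLM's (here: a sum of KL divergences;
    # the choice of divergence is an assumption of this sketch).
    loss = (
        F.kl_div(q_1_given_2.log(), cond_1_given_2, reduction="batchmean")
        + F.kl_div(q_2_given_1.log(), cond_2_given_1, reduction="batchmean")
    )
    opt.zero_grad()
    loss.backward()
    opt.step()

print("derived joint q(x1, x2):")
print(torch.softmax(logits.view(-1), dim=0).view(V, V))
```

In this toy setting the derived joint can be inspected exactly (it is just a V x V table), which mirrors the paper's focus on two-token distributions where exact distributional properties are computable.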
Anthology ID:
2023.acl-short.99
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1149–1159
URL:
https://aclanthology.org/2023.acl-short.99
DOI:
10.18653/v1/2023.acl-short.99
Cite (ACL):
Lucas Torroba Hennigen and Yoon Kim. 2023. Deriving Language Models from Masked Language Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1149–1159, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Deriving Language Models from Masked Language Models (Torroba Hennigen & Kim, ACL 2023)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2023.acl-short.99.pdf
Video:
https://preview.aclanthology.org/dois-2013-emnlp/2023.acl-short.99.mp4