Masked Latent Semantic Modeling: an Efficient Pre-training Alternative to Masked Language Modeling

Gábor Berend


Abstract
In this paper, we propose an alternative to the classic masked language modeling (MLM) pre-training paradigm, in which the objective is changed from reconstructing the exact identity of randomly selected masked subwords to predicting their latent semantic properties. We coin the proposed pre-training technique masked latent semantic modeling (MLSM for short). To make the contextualized determination of the latent semantic properties of the masked subwords possible, we rely on an unsupervised technique based on sparse coding. Our experimental results reveal that the fine-tuned performance of models pre-trained via MLSM is consistently and significantly better than that obtained with vanilla MLM pre-training and other strong baselines.
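
The approach described in the abstract, i.e., replacing the token-identity target with a distribution over latent semantic dimensions obtained through unsupervised sparse coding, can be illustrated with a minimal Python sketch. Everything below (the use of scikit-learn's MiniBatchDictionaryLearning, the dimension counts, the sparsity settings, and the KL-divergence loss over normalized sparse codes) is an illustrative assumption, not the authors' actual implementation.

# Minimal sketch of the MLSM idea from the abstract (illustrative assumptions
# only): pre-training targets are distributions over latent semantic dimensions
# obtained by sparse-coding contextual embeddings, and masked positions are
# trained against these distributions instead of exact subword identities.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.decomposition import MiniBatchDictionaryLearning

# 1) Learn a sparse-coding dictionary over contextual embeddings collected from
#    a pre-trained encoder on unlabeled text (random placeholder data here).
hidden_states = np.random.randn(2000, 768).astype(np.float32)
dictionary = MiniBatchDictionaryLearning(
    n_components=300,             # number of latent semantic dimensions (assumed)
    fit_algorithm="cd",           # coordinate descent supports the positivity constraint
    transform_algorithm="lasso_cd",
    transform_alpha=0.05,         # sparsity level (assumed)
    positive_code=True,           # non-negative codes, so they can be normalized
)
dictionary.fit(hidden_states)

def latent_targets(token_embeddings: np.ndarray) -> torch.Tensor:
    """Sparse-code token embeddings and normalize the codes into distributions."""
    codes = torch.from_numpy(dictionary.transform(token_embeddings)).float().clamp(min=0.0)
    return codes / codes.sum(dim=-1, keepdim=True).clamp(min=1e-9)

# 2) MLSM-style objective for masked positions: the model predicts logits over
#    the latent dimensions and is trained with a KL-divergence loss against the
#    sparse-code distributions of the masked subwords.
def mlsm_loss(predicted_logits: torch.Tensor, target_dist: torch.Tensor) -> torch.Tensor:
    return F.kl_div(F.log_softmax(predicted_logits, dim=-1),
                    target_dist, reduction="batchmean")

A divergence-based loss is used in the sketch because the targets are soft distributions over latent dimensions rather than one-hot token identities, which is exactly the change of objective the abstract describes.
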
Anthology ID: 2023.findings-acl.876
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 13949–13962
URL: https://aclanthology.org/2023.findings-acl.876
DOI: 10.18653/v1/2023.findings-acl.876
Cite (ACL):
Gábor Berend. 2023. Masked Latent Semantic Modeling: an Efficient Pre-training Alternative to Masked Language Modeling. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13949–13962, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Masked Latent Semantic Modeling: an Efficient Pre-training Alternative to Masked Language Modeling (Berend, Findings 2023)
PDF: https://preview.aclanthology.org/dois-2013-emnlp/2023.findings-acl.876.pdf