Probing BERT’s priors with serial reproduction chains

Takateru Yamakoshi, Thomas Griffiths, Robert Hawkins


Abstract
Sampling is a promising bottom-up method for exposing what generative models have learned about language, but it remains unclear how to generate representative samples from popular masked language models (MLMs) like BERT. The MLM objective yields a dependency network with no guarantee of consistent conditional distributions, posing a problem for naive approaches. Drawing from theories of iterated learning in cognitive science, we explore the use of serial reproduction chains to sample from BERT’s priors. In particular, we observe that a unique and consistent estimator of the ground-truth joint distribution is given by a Generative Stochastic Network (GSN) sampler, which randomly selects which token to mask and reconstruct on each step. We show that the lexical and syntactic statistics of sentences from GSN chains closely match the ground-truth corpus distribution, and that these samples perform better than those from other methods on a large corpus of naturalness judgments. Our findings establish a firmer theoretical foundation for bottom-up probing and highlight richer deviations from human priors.
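The GSN sampling step described above (mask one randomly chosen token, then resample it from the model's conditional distribution) is simple to sketch in code. The following is a minimal illustration written against the HuggingFace transformers API; the bert-base-uncased checkpoint, sequence length, chain length, and random initialization are illustrative assumptions, not the paper's exact settings or implementation.

import torch
from transformers import BertForMaskedLM, BertTokenizer

# Illustrative checkpoint; any BERT-style masked language model would do.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def gsn_chain(seq_len=12, n_steps=1000, seed=0):
    """Run one serial reproduction (GSN) chain; return the final sentence."""
    torch.manual_seed(seed)
    # Start from random vocabulary tokens between [CLS] and [SEP]
    # (ids below 999 are special/unused in bert-base-uncased -- an
    # assumption about that checkpoint's vocabulary layout).
    ids = torch.randint(low=999, high=tokenizer.vocab_size, size=(seq_len,))
    ids = torch.cat([torch.tensor([tokenizer.cls_token_id]),
                     ids,
                     torch.tensor([tokenizer.sep_token_id])])
    for _ in range(n_steps):
        # GSN step: pick one position uniformly at random, mask it, and
        # resample that token from BERT's conditional distribution.
        pos = torch.randint(low=1, high=seq_len + 1, size=(1,)).item()
        masked = ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, pos]
        ids[pos] = torch.multinomial(torch.softmax(logits, dim=-1), 1).item()
    return tokenizer.decode(ids[1:-1].tolist())

if __name__ == "__main__":
    print(gsn_chain())

In practice one would discard an initial burn-in and record the chain state only at intervals before treating the visited sentences as samples, as is standard for Markov chain samplers.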
Anthology ID: 2022.findings-acl.314
Volume: Findings of the Association for Computational Linguistics: ACL 2022
Month: May
Year: 2022
Address: Dublin, Ireland
Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3977–3992
URL: https://aclanthology.org/2022.findings-acl.314
DOI: 10.18653/v1/2022.findings-acl.314
Cite (ACL):
Takateru Yamakoshi, Thomas Griffiths, and Robert Hawkins. 2022. Probing BERT’s priors with serial reproduction chains. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3977–3992, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Probing BERT’s priors with serial reproduction chains (Yamakoshi et al., Findings 2022)
PDF: https://preview.aclanthology.org/naacl24-info/2022.findings-acl.314.pdf