A Metrological Perspective on Reproducibility in NLP*

Anya Belz


Abstract
Reproducibility has become an increasingly debated topic in NLP and ML over recent years, but so far no commonly accepted definitions of even basic terms or concepts have emerged. The definitions proposed within NLP/ML not only disagree with each other; they are also not aligned with standard scientific definitions. This article examines the standard definitions of repeatability and reproducibility provided by the meta-science of metrology, explores what they imply about how to assess reproducibility, and considers what adopting them would mean for reproducibility assessment in NLP/ML. It turns out that the standard definitions lead directly to a method for assessing reproducibility in quantified terms, one that makes results comparable across multiple reproductions of the same original study as well as across reproductions of different original studies. The article also considers where this method sits in relation to other aspects of NLP work one might wish to assess in the context of reproducibility.
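The abstract does not spell out the quantitative measure the article derives, so the following is only a hedged illustration of the general idea: in metrology, the precision of a set of repeated measurements is often summarized by an unitless statistic such as the coefficient of variation, which makes spread comparable across studies that report scores on different scales. The function name and the example scores below are invented for illustration, not taken from the paper.

```python
from statistics import mean, stdev

def coefficient_of_variation(scores):
    """Spread of repeated measurements as a percentage of their mean.

    Being unitless, the value can be compared across reproductions
    of different studies; smaller values indicate higher precision.
    """
    m = mean(scores)
    if m == 0:
        raise ValueError("CV is undefined for zero-mean measurements")
    return 100 * stdev(scores) / abs(m)

# Hypothetical metric scores from an original study and two reproductions
scores = [27.3, 26.8, 27.9]
print(round(coefficient_of_variation(scores), 2))
```

Because the statistic is relative to the mean, a spread of one point matters more for a metric that hovers around 10 than for one around 90, which is one reason unitless precision measures are favored for cross-study comparison.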
Anthology ID: 2022.cl-4.21
Volume: Computational Linguistics, Volume 48, Issue 4 - December 2022
Month: December
Year: 2022
Address: Cambridge, MA
Venue: CL
Publisher: MIT Press
Pages: 1125–1135
URL: https://aclanthology.org/2022.cl-4.21
DOI: 10.1162/coli_a_00448
Cite (ACL): Anya Belz. 2022. A Metrological Perspective on Reproducibility in NLP*. Computational Linguistics, 48(4):1125–1135.
Cite (Informal): A Metrological Perspective on Reproducibility in NLP* (Belz, CL 2022)
PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/2022.cl-4.21.pdf