Abstract
Against a background of growing interest in reproducibility in NLP and ML, and as part of an ongoing research programme designed to develop theory and practice of reproducibility assessment in NLP, we organised the second shared task on reproducibility of evaluations in NLG, ReproGen 2022. This paper describes the shared task, summarises results from the reproduction studies submitted, and provides further comparative analysis of the results. Out of six initial team registrations, we received submissions from five teams. Meta-analysis of the five reproduction studies revealed varying degrees of reproducibility, and allowed further tentative conclusions about what types of evaluation tend to have better reproducibility.
- Anthology ID: 2022.inlg-genchal.8
- Volume: Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
- Month: July
- Year: 2022
- Address: Waterville, Maine, USA and virtual meeting
- Editors: Samira Shaikh, Thiago Ferreira, Amanda Stent
- Venue: INLG
- SIG: SIGGEN
- Publisher: Association for Computational Linguistics
- Pages: 43–51
- URL: https://aclanthology.org/2022.inlg-genchal.8
- Cite (ACL): Anya Belz, Anastasia Shimorina, Maja Popović, and Ehud Reiter. 2022. The 2022 ReproGen Shared Task on Reproducibility of Evaluations in NLG: Overview and Results. In Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges, pages 43–51, Waterville, Maine, USA and virtual meeting. Association for Computational Linguistics.
- Cite (Informal): The 2022 ReproGen Shared Task on Reproducibility of Evaluations in NLG: Overview and Results (Belz et al., INLG 2022)
- PDF: https://preview.aclanthology.org/nschneid-patch-2/2022.inlg-genchal.8.pdf