Replicability under Near-Perfect Conditions – A Case-Study from Automatic Summarization

Margot Mieskes


Abstract
Replication of research results has become increasingly important in Natural Language Processing. Nevertheless, we still rely on results reported in the literature for comparison. Additionally, elements of an experimental setup are not always completely reported. This includes, but is not limited to, specific parameters used or omitted implementation details. In our experiment, based on two frequently used data sets from the domain of automatic summarization and the seemingly full disclosure of research artifacts, we examine how well reported results can be replicated and which elements influence the success or failure of replication. Our results indicate that publishing research artifacts is far from sufficient, and that publishing all relevant parameters in all possible detail is crucial.
Anthology ID:
2022.insights-1.23
Volume:
Proceedings of the Third Workshop on Insights from Negative Results in NLP
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Shabnam Tafreshi, João Sedoc, Anna Rogers, Aleksandr Drozd, Anna Rumshisky, Arjun Akula
Venue:
insights
Publisher:
Association for Computational Linguistics
Pages:
165–171
URL:
https://aclanthology.org/2022.insights-1.23
DOI:
10.18653/v1/2022.insights-1.23
Cite (ACL):
Margot Mieskes. 2022. Replicability under Near-Perfect Conditions – A Case-Study from Automatic Summarization. In Proceedings of the Third Workshop on Insights from Negative Results in NLP, pages 165–171, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Replicability under Near-Perfect Conditions – A Case-Study from Automatic Summarization (Mieskes, insights 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2022.insights-1.23.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-3/2022.insights-1.23.mp4