Abstract
It is standard practice in speech & language technology to rank systems according to their performance on a test set held out for evaluation. However, few researchers apply statistical tests to determine whether differences in performance are likely to arise by chance, and few examine the stability of system ranking across multiple training-testing splits. We conduct replication and reproduction experiments with nine part-of-speech taggers published between 2000 and 2018, each of which claimed state-of-the-art performance on a widely-used “standard split”. While we replicate results on the standard split, we fail to reliably reproduce some rankings when we repeat this analysis with randomly generated training-testing splits. We argue that randomly generated splits should be used in system evaluation.
- Anthology ID:
- P19-1267
- Volume:
- Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
- Month:
- July
- Year:
- 2019
- Address:
- Florence, Italy
- Editors:
- Anna Korhonen, David Traum, Lluís Màrquez
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 2786–2791
- URL:
- https://aclanthology.org/P19-1267
- DOI:
- 10.18653/v1/P19-1267
- Award:
- Outstanding Paper
- Cite (ACL):
- Kyle Gorman and Steven Bedrick. 2019. We Need to Talk about Standard Splits. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2786–2791, Florence, Italy. Association for Computational Linguistics.
- Cite (Informal):
- We Need to Talk about Standard Splits (Gorman & Bedrick, ACL 2019)
- PDF:
- https://preview.aclanthology.org/ingest-acl-2023-videos/P19-1267.pdf
- Code
- kylebgorman/SOTA-taggers
- Data
- Penn Treebank
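The abstract's recommendation can be illustrated with a minimal sketch: evaluate a system on several randomly generated training-testing splits rather than a single standard split, and use a statistical test (here, an exact McNemar's test on paired per-token decisions) to check whether an observed difference between two systems could plausibly arise by chance. The toy corpus, the unigram tagger, and all function names below are hypothetical stand-ins, not the paper's actual systems or experimental setup.

```python
# Hypothetical sketch (not the paper's code): accuracy of a trivial
# most-frequent-tag unigram tagger across K random splits, plus an
# exact two-sided McNemar's test for comparing paired system outputs.
import math
import random
from collections import Counter, defaultdict

def train_unigram_tagger(sentences):
    """Most-frequent-tag-per-word baseline; returns (model, default tag)."""
    counts = defaultdict(Counter)
    for sent in sentences:
        for word, tag in sent:
            counts[word][tag] += 1
    overall = Counter(tag for sent in sentences for _, tag in sent)
    default = overall.most_common(1)[0][0]
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}, default

def accuracy(model, default, sentences):
    correct = total = 0
    for sent in sentences:
        for word, tag in sent:
            correct += (model.get(word, default) == tag)
            total += 1
    return correct / total

def random_split_accuracies(corpus, k=5, train_frac=0.9, seed=0):
    """Train/evaluate on k randomly generated splits of the corpus."""
    rng = random.Random(seed)
    accs = []
    for _ in range(k):
        sents = corpus[:]
        rng.shuffle(sents)
        cut = int(train_frac * len(sents))
        model, default = train_unigram_tagger(sents[:cut])
        accs.append(accuracy(model, default, sents[cut:]))
    return accs

def mcnemar_exact(b, c):
    """Exact two-sided McNemar's test on the discordant counts:
    b = tokens only system A got right, c = tokens only system B got right."""
    n = b + c
    k = min(b, c)
    p = 2 * sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)

# Toy tagged corpus of (word, tag) sentences.
corpus = [
    [("the", "DT"), ("dog", "NN"), ("barks", "VBZ")],
    [("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ")],
    [("a", "DT"), ("dog", "NN"), ("sleeps", "VBZ")],
    [("the", "DT"), ("bird", "NN"), ("sings", "VBZ")],
] * 5

accs = random_split_accuracies(corpus, k=5)
print("accuracies across random splits:", accs)
print("McNemar p (b=8, c=1):", mcnemar_exact(8, 1))
```

On real data, reporting the spread of accuracies across splits (rather than one number from the standard split) makes it visible when a claimed ranking between two taggers is unstable; the McNemar p-value then indicates whether a difference on any one split is statistically distinguishable from chance.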