Abstract
Recent work to enhance data partitioning strategies for more realistic model evaluation faces challenges in providing a clear optimal choice. This study addresses these challenges, focusing on morphological segmentation and synthesizing limitations related to language diversity, adoption of multiple datasets and splits, and detailed model comparisons. Our study leverages data from 19 languages, including ten indigenous or endangered languages, across 10 language families with diverse morphological systems (polysynthetic, fusional, and agglutinative) and different degrees of data availability. We conduct large-scale experimentation with varying-sized combinations of training and evaluation sets as well as new test data. Our results show that, when faced with new test data: (1) models trained from random splits are able to achieve higher numerical scores; (2) model rankings derived from random splits tend to generalize more consistently.
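The sketch below is a generic illustration of the two evaluation regimes the abstract contrasts, not the authors' code: a model is scored once on the evaluation portion of a random split and once on new test data that played no part in the split. The toy lookup "model", the example words, and the split parameters are placeholders chosen only to make the comparison concrete.

```python
# Minimal sketch (not the paper's implementation) of comparing a random
# train/eval split against evaluation on genuinely new test data.
import random

def random_split(pairs, train_frac=0.8, seed=0):
    """Shuffle annotated (word, segmentation) pairs and split into train/eval."""
    rng = random.Random(seed)
    shuffled = pairs[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def accuracy(model, eval_pairs):
    """Exact-match segmentation accuracy of a lookup-style toy model."""
    correct = sum(model.get(word) == seg for word, seg in eval_pairs)
    return correct / len(eval_pairs) if eval_pairs else 0.0

if __name__ == "__main__":
    # Toy annotated data: (surface word, gold segmentation).
    data = [("unhappiness", "un-happi-ness"), ("cats", "cat-s"),
            ("walked", "walk-ed"), ("restart", "re-start")]
    new_test = [("unkindness", "un-kind-ness")]  # unseen at split time

    train, eval_random = random_split(data)
    model = dict(train)  # stand-in for a trained segmenter

    print("random-split eval:", accuracy(model, eval_random))
    print("new-test-data eval:", accuracy(model, new_test))
```

Repeating this comparison over multiple seeds, split sizes, and candidate models would yield the kind of score and ranking comparison the abstract describes, though the paper's actual models and metrics are not reproduced here.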
- Anthology ID: 2024.naacl-long.157
- Volume: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
- Month: June
- Year: 2024
- Address: Mexico City, Mexico
- Editors: Kevin Duh, Helena Gomez, Steven Bethard
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 2851–2864
- URL: https://aclanthology.org/2024.naacl-long.157
- DOI: 10.18653/v1/2024.naacl-long.157
- Cite (ACL): Zoey Liu and Bonnie Dorr. 2024. The Effect of Data Partitioning Strategy on Model Generalizability: A Case Study of Morphological Segmentation. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2851–2864, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal): The Effect of Data Partitioning Strategy on Model Generalizability: A Case Study of Morphological Segmentation (Liu & Dorr, NAACL 2024)
- PDF: https://preview.aclanthology.org/ingest-2024-clasp/2024.naacl-long.157.pdf