A Measure for Transparent Comparison of Linguistic Diversity in Multilingual NLP Data Sets

Tanja Samardzic, Ximena Gutierrez, Christian Bentz, Steven Moran, Olga Pelloni


Abstract
Typologically diverse benchmarks are increasingly created to track the progress achieved in multilingual NLP. Linguistic diversity of these data sets is typically measured as the number of languages or language families included in the sample, but such measures do not consider structural properties of the included languages. In this paper, we propose assessing linguistic diversity of a data set against a reference language sample as a means of maximising linguistic diversity in the long run. We represent languages as sets of features and apply a version of the Jaccard index suitable for comparing sets of measures. In addition to the features extracted from typological databases, we propose an automatic text-based measure, which can be used as a means of overcoming the well-known problem of data sparsity in manually collected features. Our diversity score is interpretable in terms of linguistic features and can identify the types of languages that are not represented in a data set. Using our method, we analyse a range of popular multilingual data sets (UD, Bible100, mBERT, XTREME, XGLUE, XNLI, XCOPA, TyDiQA, XQuAD). In addition to ranking these data sets, we find, for example, that (poly)synthetic languages are missing in almost all of them.
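The abstract describes comparing a data set's language sample to a reference sample via a Jaccard-style index over linguistic features. The following is a minimal illustrative sketch of that idea using a plain Jaccard index over (feature, value) pairs; it is not the paper's exact formulation, and the per-language feature inventories below are hypothetical placeholders.

    # Minimal sketch: score a data set's typological coverage against a reference
    # sample with a plain Jaccard index over attested (feature, value) pairs.
    # Feature inventories are toy, hypothetical examples.

    def feature_set(languages, features_by_language):
        """Union of (feature, value) pairs attested across a language sample."""
        covered = set()
        for lang in languages:
            covered.update(features_by_language.get(lang, set()))
        return covered

    def jaccard(a, b):
        """Jaccard index between two sets: |A & B| / |A | B|."""
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)

    # Hypothetical toy inventories of (feature, value) pairs per language.
    features_by_language = {
        "eng": {("word_order", "SVO"), ("morphology", "analytic")},
        "tur": {("word_order", "SOV"), ("morphology", "agglutinative")},
        "grn": {("word_order", "SVO"), ("morphology", "polysynthetic")},
    }

    reference_sample = ["eng", "tur", "grn"]   # diverse reference sample
    dataset_sample = ["eng", "tur"]            # languages covered by the data set

    ref = feature_set(reference_sample, features_by_language)
    dat = feature_set(dataset_sample, features_by_language)

    print(f"Diversity score (Jaccard vs. reference): {jaccard(dat, ref):.2f}")
    print("Missing feature values:", ref - dat)   # e.g. polysynthetic morphology

A score of 1.0 would mean the data set covers every feature value attested in the reference sample; the set difference makes the score interpretable by naming the missing types, in the spirit of the paper's finding about (poly)synthetic languages.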
Anthology ID:
2024.findings-naacl.213
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3367–3382
URL:
https://aclanthology.org/2024.findings-naacl.213
Cite (ACL):
Tanja Samardzic, Ximena Gutierrez, Christian Bentz, Steven Moran, and Olga Pelloni. 2024. A Measure for Transparent Comparison of Linguistic Diversity in Multilingual NLP Data Sets. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3367–3382, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
A Measure for Transparent Comparison of Linguistic Diversity in Multilingual NLP Data Sets (Samardzic et al., Findings 2024)
PDF:
https://preview.aclanthology.org/naacl24-info/2024.findings-naacl.213.pdf
Copyright:
2024.findings-naacl.213.copyright.pdf