Lost in Evaluation: Misleading Benchmarks for Bilingual Dictionary Induction

Yova Kementchedjhieva, Mareike Hartmann, Anders Søgaard


Abstract
The task of bilingual dictionary induction (BDI) is commonly used for intrinsic evaluation of cross-lingual word embeddings. The largest dataset for BDI was generated automatically, so its quality is dubious. We study the composition and quality of the test sets for five diverse languages from this dataset, with concerning findings: (1) a quarter of the data consists of proper nouns, which can hardly be indicative of BDI performance, and (2) there are pervasive gaps in the gold-standard targets. These issues appear to affect the ranking between cross-lingual embedding systems on individual languages, and the overall degree to which the systems differ in performance. With proper nouns removed from the data, the margin between the top two systems included in the study grows from 3.4% to 17.2%. Manual verification of the predictions, on the other hand, reveals that gaps in the gold-standard targets artificially inflate the margin between the two systems on English to Bulgarian BDI from 0.1% to 6.7%. We thus suggest that future research either avoids drawing conclusions from quantitative results on this BDI dataset, or accompanies such evaluation with rigorous error analysis.
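To illustrate the kind of effect the abstract describes, the sketch below computes precision@1 (the standard BDI metric) on a toy dictionary with and without proper nouns. The capitalization heuristic, the helper names, and the toy word pairs are all illustrative assumptions for this sketch, not the paper's actual filtering procedure or data.

```python
# Hypothetical sketch: precision@1 for bilingual dictionary induction (BDI),
# evaluated with and without proper-noun entries. The capitalization
# heuristic and toy data are assumptions made for illustration only.

def precision_at_1(gold, predictions):
    """gold maps each source word to a set of accepted targets;
    predictions maps each source word to the system's top-1 guess."""
    hits = sum(1 for src, tgt in predictions.items() if tgt in gold.get(src, set()))
    return hits / len(predictions)

def is_proper_noun(word):
    # Crude heuristic: treat capitalized entries as proper nouns.
    return word[:1].isupper()

gold = {
    "London": {"London"},  # proper noun: often identical across languages
    "dog": {"Hund"},
    "house": {"Haus"},
}
preds = {"London": "London", "dog": "Hund", "house": "Dach"}

full_score = precision_at_1(gold, preds)  # 2/3: the copied proper noun counts as a hit
filtered_gold = {w: t for w, t in gold.items() if not is_proper_noun(w)}
filtered_preds = {w: p for w, p in preds.items() if w in filtered_gold}
filtered_score = precision_at_1(filtered_gold, filtered_preds)  # 1/2
print(full_score, filtered_score)
```

Because proper nouns are frequently near-identical strings across languages, they are easy for any system to "translate", so removing them can lower absolute scores while widening the gap between systems, as reported in the abstract.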
Anthology ID:
D19-1328
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Kentaro Inui, Jing Jiang, Vincent Ng, Xiaojun Wan
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3336–3341
URL:
https://aclanthology.org/D19-1328
DOI:
10.18653/v1/D19-1328
Cite (ACL):
Yova Kementchedjhieva, Mareike Hartmann, and Anders Søgaard. 2019. Lost in Evaluation: Misleading Benchmarks for Bilingual Dictionary Induction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3336–3341, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Lost in Evaluation: Misleading Benchmarks for Bilingual Dictionary Induction (Kementchedjhieva et al., EMNLP-IJCNLP 2019)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/D19-1328.pdf
Attachment:
 D19-1328.Attachment.zip
Code
 coastalcph/MUSE_dicos