Abstract
The aim of this work is to explore the possible limitations of existing methods for cross-language word embeddings evaluation, addressing the lack of correlation between intrinsic and extrinsic cross-language evaluation methods. To test this hypothesis, we construct English-Russian datasets for extrinsic and intrinsic evaluation tasks and compare the performance of five different cross-language models on them. The results show that the scores do not correlate with each other even across different intrinsic benchmarks. We conclude that using human judgments as ground truth for cross-language word embeddings is questionable as long as we do not understand how native speakers process semantics in their cognition.
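A minimal sketch of the kind of check the abstract describes: comparing how two evaluation benchmarks rank the same set of cross-language embedding models via Spearman rank correlation. The model names and scores below are illustrative placeholders, not the paper's actual data.

```python
# Illustrative sketch (not the paper's data): do two benchmarks rank the
# same set of cross-language embedding models consistently?
from scipy.stats import spearmanr

# Hypothetical scores for five models on two benchmarks (placeholder values).
models = ["model_a", "model_b", "model_c", "model_d", "model_e"]
intrinsic_scores = [0.61, 0.55, 0.72, 0.48, 0.66]  # e.g., cross-language word similarity
extrinsic_scores = [0.40, 0.58, 0.37, 0.52, 0.45]  # e.g., a downstream task

rho, p_value = spearmanr(intrinsic_scores, extrinsic_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
# A low or negative rho means the two benchmarks disagree on which model
# is "best" -- the kind of mismatch the paper reports.
```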
- Anthology ID:
- S18-2010
- Volume:
- Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics
- Month:
- June
- Year:
- 2018
- Address:
- New Orleans, Louisiana
- Venue:
- SemEval
- SIGs:
- SIGSEM | SIGLEX
- Publisher:
- Association for Computational Linguistics
- Pages:
- 94–100
- URL:
- https://aclanthology.org/S18-2010
- DOI:
- 10.18653/v1/S18-2010
- Cite (ACL):
- Amir Bakarov, Roman Suvorov, and Ilya Sochenkov. 2018. The Limitations of Cross-language Word Embeddings Evaluation. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 94–100, New Orleans, Louisiana. Association for Computational Linguistics.
- Cite (Informal):
- The Limitations of Cross-language Word Embeddings Evaluation (Bakarov et al., SemEval 2018)
- PDF:
- https://preview.aclanthology.org/nodalida-main-page/S18-2010.pdf
- Code:
- bakarov/cross-lang-embeddings