High Quality ELMo Embeddings for Seven Less-Resourced Languages

Matej Ulčar, Marko Robnik-Šikonja


Abstract
Recent results show that deep neural networks using contextual embeddings significantly outperform non-contextual embeddings on a majority of text classification tasks. We offer precomputed embeddings from the popular contextual ELMo model for seven languages: Croatian, Estonian, Finnish, Latvian, Lithuanian, Slovenian, and Swedish. We demonstrate that the quality of embeddings strongly depends on the size of the training set and show that existing publicly available ELMo embeddings for the listed languages can be improved. We train new ELMo embeddings on much larger training sets and show their advantage over baseline non-contextual fastText embeddings. For evaluation, we use two benchmarks: the analogy task and the named entity recognition (NER) task.
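The analogy task used in the evaluation scores embeddings by vector arithmetic: for a relation a : b :: c : ?, the answer is the vocabulary word closest (by cosine similarity) to b - a + c. A minimal sketch with toy, hand-crafted vectors (illustrative only; the paper evaluates real trained embeddings, and the function names here are hypothetical):

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve_analogy(emb, a, b, c):
    # Return the word maximising cos(b - a + c, w), excluding the query words.
    target = emb[b] - emb[a] + emb[c]
    scores = {w: cosine(target, v) for w, v in emb.items() if w not in {a, b, c}}
    return max(scores, key=scores.get)

# Toy vectors chosen by hand so that king - man + woman ≈ queen.
emb = {
    "king":  np.array([1.0, 1.0, 0.0]),
    "queen": np.array([1.0, 0.0, 1.0]),
    "man":   np.array([0.0, 1.0, 0.0]),
    "woman": np.array([0.0, 0.0, 1.0]),
}
print(solve_analogy(emb, "man", "king", "woman"))  # prints "queen"
```

An analogy benchmark accuracy is then simply the fraction of such queries where the top-ranked word matches the expected answer.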
Anthology ID:
2020.lrec-1.582
Volume:
Proceedings of the Twelfth Language Resources and Evaluation Conference
Month:
May
Year:
2020
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Note:
Pages:
4731–4738
Language:
English
URL:
https://aclanthology.org/2020.lrec-1.582
Cite (ACL):
Matej Ulčar and Marko Robnik-Šikonja. 2020. High Quality ELMo Embeddings for Seven Less-Resourced Languages. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4731–4738, Marseille, France. European Language Resources Association.
Cite (Informal):
High Quality ELMo Embeddings for Seven Less-Resourced Languages (Ulčar & Robnik-Šikonja, LREC 2020)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2020.lrec-1.582.pdf