Slovene SuperGLUE Benchmark: Translation and Evaluation

Aleš Žagar, Marko Robnik-Šikonja


Abstract
We present the SuperGLUE benchmark adapted and translated into Slovene using a combination of human and machine translation. We describe the translation process and the problems arising from differences in morphology and grammar. We evaluate the translated datasets in several modes: monolingual, cross-lingual, and multilingual, taking into account the differences between machine- and human-translated training sets. The results show that the monolingual Slovene SloBERTa model is superior to massively multilingual and trilingual BERT models, although the latter also show good cross-lingual performance on certain tasks. The performance of Slovene models still lags behind the best English models.
Anthology ID:
2022.lrec-1.221
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
2058–2065
URL:
https://aclanthology.org/2022.lrec-1.221
Cite (ACL):
Aleš Žagar and Marko Robnik-Šikonja. 2022. Slovene SuperGLUE Benchmark: Translation and Evaluation. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2058–2065, Marseille, France. European Language Resources Association.
Cite (Informal):
Slovene SuperGLUE Benchmark: Translation and Evaluation (Žagar & Robnik-Šikonja, LREC 2022)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2022.lrec-1.221.pdf