Improving Cuneiform Language Identification with BERT

Gabriel Bernier-Colborne, Cyril Goutte, Serge Léger


Abstract
We describe the systems developed by the National Research Council Canada for the Cuneiform Language Identification (CLI) shared task at the 2019 VarDial evaluation campaign. We compare a state-of-the-art baseline relying on character n-grams and a traditional statistical classifier, a voting ensemble of classifiers, and a deep learning approach using a Transformer network. We describe how these systems were trained, and analyze the impact of some preprocessing and model estimation decisions. The deep neural network achieved 77% accuracy on the test data, which turned out to be the best performance at the CLI evaluation, establishing a new state-of-the-art for cuneiform language identification.
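To make the abstract's "character n-grams and a traditional statistical classifier" baseline concrete, here is a minimal sketch of a character n-gram language identifier. Everything here is illustrative: the `NgramNB` class, the Naive Bayes choice, and the toy Latin-letter stand-in data are assumptions for demonstration, not the paper's actual classifier or the cuneiform training data.

```python
from collections import Counter, defaultdict
import math

def char_ngrams(text, n=2):
    # Overlapping character n-grams of a line (for cuneiform, each
    # "character" would be a cuneiform sign).
    return [text[i:i + n] for i in range(len(text) - n + 1)]

class NgramNB:
    """Multinomial Naive Bayes over character n-grams — an illustrative
    stand-in for the statistical baseline (the paper does not specify
    this exact classifier)."""

    def __init__(self, n=2, alpha=1.0):
        self.n, self.alpha = n, alpha
        self.counts = defaultdict(Counter)  # label -> n-gram counts
        self.totals = Counter()             # label -> total n-grams seen
        self.priors = Counter()             # label -> training doc count

    def fit(self, texts, labels):
        for text, label in zip(texts, labels):
            grams = char_ngrams(text, self.n)
            self.counts[label].update(grams)
            self.totals[label] += len(grams)
            self.priors[label] += 1
        self.vocab = {g for c in self.counts.values() for g in c}

    def predict(self, text):
        grams = char_ngrams(text, self.n)
        n_docs = sum(self.priors.values())
        best, best_lp = None, -math.inf
        for label in self.counts:
            # Log prior + Laplace-smoothed log likelihood of each n-gram.
            lp = math.log(self.priors[label] / n_docs)
            for g in grams:
                num = self.counts[label][g] + self.alpha
                den = self.totals[label] + self.alpha * len(self.vocab)
                lp += math.log(num / den)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Toy stand-in data (two made-up "languages"); real CLI data is
# sequences of cuneiform signs labeled with one of seven languages.
clf = NgramNB(n=2)
clf.fit(["abab", "abba", "cdcd", "cddc"], ["X", "X", "Y", "Y"])
print(clf.predict("abab"))  # → X
```

A real system along these lines would typically use higher-order n-grams, TF-IDF weighting, and a discriminative classifier; the BERT-based system that won the task replaces hand-built n-gram features with a Transformer trained on the sign sequences.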
Anthology ID: W19-1402
Volume: Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects
Month: June
Year: 2019
Address: Ann Arbor, Michigan
Editors: Marcos Zampieri, Preslav Nakov, Shervin Malmasi, Nikola Ljubešić, Jörg Tiedemann, Ahmed Ali
Venue: VarDial
Publisher: Association for Computational Linguistics
Pages: 17–25
URL: https://aclanthology.org/W19-1402
DOI: 10.18653/v1/W19-1402
Cite (ACL): Gabriel Bernier-Colborne, Cyril Goutte, and Serge Léger. 2019. Improving Cuneiform Language Identification with BERT. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 17–25, Ann Arbor, Michigan. Association for Computational Linguistics.
Cite (Informal): Improving Cuneiform Language Identification with BERT (Bernier-Colborne et al., VarDial 2019)
PDF: https://preview.aclanthology.org/ingest-bitext-workshop/W19-1402.pdf