ANDES at SemEval-2020 Task 12: A Jointly-trained BERT Multilingual Model for Offensive Language Detection

Juan Manuel Pérez, Aymé Arango, Franco Luque


Abstract
This paper describes our participation in SemEval-2020 Task 12: Multilingual Offensive Language Detection. We jointly trained a single model by fine-tuning Multilingual BERT to tackle the task across all the proposed languages: English, Danish, Turkish, Greek and Arabic. Our single model achieved competitive results, performing close to the top systems despite sharing the same parameters across all languages. Zero-shot and few-shot experiments were also conducted to analyze transfer performance among these languages. We make our code public for further research.
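
The joint training described in the abstract can be illustrated with a minimal sketch, assuming the Hugging Face transformers library and the bert-base-multilingual-cased checkpoint; the example texts and binary label convention here are hypothetical placeholders, and the authors' actual training code is in the repository linked below. The key point is that examples from every language pass through one shared encoder and classification head.

    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    # Hypothetical mixed-language batch; real data loading is handled by
    # the authors' repository (finiteautomata/offenseval2020).
    texts = [
        "You are an idiot",   # English
        "Sen bir aptalsın",   # Turkish
    ]
    labels = torch.tensor([1, 1])  # assumed convention: 1 = offensive, 0 = not

    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-multilingual-cased", num_labels=2
    )

    # One joint fine-tuning step: all languages update the same parameters.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()

Because the model is shared, the same forward pass also supports the zero-shot setting mentioned in the abstract: a model fine-tuned on some languages can be evaluated directly on held-out languages with no code changes.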
Anthology ID:
2020.semeval-1.199
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurélie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
1524–1531
URL:
https://aclanthology.org/2020.semeval-1.199
DOI:
10.18653/v1/2020.semeval-1.199
Cite (ACL):
Juan Manuel Pérez, Aymé Arango, and Franco Luque. 2020. ANDES at SemEval-2020 Task 12: A Jointly-trained BERT Multilingual Model for Offensive Language Detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1524–1531, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
ANDES at SemEval-2020 Task 12: A Jointly-trained BERT Multilingual Model for Offensive Language Detection (Pérez et al., SemEval 2020)
PDF:
https://preview.aclanthology.org/naacl24-info/2020.semeval-1.199.pdf
Code:
finiteautomata/offenseval2020
Data:
OLID