KS@LTH at SemEval-2020 Task 12: Fine-tuning Multi- and Monolingual Transformer Models for Offensive Language Detection

Kasper Socha


Abstract
This paper describes the KS@LTH system for SemEval-2020 Task 12 OffensEval2: Multilingual Offensive Language Identification in Social Media. We compare mono- and multilingual models based on fine-tuning pre-trained transformer models for offensive language identification in Arabic, Greek, English and Turkish. For Danish, we explore the possibility of fine-tuning a model pre-trained on a similar language, Swedish, as well as cross-lingual training together with English.
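The abstract describes fine-tuning pre-trained transformer models for binary offensive-language classification. As a rough, hedged illustration of how such fine-tuning is commonly set up with the HuggingFace transformers library, the sketch below is not the paper's exact configuration: the checkpoint name (bert-base-multilingual-cased), the toy data, and the hyperparameters are assumptions made for illustration only.

```python
# Minimal sketch (assumptions, not the paper's setup): fine-tune a pre-trained
# multilingual transformer for binary offensive-language classification.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-multilingual-cased"  # assumed multilingual checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy data standing in for OffensEval-style posts with OFF/NOT labels.
train = Dataset.from_dict({
    "text": ["example post one", "example post two"],
    "label": [0, 1],  # 0 = NOT offensive, 1 = OFFensive
})

def tokenize(batch):
    # Truncate/pad posts to a fixed length before feeding them to the model.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="offensive-lang-model",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,  # illustrative values, not from the paper
)

Trainer(model=model, args=args, train_dataset=train).train()
```

The same pattern would apply to a monolingual checkpoint, or to cross-lingual training by concatenating data from two languages into one training set.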
Anthology ID:
2020.semeval-1.270
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurélie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
2045–2053
URL:
https://aclanthology.org/2020.semeval-1.270
DOI:
10.18653/v1/2020.semeval-1.270
Cite (ACL):
Kasper Socha. 2020. KS@LTH at SemEval-2020 Task 12: Fine-tuning Multi- and Monolingual Transformer Models for Offensive Language Detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2045–2053, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
KS@LTH at SemEval-2020 Task 12: Fine-tuning Multi- and Monolingual Transformer Models for Offensive Language Detection (Socha, SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.270.pdf