Abstract
In this paper, we describe the PUM team's entry to SemEval-2020 Task 12. Our solution leverages two well-known pretrained models used in natural language processing, BERT and XLNet, which achieve state-of-the-art results in multiple NLP tasks. The models were fine-tuned for each subtask separately, and features taken from their hidden layers were combined and fed into a fully connected neural network. The model using aggregated Transformer features can serve as a powerful tool for the offensive language identification problem. Our team was ranked 7th out of 40 in Sub-task C, Offense target identification (64.727% macro F1-score), and 64th out of 85 in Sub-task A, Offensive language identification (89.726% F1-score).
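The abstract describes the architecture only at a high level. The snippet below is a minimal sketch of that idea, not the authors' released code: hidden-layer features from fine-tuned BERT and XLNet are pooled, concatenated, and passed through a fully connected classifier. The model checkpoints, mean pooling over the last hidden state, and the hidden width of 256 are illustrative assumptions.

```python
# Sketch of aggregating two Transformers' features for classification.
# Pooling strategy, checkpoints, and layer sizes are assumptions, not
# the authors' exact configuration.
import torch
import torch.nn as nn
from transformers import AutoModel

class AggregatedTransformerClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.xlnet = AutoModel.from_pretrained("xlnet-base-cased")
        feat_dim = self.bert.config.hidden_size + self.xlnet.config.hidden_size
        # Fully connected head over the concatenated features.
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, num_classes),
        )

    @staticmethod
    def _mean_pool(hidden, mask):
        # Average token embeddings, ignoring padding positions.
        mask = mask.unsqueeze(-1).float()
        return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

    def forward(self, bert_inputs, xlnet_inputs):
        # Each model tokenizes its own input; features are pooled
        # from the last hidden layer and concatenated.
        h_bert = self.bert(**bert_inputs).last_hidden_state
        h_xlnet = self.xlnet(**xlnet_inputs).last_hidden_state
        feats = torch.cat(
            [self._mean_pool(h_bert, bert_inputs["attention_mask"]),
             self._mean_pool(h_xlnet, xlnet_inputs["attention_mask"])],
            dim=-1,
        )
        return self.classifier(feats)
```

In this sketch, a separate head would be trained per subtask (A and C use different label sets), mirroring the paper's per-subtask fine-tuning.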
- Anthology ID:
- 2020.semeval-1.210
- Volume:
- Proceedings of the Fourteenth Workshop on Semantic Evaluation
- Month:
- December
- Year:
- 2020
- Address:
- Barcelona (online)
- Editors:
- Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
- Venue:
- SemEval
- SIG:
- SIGLEX
- Publisher:
- International Committee for Computational Linguistics
- Pages:
- 1615–1621
- URL:
- https://aclanthology.org/2020.semeval-1.210
- DOI:
- 10.18653/v1/2020.semeval-1.210
- Cite (ACL):
- Piotr Janiszewski, Mateusz Skiba, and Urszula Walińska. 2020. PUM at SemEval-2020 Task 12: Aggregation of Transformer-based Models’ Features for Offensive Language Recognition. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1615–1621, Barcelona (online). International Committee for Computational Linguistics.
- Cite (Informal):
- PUM at SemEval-2020 Task 12: Aggregation of Transformer-based Models’ Features for Offensive Language Recognition (Janiszewski et al., SemEval 2020)
- PDF:
- https://preview.aclanthology.org/emnlp22-frontmatter/2020.semeval-1.210.pdf
- Data
- OLID