UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information

Wah Meng Lim, Harish Tayyar Madabushi

Abstract
Pre-trained language model word representations, such as BERT, have been extremely successful in several Natural Language Processing tasks, significantly improving on the state of the art. This can largely be attributed to their ability to better capture the semantic information contained within a sentence. Several tasks, however, can benefit from information available at the corpus level, such as Term Frequency-Inverse Document Frequency (TF-IDF). In this work, we test the effectiveness of integrating this information with BERT on the task of identifying abuse on social media, and show that doing so does indeed significantly improve performance. We participate in Sub-Task A (abuse detection), where we achieve a score within two points of the top-performing team, and in Sub-Task B (target detection), where we rank 4th of the 44 participating teams.
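The core idea, combining corpus-level TF-IDF features with BERT's sentence-level representation, can be illustrated with a short sketch. This is not the authors' released code: the fusion strategy (concatenating a document's TF-IDF vector with the [CLS] embedding before a linear classifier), the dimensions, the toy data, and all names below are assumptions chosen for illustration.

# Illustrative sketch only: one plausible way to inject corpus-level
# TF-IDF information into a BERT classifier. Not the paper's exact method.
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import BertModel, BertTokenizer

texts = ["you are awful", "have a nice day"]   # toy corpus (hypothetical)
labels = torch.tensor([1, 0])                  # 1 = abusive, 0 = not abusive

# Corpus-level signal: fit TF-IDF over the whole training corpus.
vectorizer = TfidfVectorizer(max_features=512)
tfidf = torch.tensor(vectorizer.fit_transform(texts).toarray(), dtype=torch.float)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

class BertWithTfidf(nn.Module):
    """Concatenate the [CLS] embedding with per-document TF-IDF features."""
    def __init__(self, bert, tfidf_dim, num_labels=2):
        super().__init__()
        self.bert = bert
        self.classifier = nn.Linear(bert.config.hidden_size + tfidf_dim, num_labels)

    def forward(self, input_ids, attention_mask, tfidf_feats):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]      # sentence-level [CLS] vector
        return self.classifier(torch.cat([cls, tfidf_feats], dim=-1))

enc = tokenizer(texts, padding=True, return_tensors="pt")
model = BertWithTfidf(bert, tfidf_dim=tfidf.size(1))
logits = model(enc["input_ids"], enc["attention_mask"], tfidf)
loss = nn.functional.cross_entropy(logits, labels)  # train as usual from here

Concatenation before the final classifier is only one of several plausible fusion points; the paper should be consulted for the exact integration the authors evaluated.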
Anthology ID:
2020.semeval-1.295
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Venue:
SemEval
SIGs:
SIGLEX | SIGSEM
Publisher:
International Committee for Computational Linguistics
Pages:
2216–2221
URL:
https://aclanthology.org/2020.semeval-1.295
DOI:
10.18653/v1/2020.semeval-1.295
Cite (ACL):
Wah Meng Lim and Harish Tayyar Madabushi. 2020. UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2216–2221, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information (Lim & Tayyar Madabushi, SemEval 2020)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2020.semeval-1.295.pdf