Abstract
Toxicity is pervasive in social media and poses a major threat to the health of online communities. Pre-trained language models, which have achieved state-of-the-art results on many NLP tasks, have transformed how natural language processing is approached. However, by its very nature, pre-training is unlikely to capture task-specific statistical information or domain-specific knowledge. Additionally, most implementations of these models do not employ conditional random fields (CRFs), a method for jointly classifying the tokens of a sequence. We show that extending a pre-trained model with task- and domain-specific information, together with a CRF output layer, improves performance on the Toxic Spans Detection task at SemEval-2021, achieving a score within 4 percentage points of the top-performing team.
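The abstract describes placing a conditional random field on top of a pre-trained language model so that token labels are predicted jointly rather than independently. The sketch below illustrates one common way to wire this up; it is not the authors' implementation (see the linked repository), and the encoder name, tag count, and use of the `pytorch-crf` package are assumptions made here for illustration.

```python
# Minimal sketch: pre-trained encoder + linear projection + CRF for toxic-span
# token tagging. Illustrative only; not the code from the paper's repository.
# Assumes the `transformers` and `pytorch-crf` packages are installed.
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF


class ToxicSpanTagger(nn.Module):
    """Encoder + CRF head that tags each token as toxic or non-toxic."""

    def __init__(self, model_name: str = "bert-base-cased", num_tags: int = 2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.classifier(hidden)   # per-token tag scores
        mask = attention_mask.bool()          # ignore padding positions
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence under the CRF.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi decoding returns the jointly best tag sequence per example.
        return self.crf.decode(emissions, mask=mask)
```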
- Anthology ID:
- 2021.semeval-1.28
- Volume:
- Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
- Month:
- August
- Year:
- 2021
- Address:
- Online
- Editors:
- Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, Xiaodan Zhu
- Venue:
- SemEval
- SIG:
- SIGLEX
- Publisher:
- Association for Computational Linguistics
- Pages:
- 243–248
- URL:
- https://aclanthology.org/2021.semeval-1.28
- DOI:
- 10.18653/v1/2021.semeval-1.28
- Cite (ACL):
- Erik Yan and Harish Tayyar Madabushi. 2021. UoB at SemEval-2021 Task 5: Extending Pre-Trained Language Models to Include Task and Domain-Specific Information for Toxic Span Prediction. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 243–248, Online. Association for Computational Linguistics.
- Cite (Informal):
- UoB at SemEval-2021 Task 5: Extending Pre-Trained Language Models to Include Task and Domain-Specific Information for Toxic Span Prediction (Yan & Tayyar Madabushi, SemEval 2021)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-1/2021.semeval-1.28.pdf
- Code:
- erikdyan/toxic_span_detection