IREL at SemEval-2023 Task 11: User Conditioned Modelling for Toxicity Detection in Subjective Tasks

Ankita Maity, Pavan Kandru, Bhavyajeet Singh, Kancharla Aditya Hari, Vasudeva Varma


Abstract
This paper describes our system for SemEval-2023 Task 11, Learning With Disagreements (Le-Wi-Di). The task is subjective since it deals with detecting hate speech, misogyny and offensive language, so disagreement among annotators is expected. We experiment with different settings, such as loss functions specific to subjective tasks, and incorporate anonymized annotator-specific information to help capture the level of disagreement. We perform an in-depth analysis of the performance discrepancies across these modelling choices. Our system achieves a cross-entropy of 0.58, 4.01 and 3.70 on the test sets of HS-Brexit, ArMIS and MD-Agreement, respectively. Our code implementation is publicly available.
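The cross-entropy scores reported above correspond to the Le-Wi-Di "soft" evaluation, which compares predicted probabilities against soft gold labels derived from the annotators' vote shares. Below is a minimal, hedged sketch of that metric; the function name and array layout are assumptions for illustration, not the task organizers' or authors' implementation.

```python
import numpy as np

def soft_cross_entropy(gold_soft, pred_soft, eps=1e-12):
    """Average cross-entropy between gold soft labels (per-class annotator
    vote shares) and predicted class probabilities. Lower is better."""
    gold = np.asarray(gold_soft, dtype=float)
    pred = np.clip(np.asarray(pred_soft, dtype=float), eps, 1.0)  # avoid log(0)
    return float(np.mean(-np.sum(gold * np.log(pred), axis=1)))

# Example: 3 of 4 annotators labelled an item as toxic -> gold soft label [0.25, 0.75]
print(soft_cross_entropy([[0.25, 0.75]], [[0.3, 0.7]]))  # ~0.57
```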
Anthology ID:
2023.semeval-1.294
Volume:
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Giovanni Da San Martino, Harish Tayyar Madabushi, Ritesh Kumar, Elisa Sartori
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
2133–2136
URL:
https://aclanthology.org/2023.semeval-1.294
DOI:
10.18653/v1/2023.semeval-1.294
Cite (ACL):
Ankita Maity, Pavan Kandru, Bhavyajeet Singh, Kancharla Aditya Hari, and Vasudeva Varma. 2023. IREL at SemEval-2023 Task 11: User Conditioned Modelling for Toxicity Detection in Subjective Tasks. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), pages 2133–2136, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
IREL at SemEval-2023 Task 11: User Conditioned Modelling for Toxicity Detection in Subjective Tasks (Maity et al., SemEval 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2023.semeval-1.294.pdf