Hafiz Muhammad Umer
2025
SafeSpeech: A Comprehensive and Interactive Tool for Analysing Sexist and Abusive Language in Conversations
Xingwei Tan | Chen Lyu | Hafiz Muhammad Umer | Sahrish Khan | Mahathi Parvatham | Lois Arthurs | Simon Cullen | Shelley Wilson | Arshad Jhumka | Gabriele Pergola
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations)
Detecting toxic language, including sexism, harassment, and abusive behaviour, remains a critical challenge, particularly in its subtle and context-dependent forms. Existing approaches largely focus on isolated message-level classification, overlooking toxicity that emerges across conversational contexts. To promote and enable future research in this direction, we introduce *SafeSpeech*, a comprehensive platform for toxic content detection and analysis that bridges message-level and conversation-level insights. The platform integrates fine-tuned classifiers and large language models (LLMs) to enable multi-granularity detection, toxic-aware conversation summarization, and persona profiling. *SafeSpeech* also incorporates explainability mechanisms, such as perplexity gain analysis, to highlight the linguistic elements driving predictions. Evaluations on benchmark datasets, including EDOS, OffensEval, and HatEval, demonstrate that the platform reproduces state-of-the-art performance across multiple tasks, including fine-grained sexism detection.
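To give a sense of the perplexity-gain idea mentioned in the abstract, below is a minimal sketch (not the authors' implementation): each word in a message is ablated in turn and scored by how much the removal shifts a causal language model's perplexity, so the highest-scoring words are the ones the model leans on most. The model name `gpt2` and the helper functions are illustrative stand-ins; SafeSpeech may use a different or fine-tuned model and a different scoring scheme.

```python
# Hypothetical sketch of perplexity-gain style token attribution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in; the actual system may use a fine-tuned LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the causal LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def perplexity_gain(text: str):
    """Score each word by the perplexity change caused by deleting it."""
    words = text.split()
    base = perplexity(text)
    scores = []
    for i, word in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        scores.append((word, perplexity(ablated) - base))
    # Words whose removal changes perplexity the most are the strongest drivers.
    return sorted(scores, key=lambda pair: abs(pair[1]), reverse=True)

print(perplexity_gain("you people are all the same and worthless"))
```

In this sketch the attribution is purely ablation-based; a production system could instead compare perplexity under toxic-tuned and neutral models, but that variant is not described in the abstract and is not shown here.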