OLEA: Tool and Infrastructure for Offensive Language Error Analysis in English

Marie Grace, Jay Seabrum, Dananjay Srinivas, Alexis Palmer


Abstract
State-of-the-art models for identifying offensive language often fail to generalize over more nuanced or implicit cases of offensive and hateful language. Understanding model performance on complex cases is key for building robust models that are effective in real-world settings. To help researchers efficiently evaluate their models, we introduce OLEA, a diagnostic, open-source, extensible Python library that provides easy-to-use tools for error analysis in the context of detecting offensive language in English. OLEA packages analyses and datasets proposed by prior scholarship, empowering researchers to build effective, explainable and generalizable offensive language classifiers.
Anthology ID: 2023.eacl-demo.24
Volume: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations
Month: May
Year: 2023
Address: Dubrovnik, Croatia
Editors: Danilo Croce, Luca Soldaini
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 209–218
URL: https://aclanthology.org/2023.eacl-demo.24
DOI: 10.18653/v1/2023.eacl-demo.24
Cite (ACL): Marie Grace, Jay Seabrum, Dananjay Srinivas, and Alexis Palmer. 2023. OLEA: Tool and Infrastructure for Offensive Language Error Analysis in English. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 209–218, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal): OLEA: Tool and Infrastructure for Offensive Language Error Analysis in English (Grace et al., EACL 2023)
PDF: https://preview.aclanthology.org/ingest-2024-clasp/2023.eacl-demo.24.pdf
Video: https://preview.aclanthology.org/ingest-2024-clasp/2023.eacl-demo.24.mp4