HateXScore: A Metric Suite for Evaluating Reasoning Quality in Hate Speech Explanations

Yujia Hu, Roy Ka-Wei Lee


Abstract
Hate speech detection is a key component of content moderation, yet current evaluation frameworks rarely assess why a text is deemed hateful. We introduce HateXScore, a four-component metric suite designed to evaluate the reasoning quality of model explanations. It assesses (i) conclusion explicitness, (ii) faithfulness and causal grounding of quoted spans, (iii) protected group identification (policy-configurable), and (iv) logical consistency among these elements. Evaluated on six diverse hate speech datasets, HateXScore reveals interpretability failures and annotation inconsistencies that are invisible to standard metrics such as Accuracy or F1. Moreover, human evaluation shows strong agreement with HateXScore, validating it as a practical tool for trustworthy and transparent moderation. Disclaimer: This paper contains sensitive content that may be disturbing to some readers.
Anthology ID:
2026.eacl-long.198
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
4221–4240
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.198/
Cite (ACL):
Yujia Hu and Roy Ka-Wei Lee. 2026. HateXScore: A Metric Suite for Evaluating Reasoning Quality in Hate Speech Explanations. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4221–4240, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
HateXScore: A Metric Suite for Evaluating Reasoning Quality in Hate Speech Explanations (Hu & Lee, EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.198.pdf