Abstract
Behavioural testing—verifying system capabilities by validating human-designed input-output pairs—is an alternative evaluation method for natural language processing systems, proposed to address the shortcomings of the standard approach: computing metrics on held-out data. While behavioural tests capture human prior knowledge and insights, there has been little exploration of how to leverage them for model training and development. With this in mind, we explore behaviour-aware learning by examining several fine-tuning schemes using HateCheck, a suite of functional tests for hate speech detection systems. To address potential pitfalls of training on data originally intended for evaluation, we train and evaluate models on different configurations of HateCheck by holding out categories of test cases, which enables us to estimate performance on potentially overlooked system properties. The fine-tuning procedure led to improvements in the classification accuracy of held-out functionalities and identity groups, suggesting that models can potentially generalise to overlooked functionalities. However, performance on held-out functionality classes and i.i.d. hate speech detection data decreased, which indicates that generalisation occurs mostly across functionalities from the same class and that the procedure led to overfitting to the HateCheck data distribution.
- Anthology ID: 2022.nlppower-1.8
- Volume: Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Editors: Tatiana Shavrina, Vladislav Mikhailov, Valentin Malykh, Ekaterina Artemova, Oleg Serikov, Vitaly Protasov
- Venue: nlppower
- Publisher: Association for Computational Linguistics
- Pages: 75–83
- URL: https://aclanthology.org/2022.nlppower-1.8
- DOI: 10.18653/v1/2022.nlppower-1.8
- Cite (ACL): Pedro Henrique Luz de Araujo and Benjamin Roth. 2022. Checking HateCheck: a cross-functional analysis of behaviour-aware learning for hate speech detection. In Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP, pages 75–83, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): Checking HateCheck: a cross-functional analysis of behaviour-aware learning for hate speech detection (Henrique Luz de Araujo & Roth, nlppower 2022)
- PDF: https://preview.aclanthology.org/emnlp22-frontmatter/2022.nlppower-1.8.pdf
- Code: peluz/checking-hatecheck-code
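
To make the cross-functional hold-out setup from the abstract concrete, the sketch below fine-tunes a binary hate speech classifier on HateCheck while withholding entire functionality categories for evaluation. It is a minimal illustration under stated assumptions, not the authors' released code (see peluz/checking-hatecheck-code): the CSV path, the column names (`test_case`, `functionality`, `label_gold`), the held-out functionality names, and the choice of `bert-base-uncased` are all assumptions.

```python
# Hypothetical sketch of the cross-functional hold-out setup described in the
# abstract. The file path, column names ("test_case", "functionality",
# "label_gold"), functionality names and model are assumptions, not the
# authors' actual configuration.
import numpy as np
import pandas as pd
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Load HateCheck test cases (assumed local CSV export of the test suite).
df = pd.read_csv("hatecheck_cases.csv")
df["label"] = (df["label_gold"] == "hateful").astype(int)

# Withhold entire functionality categories: the model never sees them during
# fine-tuning, so evaluation on them estimates generalisation to overlooked
# system properties rather than memorisation of seen templates.
held_out = {"slur_homonym_nh", "profanity_nh"}  # assumed functionality names
train_df = df[~df["functionality"].isin(held_out)]
eval_df = df[df["functionality"].isin(held_out)]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["test_case"], truncation=True,
                     padding="max_length", max_length=128)

train_ds = Dataset.from_pandas(train_df, preserve_index=False).map(tokenize, batched=True)
eval_ds = Dataset.from_pandas(eval_df, preserve_index=False).map(tokenize, batched=True)

def accuracy(eval_pred):
    preds = np.argmax(eval_pred.predictions, axis=-1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hatecheck-ft", num_train_epochs=3,
                           per_device_train_batch_size=16, learning_rate=2e-5),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate())  # accuracy on the held-out functionalities
```

Accuracy on the held-out split then estimates how well fine-tuning on the remaining functionalities transfers to categories the model never saw, mirroring the cross-functional analysis described in the abstract.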