Evaluating Gender Bias of Pre-trained Language Models in Natural Language Inference by Considering All Labels

Panatchakorn Anantaprayoon, Masahiro Kaneko, Naoaki Okazaki


Abstract
Discriminatory gender biases have been found in Pre-trained Language Models (PLMs) for multiple languages. In Natural Language Inference (NLI), existing bias evaluation methods have focused on the prediction results of one specific label out of the three labels, such as neutral. However, such evaluation methods can be inaccurate because different types of biased inference are associated with different prediction labels. Addressing this limitation, we propose a bias evaluation method for PLMs, called NLI-CoAL, which considers all three labels of the NLI task. First, we create three evaluation data groups that represent different types of biases. Then, we define a bias measure based on the corresponding label output of each data group. In the experiments, we introduce a meta-evaluation technique for NLI bias measures and use it to confirm that our bias measure can distinguish biased incorrect inferences from non-biased incorrect inferences better than the baseline, resulting in a more accurate bias evaluation. We create the datasets in English, Japanese, and Chinese, and successfully validate the compatibility of our bias measure across multiple languages. Lastly, we observe the bias tendencies in PLMs of different languages. To our knowledge, we are the first to construct evaluation datasets and measure PLMs’ bias from NLI in Japanese and Chinese.
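As a rough illustration of the "all-labels" idea described in the abstract, the Python sketch below shows how a per-group bias score might be derived when each evaluation data group is tied to one label that an unbiased model is expected to predict. The group names, expected labels, and the simple averaging are assumptions made for illustration only; this is not the paper's actual NLI-CoAL measure or dataset.

```python
# Hypothetical sketch of an "all-labels" NLI bias measure, NOT the paper's
# exact NLI-CoAL formulation. Assumption: each evaluation data group is
# associated with one label that an unbiased model should output.

from collections import Counter

def group_bias_score(predictions, expected_label):
    """Fraction of predictions in one data group that deviate from the
    label an unbiased model would be expected to produce."""
    counts = Counter(predictions)
    total = sum(counts.values())
    return 1.0 - counts[expected_label] / total if total else 0.0

# Illustrative (made-up) data groups, model predictions, and expected labels.
groups = {
    "group_A": (["neutral", "entailment", "neutral"], "neutral"),
    "group_B": (["contradiction", "neutral", "neutral"], "neutral"),
    "group_C": (["entailment", "entailment", "neutral"], "entailment"),
}

scores = {name: group_bias_score(preds, label)
          for name, (preds, label) in groups.items()}
overall = sum(scores.values()) / len(scores)  # simple average across groups
print(scores, overall)
```

The point of the sketch is only that each data group is scored against its own expected label, so biased errors (wrong label in a bias-revealing direction) are not conflated with generic, non-biased errors; the paper defines the actual measure and meta-evaluation.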
Anthology ID:
2024.lrec-main.566
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Véronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Note:
Pages:
6395–6408
URL:
https://aclanthology.org/2024.lrec-main.566
Cite (ACL):
Panatchakorn Anantaprayoon, Masahiro Kaneko, and Naoaki Okazaki. 2024. Evaluating Gender Bias of Pre-trained Language Models in Natural Language Inference by Considering All Labels. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 6395–6408, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Evaluating Gender Bias of Pre-trained Language Models in Natural Language Inference by Considering All Labels (Anantaprayoon et al., LREC-COLING 2024)
PDF:
https://preview.aclanthology.org/proper-vol2-ingestion/2024.lrec-main.566.pdf