This is not correct! Negation-aware Evaluation of Language Generation Systems

Miriam Anschütz, Diego Miguel Lozano, Georg Groh


Abstract
Large language models underestimate how much negations change the meaning of a sentence. As a consequence, learned evaluation metrics based on these models are insensitive to negations. In this paper, we propose NegBLEURT, a negation-aware version of the BLEURT evaluation metric. To this end, we designed a rule-based sentence negation tool and used it to create the CANNOT negation evaluation dataset. Based on this dataset, we fine-tuned a sentence transformer and an evaluation metric to improve their negation sensitivity. Evaluation on existing benchmarks shows that our fine-tuned models outperform existing metrics on negated sentences by a large margin while preserving their base models' performance on other perturbations.
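
The abstract mentions a rule-based sentence negation tool used to build the CANNOT dataset. The snippet below is only a toy illustration of that kind of rule (inserting "not" after the first auxiliary verb), not the authors' actual tool, which covers many more constructions:

```python
import re

# Auxiliary verbs after which a simple rule can insert "not".
AUXILIARIES = ("is", "are", "was", "were", "can", "could", "will",
               "would", "should", "may", "might", "must", "do", "does", "did")

def negate(sentence: str) -> str:
    """Toy rule: insert 'not' after the first auxiliary verb, if any."""
    for aux in AUXILIARIES:
        pattern = rf"\b({aux})\b"
        if re.search(pattern, sentence, flags=re.IGNORECASE):
            return re.sub(pattern, r"\1 not", sentence, count=1, flags=re.IGNORECASE)
    return sentence  # no rule applies; a real tool handles many more cases

print(negate("The results are statistically significant."))
# -> "The results are not statistically significant."
```

NegBLEURT itself is a BLEURT-style learned metric fine-tuned for negation sensitivity. The sketch below shows how such a metric could be queried through the Hugging Face transformers regression interface to compare a paraphrase and a negated candidate against the same reference; the checkpoint name "tum-nlp/NegBLEURT" is an assumption and may differ from the authors' released model:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "tum-nlp/NegBLEURT"  # assumed Hub identifier, not verified here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

reference = "The medication is safe for children."
candidates = [
    "The medication is safe for kids.",          # paraphrase
    "The medication is not safe for children.",  # negation flips the meaning
]

# BLEURT-style models score (reference, candidate) pairs with a single
# regression output; higher means the candidate preserves the reference's
# meaning better.
inputs = tokenizer([reference] * len(candidates), candidates,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

for cand, score in zip(candidates, scores.tolist()):
    print(f"{score:+.3f}  {cand}")
# A negation-aware metric should score the negated candidate markedly
# lower than the paraphrase.
```
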
Anthology ID:
2023.inlg-main.12
Volume:
Proceedings of the 16th International Natural Language Generation Conference
Month:
September
Year:
2023
Address:
Prague, Czechia
Editors:
C. Maria Keet, Hung-Yi Lee, Sina Zarrieß
Venues:
INLG | SIGDIAL
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
163–175
URL:
https://aclanthology.org/2023.inlg-main.12
DOI:
10.18653/v1/2023.inlg-main.12
Cite (ACL):
Miriam Anschütz, Diego Miguel Lozano, and Georg Groh. 2023. This is not correct! Negation-aware Evaluation of Language Generation Systems. In Proceedings of the 16th International Natural Language Generation Conference, pages 163–175, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal):
This is not correct! Negation-aware Evaluation of Language Generation Systems (Anschütz et al., INLG-SIGDIAL 2023)
PDF:
https://aclanthology.org/2023.inlg-main.12.pdf