A Novel Metric for Measuring the Robustness of Large Language Models in Non-adversarial Scenarios
Samuel Ackerman, Ella Rabinovich, Eitan Farchi, Ateret Anaby Tavor
Abstract
We evaluate the robustness of several large language models on multiple datasets. Robustness here refers to the relative insensitivity of a model's answers to meaning-preserving variants of its input. Benchmark datasets are constructed by introducing naturally occurring, non-malicious perturbations, or by generating semantically equivalent paraphrases of input questions or statements. We further propose a novel metric for assessing a model's robustness, and demonstrate its benefits in the non-adversarial scenario through an empirical evaluation of several models on the created datasets.
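As a rough illustration of this notion of robustness (not the paper's proposed metric, which is defined in the full text), the Python sketch below scores a model by how consistently it answers across groups of meaning-preserving paraphrases; the function names and toy data are hypothetical.

```python
# Hypothetical sketch, not the paper's metric: score robustness as answer
# agreement across meaning-preserving paraphrases of the same question.
from collections import Counter

def consistency_rate(answers):
    """Fraction of answers matching the majority answer within one
    paraphrase group; 1.0 means the model is insensitive to rewording."""
    majority_count = Counter(answers).most_common(1)[0][1]
    return majority_count / len(answers)

def mean_robustness(paraphrase_groups):
    """Average per-group consistency over a benchmark of paraphrase groups."""
    rates = [consistency_rate(group) for group in paraphrase_groups]
    return sum(rates) / len(rates)

# Toy data: each inner list holds one model's answers to several
# semantically equivalent phrasings of the same question.
groups = [
    ["Paris", "Paris", "Lyon"],  # one inconsistent answer
    ["4", "4", "4"],             # fully consistent
]
print(mean_robustness(groups))   # 0.8333... (higher = more robust)
```

A majority-agreement rate is only one of many plausible consistency measures; it is used here purely to make the insensitivity idea concrete.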
- Anthology ID: 2024.findings-emnlp.158
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 2794–2802
- URL: https://aclanthology.org/2024.findings-emnlp.158/
- DOI: 10.18653/v1/2024.findings-emnlp.158
- Cite (ACL): Samuel Ackerman, Ella Rabinovich, Eitan Farchi, and Ateret Anaby Tavor. 2024. A Novel Metric for Measuring the Robustness of Large Language Models in Non-adversarial Scenarios. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 2794–2802, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): A Novel Metric for Measuring the Robustness of Large Language Models in Non-adversarial Scenarios (Ackerman et al., Findings 2024)
- PDF: https://aclanthology.org/2024.findings-emnlp.158.pdf