Metric assessment protocol in the context of answer fluctuation on MCQ tasks
Ekaterina Goliakova, Xavier Renard, Marie-Jeanne Lesot, Thibault Laugel, Christophe Marsala, Marcin Detyniecki
Abstract
Using multiple-choice questions (MCQs) has become a standard for assessing LLM capabilities efficiently. A variety of metrics can be employed for this task; however, previous research has not thoroughly assessed them. At the same time, MCQ evaluation suffers from answer fluctuation: models produce different results given slight changes in prompts. We suggest a metric assessment protocol in which evaluation methodologies are analyzed through their connection with fluctuation rates, as well as with original performance. Our results show a strong link between existing metrics and answer fluctuation, even when the metrics are computed without any additional prompt variants. The highest association under the protocol is demonstrated by a novel metric, worst accuracy.
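As a rough illustration of the quantities the abstract refers to, the sketch below shows one plausible way to compute a fluctuation rate and a worst-accuracy score over answers collected under several prompt variants. The paper's exact definitions may differ; the function names, the `answers_per_variant` layout, and the toy data are assumptions made for illustration only.

```python
# Hedged sketch, not the paper's implementation: one plausible reading of
# "fluctuation rate" and "worst accuracy" for MCQ evaluation.

def fluctuation_rate(answers_per_variant):
    """Fraction of questions whose predicted answer changes across prompt variants.

    answers_per_variant[v][q] is the model's answer to question q
    under prompt variant v (assumed layout).
    """
    n_questions = len(answers_per_variant[0])
    fluctuating = sum(
        len({variant[q] for variant in answers_per_variant}) > 1
        for q in range(n_questions)
    )
    return fluctuating / n_questions

def worst_accuracy(answers_per_variant, gold):
    """A question counts as correct only if every prompt variant answers it correctly."""
    n_questions = len(gold)
    correct_everywhere = sum(
        all(variant[q] == gold[q] for variant in answers_per_variant)
        for q in range(n_questions)
    )
    return correct_everywhere / n_questions

# Toy usage: two prompt variants over three MCQ items.
variants = [["A", "B", "C"], ["A", "D", "C"]]
gold = ["A", "B", "C"]
print(fluctuation_rate(variants))       # 1/3: only the second question fluctuates
print(worst_accuracy(variants, gold))   # 2/3: questions 1 and 3 are correct under all variants
```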
- Anthology ID:
- 2025.gem-1.26
- Volume:
- Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²)
- Month:
- July
- Year:
- 2025
- Address:
- Vienna, Austria and virtual meeting
- Editors:
- Kaustubh Dhole, Miruna Clinciu
- Venues:
- GEM | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 302–319
- URL:
- https://preview.aclanthology.org/corrections-2025-08/2025.gem-1.26/
- Cite (ACL):
- Ekaterina Goliakova, Xavier Renard, Marie-Jeanne Lesot, Thibault Laugel, Christophe Marsala, and Marcin Detyniecki. 2025. Metric assessment protocol in the context of answer fluctuation on MCQ tasks. In Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²), pages 302–319, Vienna, Austria and virtual meeting. Association for Computational Linguistics.
- Cite (Informal):
- Metric assessment protocol in the context of answer fluctuation on MCQ tasks (Goliakova et al., GEM 2025)
- PDF:
- https://preview.aclanthology.org/corrections-2025-08/2025.gem-1.26.pdf