FLEX: A Benchmark for Evaluating Robustness of Fairness in Large Language Models
Dahyun Jung, Seungyoon Lee, Hyeonseok Moon, Chanjun Park, Heuiseok Lim
Abstract
Recent advancements in Large Language Models (LLMs) have significantly enhanced interactions between users and models. These advancements concurrently underscore the need for rigorous safety evaluations due to the manifestation of social biases, which can lead to harmful societal impacts. Despite these concerns, existing benchmarks may overlook the intrinsic weaknesses of LLMs, which can generate biased responses even with simple adversarial instructions. To address this critical gap, we introduce a new benchmark, Fairness Benchmark in LLM under Extreme Scenarios (FLEX), designed to test whether LLMs can sustain fairness even when exposed to prompts constructed to induce bias. To thoroughly evaluate the robustness of LLMs, we integrate prompts that amplify potential biases into the fairness assessment. Comparative experiments between FLEX and existing benchmarks demonstrate that traditional evaluations may underestimate the inherent risks in models. This highlights the need for more stringent LLM evaluation benchmarks to guarantee safety and fairness.- Anthology ID:
- 2025.findings-naacl.199
- Volume:
- Findings of the Association for Computational Linguistics: NAACL 2025
- Month:
- April
- Year:
- 2025
- Address:
- Albuquerque, New Mexico
- Editors:
- Luis Chiruzzo, Alan Ritter, Lu Wang
- Venue:
- Findings
- SIG:
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 3606–3620
- Language:
- URL:
- https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.199/
- DOI:
- Cite (ACL):
- Dahyun Jung, Seungyoon Lee, Hyeonseok Moon, Chanjun Park, and Heuiseok Lim. 2025. FLEX: A Benchmark for Evaluating Robustness of Fairness in Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3606–3620, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal):
- FLEX: A Benchmark for Evaluating Robustness of Fairness in Large Language Models (Jung et al., Findings 2025)
- PDF:
- https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.199.pdf