Beyond Bias Scores: Unmasking Vacuous Neutrality in Small Language Models

Sumanth Manduru, Carlotta Domeniconi


Abstract
The rapid adoption of Small Language Models (SLMs) for resource-constrained applications has outpaced our understanding of their ethical and fairness implications. To address this gap, we introduce the Vacuous Neutrality Framework (VaNeu), a multi-dimensional evaluation paradigm designed to assess SLM fairness prior to deployment. The framework examines model robustness across four stages: bias, utility, ambiguity handling, and positional bias, over diverse social bias categories. To the best of our knowledge, this work presents the first large-scale audit of SLMs in the 0.5–5B parameter range, an overlooked “middle tier” between BERT-class encoders and flagship LLMs. We evaluate nine widely used SLMs spanning four model families under both ambiguous and disambiguated contexts. Our findings show that models demonstrating low bias in early stages often fail subsequent evaluations, revealing hidden vulnerabilities and unreliable reasoning. These results underscore the need for a more comprehensive understanding of fairness and reliability in SLMs, and position the proposed framework as a principled tool for responsible deployment in socially sensitive settings. The code is available at: https://github.com/smanduru10/Vacuous-Neutrality-Framework.git.
Anthology ID:
2026.eacl-srw.52
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Selene Baez Santamaria, Sai Ashish Somayajula, Atsuki Yamaguchi
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
685–714
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-srw.52/
Cite (ACL):
Sumanth Manduru and Carlotta Domeniconi. 2026. Beyond Bias Scores: Unmasking Vacuous Neutrality in Small Language Models. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 4: Student Research Workshop), pages 685–714, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Beyond Bias Scores: Unmasking Vacuous Neutrality in Small Language Models (Manduru & Domeniconi, EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-srw.52.pdf