@inproceedings{t-y-s-s-chowdhury-2025-fairness,
    title = "Fairness Beyond Performance: Revealing Reliability Disparities Across Groups in Legal {NLP}",
    author = "T.y.s.s, Santosh  and
      Chowdhury, Irtiza",
    editor = "Che, Wanxiang  and
      Nabende, Joyce  and
      Shutova, Ekaterina  and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.acl-long.1188/",
    doi = "10.18653/v1/2025.acl-long.1188",
    pages = "24376--24390",
    ISBN = "979-8-89176-251-0",
    abstract = "Fairness in NLP must extend beyond performance parity to encompass equitable reliability across groups. This study exposes a critical blind spot: models often make less reliable or overconfident predictions for marginalized groups, even when overall performance appears fair. Using the FairLex benchmark as a case study in legal NLP, we systematically evaluate both performance and reliability disparities across demographic, regional, and legal attributes spanning four jurisdictions. We show that domain-specific pre-training consistently improves both performance and reliability, especially for underrepresented groups. However, common bias mitigation methods frequently worsen reliability disparities, revealing a trade-off not captured by performance metrics alone. Our results call for a rethinking of fairness in high-stakes NLP: To ensure equitable treatment, models must not only be accurate, but also reliably self-aware across all groups."
}