Irtiza Chowdhury
2025
Fairness Beyond Performance: Revealing Reliability Disparities Across Groups in Legal NLP
Santosh T.y.s.s | Irtiza Chowdhury
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Fairness in NLP must extend beyond performance parity to encompass equitable reliability across groups. This study exposes a critical blind spot: models often make less reliable or overconfident predictions for marginalized groups, even when overall performance appears fair. Using the FairLex benchmark as a case study in legal NLP, we systematically evaluate both performance and reliability disparities across demographic, regional, and legal attributes spanning four jurisdictions. We show that domain-specific pre-training consistently improves both performance and reliability, especially for underrepresented groups. However, common bias mitigation methods frequently worsen reliability disparities, revealing a trade-off not captured by performance metrics alone. Our results call for a rethinking of fairness in high-stakes NLP: to ensure equitable treatment, models must not only be accurate, but also reliably self-aware across all groups.
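To make the reliability-disparity notion concrete, here is a minimal sketch of per-group calibration measurement, assuming per-example confidences, correctness indicators, and group labels are available as NumPy arrays; the function names and the max-minus-min disparity summary are illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: weighted average of |accuracy - confidence| over bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += (mask.sum() / len(confidences)) * gap
    return ece

def per_group_reliability(confidences, correct, groups):
    """ECE computed separately per group; the max-min gap summarizes disparity."""
    eces = {g: expected_calibration_error(confidences[groups == g],
                                          correct[groups == g])
            for g in np.unique(groups)}
    return eces, max(eces.values()) - min(eces.values())
```

Under this kind of evaluation, a model can show near-identical accuracy across groups while one group's calibration error is several times larger, which is exactly the disparity that performance metrics alone miss.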
2024
The Craft of Selective Prediction: Towards Reliable Case Outcome Classification - An Empirical Study on European Court of Human Rights Cases
Santosh T.y.s.s | Irtiza Chowdhury | Shanshan Xu | Matthias Grabmair
Findings of the Association for Computational Linguistics: EMNLP 2024
In high-stakes decision-making tasks within legal NLP, such as Case Outcome Classification (COC), quantifying a model’s predictive confidence is crucial. Confidence estimation enables humans to make more informed decisions, particularly when the model’s certainty is low, or where the consequences of a mistake are significant. However, most existing COC works prioritize high task performance over model reliability. This paper conducts an empirical investigation into how various design choices—including pre-training corpus, confidence estimator, and fine-tuning loss—affect the reliability of COC models within the framework of selective prediction. Our experiments on the multi-label COC task, focusing on European Court of Human Rights (ECtHR) cases, highlight the importance of a diverse yet domain-specific pre-training corpus for better calibration. Additionally, we demonstrate that larger models tend to exhibit overconfidence, Monte Carlo dropout methods produce reliable confidence estimates, and confident error regularization effectively mitigates overconfidence. To our knowledge, this is the first systematic exploration of selective prediction in legal NLP. Our findings underscore the need for further research on enhancing confidence measurement and improving the trustworthiness of models in the legal domain.
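The selective-prediction setup the abstract studies can be sketched as below, assuming a HuggingFace-style multi-label classifier whose forward pass returns `.logits`; the distance-from-0.5 confidence score and the abstention threshold are illustrative assumptions, not the paper's exact estimator.

```python
import torch

@torch.no_grad()
def mc_dropout_confidence(model, inputs, n_samples=20):
    """Monte Carlo dropout: keep dropout active at inference and average
    sigmoid outputs over stochastic forward passes (multi-label setting)."""
    model.train()  # enables dropout; assumes the model has no batch-norm layers
    probs = torch.stack([torch.sigmoid(model(**inputs).logits)
                         for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)  # (batch, num_labels)
    # One confidence score per example: how far probabilities sit from 0.5
    confidence = (mean_probs - 0.5).abs().mean(dim=-1)
    return mean_probs, confidence

def selective_predict(mean_probs, confidence, threshold=0.4):
    """Abstain (defer to a human reviewer) below the confidence threshold."""
    predictions = (mean_probs >= 0.5).long()
    accept = confidence >= threshold  # rows with accept == False are deferred
    return predictions, accept
```

In practice the threshold would be tuned on held-out data against a target coverage/selective-risk trade-off, so that low-certainty cases are routed to human review rather than decided automatically.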