Jean Garcia-Gathright
2025
Taxonomizing Representational Harms using Speech Act Theory
Emily Corvi | Hannah Washington | Stefanie Reed | Chad Atalla | Alexandra Chouldechova | P. Alex Dow | Jean Garcia-Gathright | Nicholas J. Pangakis | Emily Sheng | Dan Vann | Matthew Vogel | Hanna Wallach
Findings of the Association for Computational Linguistics: ACL 2025
Representational harms are widely recognized among fairness-related harms caused by generative language systems. However, their definitions are commonly under-specified. We make a theoretical contribution to the specification of representational harms by introducing a framework, grounded in speech act theory (Austin, 1962), that conceptualizes representational harms caused by generative language systems as the perlocutionary effects (i.e., real-world impacts) of particular types of illocutionary acts (i.e., system behaviors). Building on this argument and drawing on relevant literature from linguistic anthropology and sociolinguistics, we provide new definitions of stereotyping, demeaning, and erasure. We then use our framework to develop a granular taxonomy of illocutionary acts that cause representational harms, going beyond the high-level taxonomies presented in previous work. We also discuss the ways that our framework and taxonomy can support the development of valid measurement instruments. Finally, we demonstrate the utility of our framework and taxonomy via a case study that engages with recent conceptual debates about what constitutes a representational harm and how such harms should be measured.
Understanding and Meeting Practitioner Needs When Measuring Representational Harms Caused by LLM-Based Systems
Emma Harvey | Emily Sheng | Su Lin Blodgett | Alexandra Chouldechova | Jean Garcia-Gathright | Alexandra Olteanu | Hanna Wallach
Findings of the Association for Computational Linguistics: ACL 2025
The NLP research community has made publicly available numerous instruments for measuring representational harms caused by large language model (LLM)-based systems. These instruments have taken the form of datasets, metrics, tools, and more. In this paper, we examine the extent to which such instruments meet the needs of practitioners tasked with evaluating LLM-based systems. Via semi-structured interviews with 12 such practitioners, we find that practitioners are often unable to use publicly available instruments for measuring representational harms. We identify two types of challenges. In some cases, instruments are not useful because they do not meaningfully measure what practitioners seek to measure or are otherwise misaligned with practitioner needs. In other cases, instruments, even useful instruments, are not used by practitioners due to practical and institutional barriers impeding their uptake. Drawing on measurement theory and pragmatic measurement, we provide recommendations for addressing these challenges to better meet practitioner needs.
Co-authors
- Alexandra Chouldechova (2)
- Emily Sheng (2)
- Hanna Wallach (2)
- Chad Atalla (1)
- Su Lin Blodgett (1)