Toward Human-Centered Readability Evaluation

Bahar İlgen, Georges Hattab


Abstract
Text simplification is essential for making public health information accessible to diverse populations, including those with limited health literacy. However, commonly used evaluation metrics in Natural Language Processing (NLP)—such as BLEU, FKGL, and SARI—mainly capture surface-level features and fail to account for human-centered qualities like clarity, trustworthiness, tone, cultural relevance, and actionability. This limitation is particularly critical in high-stakes health contexts, where communication must be not only simple but also usable, respectful, and trustworthy. To address this gap, we propose the Human-Centered Readability Score (HCRS), a five-dimensional evaluation framework grounded in Human-Computer Interaction (HCI) and health communication research. HCRS integrates automatic measures with structured human feedback to capture the relational and contextual aspects of readability. We outline the framework, discuss its integration into participatory evaluation workflows, and present a protocol for empirical validation. This work aims to advance the evaluation of health text simplification beyond surface metrics, enabling NLP systems that align more closely with diverse users’ needs, expectations, and lived experiences.
Anthology ID:
2025.hcinlp-1.22
Volume:
Proceedings of the Fourth Workshop on Bridging Human-Computer Interaction and Natural Language Processing (HCI+NLP)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Su Lin Blodgett, Amanda Cercas Curry, Sunipa Dev, Siyan Li, Michael Madaio, Jack Wang, Sherry Tongshuang Wu, Ziang Xiao, Diyi Yang
Venues:
HCINLP | WS
Publisher:
Association for Computational Linguistics
Pages:
263–273
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.hcinlp-1.22/
Cite (ACL):
Bahar İlgen and Georges Hattab. 2025. Toward Human-Centered Readability Evaluation. In Proceedings of the Fourth Workshop on Bridging Human-Computer Interaction and Natural Language Processing (HCI+NLP), pages 263–273, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Toward Human-Centered Readability Evaluation (İlgen & Hattab, HCINLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.hcinlp-1.22.pdf