Abstract
The success of Large Language Models (LLMs) in other domains has raised the question of whether LLMs can reliably assess and manipulate the readability of text. We approach this question empirically. First, using a published corpus of 4,724 English text excerpts, we find that readability estimates produced “zero-shot” from GPT-4 Turbo and GPT-4o mini exhibit relatively high correlations with human judgments (r = 0.76 and r = 0.74, respectively), outperforming estimates derived from traditional readability formulas and various psycholinguistic indices. Then, in a pre-registered human experiment (N = 59), we ask whether GPT-4 Turbo can reliably make text easier or harder to read. We find evidence to support this hypothesis, though considerable variance in human judgments remains unexplained. We conclude by discussing the limitations of this approach, including limited scope, as well as the validity of the “readability” construct and its dependence on context, audience, and goal.
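The abstract does not reproduce the paper’s prompt, rating scale, or analysis code. As a rough illustration of the zero-shot measurement setup it describes, the sketch below assumes a hypothetical 1–100 difficulty prompt to GPT-4 Turbo via the OpenAI chat API and compares the model’s ratings against a Flesch–Kincaid baseline using Pearson’s r; the function names and prompt wording are illustrative, not the authors’ materials.

```python
# Illustrative sketch only: zero-shot readability scoring with an LLM,
# compared against a traditional readability formula via correlation with
# human ratings. Prompt wording, scale, and model settings are assumptions,
# not the paper's actual materials.
from openai import OpenAI          # pip install openai
from scipy.stats import pearsonr   # pip install scipy
import textstat                    # pip install textstat

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rate_readability(text: str, model: str = "gpt-4-turbo") -> float:
    """Ask the model for a single zero-shot difficulty rating (1 = very easy, 100 = very hard)."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": "You rate how difficult English texts are to read."},
            {"role": "user", "content": (
                "On a scale from 1 (very easy) to 100 (very hard), how difficult "
                "to read is the following passage? Answer with a single number.\n\n" + text
            )},
        ],
    )
    return float(response.choices[0].message.content.strip())

def compare_with_humans(excerpts: list[str], human_ratings: list[float]) -> None:
    """Correlate LLM ratings and a Flesch-Kincaid baseline with human judgments."""
    llm_scores = [rate_readability(t) for t in excerpts]
    fk_scores = [textstat.flesch_kincaid_grade(t) for t in excerpts]
    r_llm, _ = pearsonr(llm_scores, human_ratings)
    r_fk, _ = pearsonr(fk_scores, human_ratings)
    print(f"LLM vs. human ratings: r = {r_llm:.2f}")
    print(f"Flesch-Kincaid vs. human ratings: r = {r_fk:.2f}")
```

In this setup, a higher Pearson’s r for the LLM ratings than for the formula-based baseline would correspond to the kind of comparison the abstract reports, though the paper’s exact prompt, scale, and set of baseline indices may differ.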
- Anthology ID: 2024.tsar-1.13
- Volume: Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Matthew Shardlow, Horacio Saggion, Fernando Alva-Manchego, Marcos Zampieri, Kai North, Sanja Štajner, Regina Stodden
- Venue: TSAR
- Publisher: Association for Computational Linguistics
- Pages: 126–134
- URL: https://aclanthology.org/2024.tsar-1.13
- DOI: 10.18653/v1/2024.tsar-1.13
- Cite (ACL): Sean Trott and Pamela Rivière. 2024. Measuring and Modifying the Readability of English Texts with GPT-4. In Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024), pages 126–134, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): Measuring and Modifying the Readability of English Texts with GPT-4 (Trott & Rivière, TSAR 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.tsar-1.13.pdf