Emotion-aware text simplification of user generated content using LLMs

Anastasiia Bezobrazova, Daria Sokova, Constantin Orasan


Abstract
Digital inclusion increasingly enables adults with intellectual disabilities (ID) to participate online, yet social media posts can be difficult to understand, particularly when they contain strong emotions, slang, or non-standard writing. This paper investigates whether large language models (LLMs) can simplify social media texts to improve cognitive accessibility while preserving emotional meaning. Using an accessibility-oriented prompt based on existing guidance, posts are simplified and emotion preservation is assessed. The results suggest that many simplified posts retain the same emotions, though changes occur, especially when emotions are weakly expressed or ambiguous. Qualitative analysis shows that simplification improves fluency and structure but can also shift perceived emotion through changes to tone, formatting, and other affective cues common in social media text. The study also reveals that different LLMs produce markedly different outputs.
Anthology ID:
2026.wassa-1.10
Volume:
Proceedings of the 15th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis (WASSA 2026)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Jeremy Barnes, Valentin Barriere, Orphée De Clercq, Roman Klinger, Célia Nouri, Debora Nozza, Pranaydeep Singh
Venues:
WASSA | WS
Publisher:
Association for Computational Linguistics
Pages:
107–122
URL:
https://preview.aclanthology.org/ingest-eacl/2026.wassa-1.10/
Cite (ACL):
Anastasiia Bezobrazova, Daria Sokova, and Constantin Orasan. 2026. Emotion-aware text simplification of user generated content using LLMs. In Proceedings of the 15th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis (WASSA 2026), pages 107–122, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Emotion-aware text simplification of user generated content using LLMs (Bezobrazova et al., WASSA 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.wassa-1.10.pdf