OneNRC@TSAR2025 Shared Task: Small Models for Readability Controlled Text Simplification

Sowmya Vajjala


Abstract
In this system description paper, we describe team OneNRC's experiments on readability-controlled text simplification, focused on using smaller, quantized language models (<20B parameters). We compare these with one large proprietary model and show that the smaller models offer comparable or even better results in some experimental settings. The approach primarily consists of prompt optimization, an agentic workflow, and tool calling. The best results were achieved when using a CEFR proficiency classifier as a verification tool for the language model agent. In comparison with other systems, our submission, which used a quantized Gemma3:12B model running on a laptop, achieved a rank of 9.88 among the submitted systems as per the AUTORANK framework used by the organizers. We hope these results will lead to further exploration of the usefulness of smaller models for text simplification.
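The abstract describes a generate-and-verify setup in which a CEFR proficiency classifier checks the agent's output. The sketch below illustrates one plausible shape of such a verification loop; it is not the paper's released code, and the `generate` and `classify` callables are hypothetical placeholders for the language model agent and the CEFR classifier.

```python
# A minimal sketch (not the paper's code) of using a CEFR classifier as a
# verification tool for a simplification agent: generate a candidate, check
# its predicted CEFR level, and regenerate with feedback on a mismatch.

from typing import Callable

CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]


def simplify_with_verification(
    text: str,
    target_level: str,
    generate: Callable[[str, str, str], str],  # (source, target level, feedback) -> candidate
    classify: Callable[[str], str],            # candidate -> predicted CEFR level
    max_attempts: int = 3,
) -> str:
    """Generate a simplification, verify its CEFR level with the classifier,
    and retry with feedback until the level matches or attempts run out."""
    feedback = ""
    candidate = text
    for _ in range(max_attempts):
        candidate = generate(text, target_level, feedback)
        predicted = classify(candidate)
        if predicted == target_level:
            return candidate  # classifier confirms the target level
        # Tell the generator which direction to adjust on the next attempt.
        direction = (
            "simpler"
            if CEFR_LEVELS.index(predicted) > CEFR_LEVELS.index(target_level)
            else "more complex"
        )
        feedback = f"The previous attempt was rated {predicted}; make it {direction}."
    return candidate  # best effort after exhausting the attempt budget
```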
Anthology ID:
2025.tsar-1.9
Volume:
Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Matthew Shardlow, Fernando Alva-Manchego, Kai North, Regina Stodden, Horacio Saggion, Nouran Khallaf, Akio Hayakawa
Venues:
TSAR | WS
Publisher:
Association for Computational Linguistics
Pages:
131–136
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.tsar-1.9/
Cite (ACL):
Sowmya Vajjala. 2025. OneNRC@TSAR2025 Shared Task: Small Models for Readability Controlled Text Simplification. In Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025), pages 131–136, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
OneNRC@TSAR2025 Shared Task: Small Models for Readability Controlled Text Simplification (Vajjala, TSAR 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.tsar-1.9.pdf