OUNLP at TSAR 2025 Shared Task: Multi-Round Text Simplifier via Code Generation

Cuong Huynh, Jie Cao


Abstract
This paper describes our team OUNLP's system submission to the TSAR-2025 shared task on readability-controlled text simplification. Analyzing prompt-based text simplification methods, we found that simplification performance is closely tied to the gap between the source CEFR level and the target CEFR level. Motivated by this finding, we propose two multi-round simplification methods with simplification rules generated via GPT-4o: rule-based simplification (MRS-Rule) and joint rule-based LLM simplification (MRS-Joint). Our submitted systems ranked 7th out of 20 teams. Later improvements to MRS-Joint show that taking the LLM-simplified candidates as the starting point can further boost multi-round simplification performance.
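The multi-round strategy described in the abstract can be pictured as a loop that keeps simplifying until the text's estimated CEFR level reaches the target or a round budget runs out. The sketch below is purely illustrative: the function names, the length-based CEFR proxy, and the truncation-based simplifier are placeholder assumptions, not the paper's actual rule-based or GPT-4o components.

```python
# Hypothetical sketch of a multi-round simplification loop. The CEFR
# estimator and the single-round simplifier below are crude stand-ins
# for the paper's rule-based / LLM components.

CEFR_ORDER = ["A1", "A2", "B1", "B2", "C1", "C2"]

def estimate_cefr(text: str) -> str:
    """Placeholder CEFR estimator: word count as a rough difficulty proxy."""
    words = len(text.split())
    idx = min(len(CEFR_ORDER) - 1, words // 10)
    return CEFR_ORDER[idx]

def simplify_once(text: str) -> str:
    """Placeholder single-round simplifier: truncation stands in for a
    rule-based or LLM rewrite that shortens and simplifies the text."""
    words = text.split()
    return " ".join(words[: max(1, len(words) // 2)])

def multi_round_simplify(text: str, target: str, max_rounds: int = 5) -> str:
    """Re-simplify until the estimated level is at or below the target."""
    for _ in range(max_rounds):
        if CEFR_ORDER.index(estimate_cefr(text)) <= CEFR_ORDER.index(target):
            break
        text = simplify_once(text)
    return text
```

Under this toy proxy, a 40-word input estimated at C1 is halved twice before the loop stops at the A2 target, mirroring how a larger source-to-target CEFR gap demands more simplification rounds.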
Anthology ID:
2025.tsar-1.19
Volume:
Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Matthew Shardlow, Fernando Alva-Manchego, Kai North, Regina Stodden, Horacio Saggion, Nouran Khallaf, Akio Hayakawa
Venues:
TSAR | WS
Publisher:
Association for Computational Linguistics
Pages:
223–230
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.tsar-1.19/
Cite (ACL):
Cuong Huynh and Jie Cao. 2025. OUNLP at TSAR 2025 Shared Task: Multi-Round Text Simplifier via Code Generation. In Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025), pages 223–230, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
OUNLP at TSAR 2025 Shared Task: Multi-Round Text Simplifier via Code Generation (Huynh & Cao, TSAR 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.tsar-1.19.pdf