HIT-YOU at TSAR 2025 Shared Task Leveraging Similarity-Based Few-Shot Prompting, Round-Trip Translation, and Self-Refinement for Readability-Controlled Text Simplification

Mao Shimada, Kexin Bian, Zhidong Ling, Mamoru Komachi


Abstract
We describe our submission to the TSAR 2025 shared task on readability-controlled text simplification, which evaluates systems on their ability to adjust linguistic complexity to specified CEFR levels while preserving meaning and coherence. We explored two complementary frameworks that leverage the shared task's CEFR classifier as feedback. The first is an ensemble approach that generates diverse candidates with multiple LLMs using zero-shot prompting with level-specific instructions and vocabulary lists, one-shot prompting, and round-trip translation; candidates are filtered by predicted CEFR level before an LLM judge selects the final output. The second is a self-refinement loop in which a single candidate is iteratively revised with classifier feedback until it matches the target level or a maximum number of iterations is reached. This study is among the first to apply round-trip translation and iterative self-refinement to controlled simplification, broadening the toolkit for adapting linguistic complexity.
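To make the two frameworks concrete, the Python sketch below outlines both pipelines under assumed interfaces. The functions classify_cefr, simplify, and revise, as well as the generator and judge callables, are hypothetical placeholders standing in for the shared-task CEFR classifier and the LLM prompting calls; this is a minimal illustration of the described procedure, not the authors' released implementation.

# Hypothetical placeholders; not the shared-task classifier or the authors' code.
def classify_cefr(text: str) -> str:
    """Predict a CEFR level (e.g., 'A2', 'B1') for the given text."""
    raise NotImplementedError

def simplify(text: str, target_level: str) -> str:
    """Ask an LLM to rewrite the text at the target CEFR level."""
    raise NotImplementedError

def revise(text: str, predicted: str, target_level: str) -> str:
    """Ask an LLM to revise its output given the classifier's predicted level."""
    raise NotImplementedError

def self_refine(source: str, target_level: str, max_iters: int = 5) -> str:
    """Self-refinement loop: revise a single candidate until the classifier
    agrees with the target level or the iteration budget is exhausted."""
    candidate = simplify(source, target_level)
    for _ in range(max_iters):
        if classify_cefr(candidate) == target_level:
            break
        candidate = revise(candidate, classify_cefr(candidate), target_level)
    return candidate

def ensemble_select(source, target_level, generators, judge):
    """Ensemble pipeline: generate candidates with several prompting strategies
    (zero-shot, one-shot, round-trip translation), keep those predicted at the
    target level, and let an LLM judge pick the final output. Falling back to
    the full pool when nothing matches is an assumption, not stated in the abstract."""
    candidates = [generate(source, target_level) for generate in generators]
    filtered = [c for c in candidates if classify_cefr(c) == target_level]
    return judge(filtered or candidates, target_level)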
Anthology ID: 2025.tsar-1.20
Volume: Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025)
Month: November
Year: 2025
Address: Suzhou, China
Editors: Matthew Shardlow, Fernando Alva-Manchego, Kai North, Regina Stodden, Horacio Saggion, Nouran Khallaf, Akio Hayakawa
Venues: TSAR | WS
Publisher: Association for Computational Linguistics
Pages: 231–241
URL: https://preview.aclanthology.org/ingest-emnlp/2025.tsar-1.20/
Cite (ACL): Mao Shimada, Kexin Bian, Zhidong Ling, and Mamoru Komachi. 2025. HIT-YOU at TSAR 2025 Shared Task Leveraging Similarity-Based Few-Shot Prompting, Round-Trip Translation, and Self-Refinement for Readability-Controlled Text Simplification. In Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025), pages 231–241, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): HIT-YOU at TSAR 2025 Shared Task Leveraging Similarity-Based Few-Shot Prompting, Round-Trip Translation, and Self-Refinement for Readability-Controlled Text Simplification (Shimada et al., TSAR 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.tsar-1.20.pdf