Exploring Paraphrasing Strategies for CEFR A1-Level Constraints in LLMs

Eugenio Marzona, Maria Goikhman, Alessio Palmero Aprosio, Massimo Zancanaro


Abstract
Large language models are increasingly used for teaching and self-learning foreign languages. However, their ability to meet specific linguistic constraints is still underexplored. This study compares the effectiveness of prompt engineering in guiding ChatGPT (4o and 4o-mini) and Llama 3 to rephrase general-domain texts so that they meet CEFR A1-level constraints in English and Italian, making them suitable for beginner learners. It compares four prompt engineering approaches, all built upon an iterative paraphrasing method that gradually refines the original texts toward CEFR compliance. The approaches include paraphrasing with or without Chain-of-Thought (CoT), as well as grammar and vocabulary simplification performed either simultaneously or as separate steps. The findings suggest that for English the best approach combines CoT with separate grammar and vocabulary simplification, while for Italian one-step strategies have a better effect on grammar and two-step strategies work better for vocabulary coverage. The paraphrasing approach can improve compliance, although at this point it is not cost-effective. We release a dataset of pairs of original sentences and their beginner-level paraphrases (in both Italian and English) on which further work can be built.
Anthology ID:
2025.findings-emnlp.828
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15305–15318
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.828/
DOI:
10.18653/v1/2025.findings-emnlp.828
Cite (ACL):
Eugenio Marzona, Maria Goikhman, Alessio Palmero Aprosio, and Massimo Zancanaro. 2025. Exploring Paraphrasing Strategies for CEFR A1-Level Constraints in LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 15305–15318, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Exploring Paraphrasing Strategies for CEFR A1-Level Constraints in LLMs (Marzona et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.828.pdf
Checklist:
2025.findings-emnlp.828.checklist.pdf