GPT4AMR: Does LLM-based Paraphrasing Improve AMR-to-text Generation Fluency?

Jiyuan Ji, Shira Wein


Abstract
Abstract Meaning Representation (AMR) is a graph-based semantic representation that has been incorporated into numerous downstream tasks, in particular due to substantial efforts developing text-to-AMR parsing and AMR-to-text generation models. However, a large gap remains between fluent, natural sentences and the texts produced by AMR-to-text generation models. Prompt-based Large Language Models (LLMs), on the other hand, have demonstrated an outstanding ability to produce fluent text across a variety of languages and domains. In this paper, we investigate the extent to which LLMs can improve the fluency of AMR-to-text generation output post-hoc via prompt engineering. We conduct automatic and human evaluations of the results, and ultimately have mixed findings: LLM-generated paraphrases generally do not exhibit improvement under automatic evaluation, but outperform baseline texts according to our human evaluation. Thus, we provide a detailed error analysis of our results to investigate the complex nature of generating highly fluent text from semantic representations.
Anthology ID:
2025.winlp-main.2
Volume:
Proceedings of the 9th Widening NLP Workshop
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Chen Zhang, Emily Allaway, Hua Shen, Lesly Miculicich, Yinqiao Li, Meryem M'hamdi, Peerat Limkonchotiwat, Richard He Bai, Santosh T.y.s.s., Sophia Simeng Han, Surendrabikram Thapa, Wiem Ben Rim
Venues:
WiNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
9–18
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.winlp-main.2/
Cite (ACL):
Jiyuan Ji and Shira Wein. 2025. GPT4AMR: Does LLM-based Paraphrasing Improve AMR-to-text Generation Fluency?. In Proceedings of the 9th Widening NLP Workshop, pages 9–18, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
GPT4AMR: Does LLM-based Paraphrasing Improve AMR-to-text Generation Fluency? (Ji & Wein, WiNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.winlp-main.2.pdf