@inproceedings{ji-wein-2025-gpt4amr,
    title = "{GPT}4{AMR}: Does {LLM}-based Paraphrasing Improve {AMR}-to-text Generation Fluency?",
    author = "Ji, Jiyuan  and
      Wein, Shira",
    editor = "Zhang, Chen  and
      Allaway, Emily  and
      Shen, Hua  and
      Miculicich, Lesly  and
      Li, Yinqiao  and
      M'hamdi, Meryem  and
      Limkonchotiwat, Peerat  and
      Bai, Richard He  and
      T.y.s.s., Santosh  and
      Han, Sophia Simeng  and
      Thapa, Surendrabikram  and
      Rim, Wiem Ben",
    booktitle = "Proceedings of the 9th Widening NLP Workshop",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.winlp-main.2/",
    pages = "9--18",
    ISBN = "979-8-89176-351-7",
    abstract = "Abstract Meaning Representation (AMR) is a graph-based semantic representation that has been incorporated into numerous downstream tasks, in particular due to substantial efforts developing text-to-AMR parsing and AMR-to-text generation models. However, there still exists a large gap between fluent, natural sentences and texts generated from AMR-to-text generation models. Prompt-based Large Language Models (LLMs), on the other hand, have demonstrated an outstanding ability to produce fluent text in a variety of languages and domains. In this paper, we investigate the extent to which LLMs can improve the AMR-to-text generated output fluency post-hoc via prompt engineering. We conduct automatic and human evaluations of the results, and ultimately have mixed findings: LLM-generated paraphrases generally do not exhibit improvement in automatic evaluation, but outperform baseline texts according to our human evaluation. Thus, we provide a detailed error analysis of our results to investigate the complex nature of generating highly fluent text from semantic representations."
}