Abstract
We propose neural models to generate text from formal meaning representations based on Discourse Representation Structures (DRSs). DRSs are document-level representations which encode rich semantic detail pertaining to rhetorical relations, presupposition, and co-reference within and across sentences. We formalize the task of neural DRS-to-text generation and provide modeling solutions for the problems of condition ordering and variable naming which render generation from DRSs non-trivial. Our generator relies on a novel sibling treeLSTM model which is able to accurately represent DRS structures and is more generally suited to trees with wide branches. We achieve competitive performance (59.48 BLEU) on the GMB benchmark against several strong baselines.
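The abstract names a sibling treeLSTM as the core encoder for DRS structures but does not spell out its mechanics on this page. The sketch below is a rough illustration only of how a TreeLSTM cell might be extended with a sibling-state input; the class name, gating scheme, and dimensions are assumptions made for illustration, not the authors' formulation (see the paper for the actual model).

```python
import torch
import torch.nn as nn


class SiblingTreeLSTMCell(nn.Module):
    """Illustrative TreeLSTM cell with an extra sibling-state input (assumption)."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Gates are computed from the node input, the summed child hidden
        # states, and the hidden state of the previously processed sibling.
        self.gates = nn.Linear(input_size + 2 * hidden_size, 4 * hidden_size)
        self.hidden_size = hidden_size

    def forward(self, x, child_h, child_c, sib_h):
        # x:       (input_size,)               embedding of the current node
        # child_h: (num_children, hidden_size) children hidden states (may be empty)
        # child_c: (num_children, hidden_size) children cell states (may be empty)
        # sib_h:   (hidden_size,)              hidden state of the preceding sibling
        h_sum = child_h.sum(dim=0)  # zeros when the node has no children
        i, o, u, f = self.gates(torch.cat([x, h_sum, sib_h])).chunk(4)
        i, o, f = torch.sigmoid(i), torch.sigmoid(o), torch.sigmoid(f)
        # A single forget gate shared across children keeps the sketch compact.
        c = i * torch.tanh(u) + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c


# Toy usage: a node with two children and one preceding sibling.
cell = SiblingTreeLSTMCell(input_size=8, hidden_size=16)
h, c = cell(torch.randn(8), torch.randn(2, 16), torch.randn(2, 16), torch.zeros(16))
```

Under this reading, the sibling link gives each node a direct path to its left neighbour in addition to the pooled child states, which is one plausible way to address the wide, flat branching of DRS conditions that the abstract says standard tree encoders handle poorly.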
- Anthology ID: 2021.naacl-main.35
- Volume: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
- Month: June
- Year: 2021
- Address: Online
- Editors: Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 397–415
- URL: https://aclanthology.org/2021.naacl-main.35
- DOI: 10.18653/v1/2021.naacl-main.35
- Cite (ACL): Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2021. Text Generation from Discourse Representation Structures. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 397–415, Online. Association for Computational Linguistics.
- Cite (Informal): Text Generation from Discourse Representation Structures (Liu et al., NAACL 2021)
- PDF: https://preview.aclanthology.org/naacl-24-ws-corrections/2021.naacl-main.35.pdf