Abstract
We present an end-to-end neural approach to generate English sentences from formal meaning representations, Discourse Representation Structures (DRSs). We use a rather standard bi-LSTM sequence-to-sequence model, work with a linearized DRS input representation, and evaluate character-level and word-level decoders. We obtain very encouraging results in terms of reference-based automatic metrics such as BLEU. But because such metrics only evaluate the surface level of generated output, we develop a new metric, ROSE, that targets specific semantic phenomena. We do this with five DRS generation challenge sets focusing on tense, grammatical number, polarity, named entities and quantities. The aim of these challenge sets is to assess the neural generator’s systematicity and generalization to unseen inputs.
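To make the described architecture concrete, below is a minimal PyTorch sketch of a bi-LSTM encoder-decoder that maps a linearized DRS token sequence to an English sentence. The class name, hyperparameters, state merging, and toy usage are illustrative assumptions only, not the authors' implementation (see wangchunliu/drs-generation for that); depending on whether a character-level or word-level decoder is used, the target vocabulary would be the character set or the word types.

```python
# Minimal sketch (assumed, not the authors' code): bi-LSTM seq2seq from a
# linearized DRS to an English sentence.
import torch
import torch.nn as nn


class BiLSTMSeq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=300, hid_dim=300):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        # Bidirectional encoder over the linearized DRS tokens.
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        # Unidirectional decoder whose state size matches the concatenated
        # forward/backward encoder states (2 * hid_dim).
        self.decoder = nn.LSTM(emb_dim, 2 * hid_dim, batch_first=True)
        self.out = nn.Linear(2 * hid_dim, tgt_vocab)

    def forward(self, src, tgt_in):
        _, (h, c) = self.encoder(self.src_emb(src))
        # Merge the two encoder directions into one decoder initial state:
        # (2, B, hid) -> (1, B, 2 * hid).
        h = h.transpose(0, 1).reshape(1, src.size(0), -1)
        c = c.transpose(0, 1).reshape(1, src.size(0), -1)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), (h, c))
        return self.out(dec_out)  # logits over the target vocabulary


# Toy usage with made-up sizes (character-level decoder: tgt_vocab = charset size).
model = BiLSTMSeq2Seq(src_vocab=1000, tgt_vocab=120)
src = torch.randint(0, 1000, (4, 50))    # batch of 4 linearized DRSs, length 50
tgt_in = torch.randint(0, 120, (4, 60))  # shifted target characters
logits = model(src, tgt_in)              # shape (4, 60, 120)
```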
- Anthology ID: 2021.gem-1.8
- Volume: Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)
- Month: August
- Year: 2021
- Address: Online
- Editors: Antoine Bosselut, Esin Durmus, Varun Prashant Gangal, Sebastian Gehrmann, Yacine Jernite, Laura Perez-Beltrachini, Samira Shaikh, Wei Xu
- Venue: GEM
- SIG: SIGGEN
- Publisher: Association for Computational Linguistics
- Pages: 73–83
- URL: https://aclanthology.org/2021.gem-1.8
- DOI: 10.18653/v1/2021.gem-1.8
- Cite (ACL): Chunliu Wang, Rik van Noord, Arianna Bisazza, and Johan Bos. 2021. Evaluating Text Generation from Discourse Representation Structures. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 73–83, Online. Association for Computational Linguistics.
- Cite (Informal): Evaluating Text Generation from Discourse Representation Structures (Wang et al., GEM 2021)
- PDF: https://preview.aclanthology.org/improve-issue-templates/2021.gem-1.8.pdf
- Code: wangchunliu/drs-generation