Mod-D2T: A Multi-layer Dataset for Modular Data-to-Text Generation

Simon Mille, François Lareau, Stamatia Dasiopoulou, Anya Belz


Abstract
Rule-based text generators lack the coverage and fluency of their neural counterparts, but have two big advantages over them: (i) they are entirely controllable and do not hallucinate; and (ii) they can fully explain how an output was generated from an input. In this paper we leverage these two advantages to create large and reliable synthetic datasets with multiple human-intelligible intermediate representations. We present the Modular Data-to-Text (Mod-D2T) Dataset, which incorporates ten intermediate-level representations between input triple sets and output text; the mappings from one level to the next can broadly be interpreted as the traditional modular tasks of an NLG pipeline. We describe the Mod-D2T dataset, evaluate its quality via manual validation, and discuss its applications and limitations. Data, code and documentation are available at https://github.com/mille-s/Mod-D2T.
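To make the layered structure described in the abstract concrete, below is a minimal Python sketch of how such a multi-layer instance could be modelled and split into per-module training pairs: input triples, an ordered list of intermediate representations, and the final text, with consecutive layers forming the (source, target) pairs of the individual pipeline modules. The class, field and helper names here are illustrative assumptions, not the released Mod-D2T schema; consult the repository documentation for the actual format.

```python
# Hypothetical sketch, NOT the released Mod-D2T format: one simple way to model
# a multi-layer data-to-text instance and derive per-module (source, target) pairs.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ModD2TInstance:
    """A single example: input triples, ordered intermediate layers, final text."""
    triples: List[Tuple[str, str, str]]  # (subject, predicate, object) input triples
    layers: List[str]                    # serialized intermediate representations, in pipeline order
    text: str                            # final surface text


def module_pairs(instance: ModD2TInstance) -> List[Tuple[str, str]]:
    """Pair consecutive pipeline stages: triples -> layer_1 -> ... -> layer_n -> text."""
    stages = [repr(instance.triples)] + instance.layers + [instance.text]
    return list(zip(stages[:-1], stages[1:]))


if __name__ == "__main__":
    # Illustrative toy example only.
    example = ModD2TInstance(
        triples=[("Prague", "country", "Czechia")],
        layers=["<predicate-argument structure>", "<deep syntactic tree>", "<surface syntactic tree>"],
        text="Prague is a city in Czechia.",
    )
    for src, tgt in module_pairs(example):
        print(src, "->", tgt)
```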
Anthology ID: 2023.inlg-main.36
Volume: Proceedings of the 16th International Natural Language Generation Conference
Month: September
Year: 2023
Address: Prague, Czechia
Editors: C. Maria Keet, Hung-Yi Lee, Sina Zarrieß
Venues: INLG | SIGDIAL
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 455–466
URL: https://aclanthology.org/2023.inlg-main.36
DOI: 10.18653/v1/2023.inlg-main.36
Cite (ACL): Simon Mille, François Lareau, Stamatia Dasiopoulou, and Anya Belz. 2023. Mod-D2T: A Multi-layer Dataset for Modular Data-to-Text Generation. In Proceedings of the 16th International Natural Language Generation Conference, pages 455–466, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal): Mod-D2T: A Multi-layer Dataset for Modular Data-to-Text Generation (Mille et al., INLG-SIGDIAL 2023)
PDF: https://preview.aclanthology.org/dois-2013-emnlp/2023.inlg-main.36.pdf