Structure-to-Text Generation with Self-Training, Acceptability Classifiers and Context-Conditioning for the GEM Shared Task
Shreyan Bakshi, Soumya Batra, Peyman Heidari, Ankit Arun, Shashank Jain, Michael White
Abstract
We explore the use of self-training and acceptability classifiers with pre-trained models for natural language generation in structure-to-text settings using three GEM datasets (E2E, WebNLG-en, Schema-Guided Dialog). With the Schema-Guided Dialog dataset, we also experiment with including multiple turns of context in the input. We find that self-training with reconstruction matching along with acceptability classifier filtering can improve semantic correctness, though gains are limited in the full-data setting. With context-conditioning, we find that including multiple turns in the context encourages the model to align with the user’s word and phrasing choices as well as to generate more self-consistent responses. In future versions of the GEM challenge, we encourage the inclusion of few-shot tracks to encourage research on data efficiency.
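For readers unfamiliar with the approach summarized above, the sketch below illustrates one way classifier-filtered self-training with reconstruction matching could be organized. It is a minimal, hypothetical Python outline, not the paper's code; the callables `generate`, `reconstruct_mr`, and `acceptability_score` are assumed placeholders for a fine-tuned generator, an MR-reconstruction model, and an acceptability classifier.

```python
# Illustrative sketch only -- not the authors' implementation.
# generate, reconstruct_mr, and acceptability_score are hypothetical
# stand-ins for models described only at a high level in the abstract.

def self_train_round(unlabeled_mrs, generate, reconstruct_mr,
                     acceptability_score, threshold=0.5):
    """Build extra (MR, text) training pairs via filtered self-training.

    A candidate is kept only if (a) parsing the generated text back into a
    meaning representation reproduces the input MR (reconstruction matching),
    and (b) an acceptability classifier scores the text above a threshold.
    """
    new_pairs = []
    for mr in unlabeled_mrs:
        text = generate(mr)                        # pseudo-label the input MR
        if reconstruct_mr(text) != mr:             # reconstruction mismatch: drop
            continue
        if acceptability_score(text) < threshold:  # low acceptability: drop
            continue
        new_pairs.append((mr, text))
    return new_pairs  # merged with gold data before the next fine-tuning round
```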
- Anthology ID:
- 2021.gem-1.12
- Volume:
- Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)
- Month:
- August
- Year:
- 2021
- Address:
- Online
- Editors:
- Antoine Bosselut, Esin Durmus, Varun Prashant Gangal, Sebastian Gehrmann, Yacine Jernite, Laura Perez-Beltrachini, Samira Shaikh, Wei Xu
- Venue:
- GEM
- SIG:
- SIGGEN
- Publisher:
- Association for Computational Linguistics
- Pages:
- 136–147
- URL:
- https://aclanthology.org/2021.gem-1.12
- DOI:
- 10.18653/v1/2021.gem-1.12
- Cite (ACL):
- Shreyan Bakshi, Soumya Batra, Peyman Heidari, Ankit Arun, Shashank Jain, and Michael White. 2021. Structure-to-Text Generation with Self-Training, Acceptability Classifiers and Context-Conditioning for the GEM Shared Task. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 136–147, Online. Association for Computational Linguistics.
- Cite (Informal):
- Structure-to-Text Generation with Self-Training, Acceptability Classifiers and Context-Conditioning for the GEM Shared Task (Bakshi et al., GEM 2021)
- PDF:
- https://aclanthology.org/2021.gem-1.12.pdf
- Data
- GEM, SGD, WebNLG