MOCHA: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective

Zhe Hu, Hou Pong Chan, Lifu Huang


Abstract
Teaching neural models to generate coherent narrative texts is a critical problem. Recent pre-trained language models have achieved promising results, but a gap remains between human-written texts and machine-generated outputs. In this work, we propose a novel multi-task training strategy for long text generation grounded in the cognitive theory of writing, which empowers the model to learn the essential subskills of writing, including planning and reviewing, in addition to end-to-end generation. We extensively evaluate our model on three open-ended generation tasks: story generation, news article writing, and argument generation. Experiments show that our model outperforms strong baselines in both few-shot and fully-supervised settings, and human evaluations confirm that our model generates more coherent outputs.
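The abstract gives no implementation details, but a rough sketch of what multi-task training over writing subskills can look like is shown below; it is only an illustration under assumptions, not the paper's actual method. The task prefixes ("plan:", "review:", "write:"), the BART backbone, and the toy examples are all hypothetical choices made here for the sketch.

# Hypothetical sketch: jointly fine-tuning one seq2seq model on planning,
# reviewing, and end-to-end writing by tagging inputs with task prefixes.
# Task names, prefixes, backbone, and data are illustrative assumptions only.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One (source, target) pair per subskill; a real setup would mix many such
# examples so the shared model learns all three formulations together.
batch = [
    ("plan: a story about a lost dog", "1) dog escapes 2) search 3) reunion"),
    ("review: the dog found home the end", "The dog finally found its way home."),
    ("write: 1) dog escapes 2) search 3) reunion", "One morning, the dog slipped out of the yard."),
]

model.train()
optimizer.zero_grad()
for source, target in batch:
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()  # accumulate gradients across the three subskills
optimizer.step()

In practice, the paper defines its own training objectives and data construction; the sketch only illustrates the general pattern of mixing several input-output formulations in a single fine-tuning loop.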
Anthology ID: 2022.emnlp-main.705
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 10324–10334
URL: https://aclanthology.org/2022.emnlp-main.705
DOI: 10.18653/v1/2022.emnlp-main.705
Cite (ACL): Zhe Hu, Hou Pong Chan, and Lifu Huang. 2022. MOCHA: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10324–10334, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): MOCHA: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective (Hu et al., EMNLP 2022)
PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/2022.emnlp-main.705.pdf