Multi-Task Learning for Coherence Modeling

Youmna Farag, Helen Yannakoudakis


Abstract
We address the task of assessing discourse coherence, an aspect of text quality that is essential for many NLP tasks, such as summarization and language assessment. We propose a hierarchical neural network trained in a multi-task fashion that learns to predict a document-level coherence score (at the network’s top layers) along with word-level grammatical roles (at the bottom layers), taking advantage of inductive transfer between the two tasks. We assess the extent to which our framework generalizes to different domains and prediction tasks, and demonstrate its effectiveness not only on standard binary coherence evaluation tasks, but also on real-world tasks involving the prediction of varying degrees of coherence, achieving a new state of the art.
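The architecture described in the abstract can be made concrete with a short sketch. The following is a minimal illustration, assuming a PyTorch implementation; it is not the authors' code (their repository is linked under Code below), and the layer sizes, attention pooling, scoring head, and auxiliary loss weight aux_weight are all illustrative assumptions. A shared word-level BiLSTM feeds both a per-word grammatical-role classifier (the bottom layers) and, via attention pooling and a sentence-level LSTM, a document-level coherence scorer (the top layers); the two losses are combined for joint training.

import torch
import torch.nn as nn

class MultiTaskCoherenceModel(nn.Module):
    def __init__(self, vocab_size, n_gr_labels, emb_dim=100, hid_dim=100,
                 aux_weight=0.5):
        super().__init__()
        self.aux_weight = aux_weight  # assumed weighting of the auxiliary task
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Bottom (word-level) layers: shared BiLSTM over each sentence.
        self.word_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True,
                                 bidirectional=True)
        # Auxiliary head: predict a grammatical role for every word.
        self.gr_head = nn.Linear(2 * hid_dim, n_gr_labels)
        # Attention pooling of word states into a sentence vector (assumed).
        self.attn = nn.Linear(2 * hid_dim, 1)
        # Top (document-level) layers: LSTM over sentence vectors.
        self.sent_lstm = nn.LSTM(2 * hid_dim, hid_dim, batch_first=True)
        # Main head: a single document-level coherence score in [0, 1].
        self.coherence_head = nn.Linear(hid_dim, 1)

    def forward(self, docs):
        # docs: (batch, n_sents, n_words) tensor of word ids, 0 = padding.
        b, s, w = docs.shape
        words = self.embed(docs.view(b * s, w))           # (b*s, w, emb)
        states, _ = self.word_lstm(words)                 # (b*s, w, 2h)
        gr_logits = self.gr_head(states)                  # word-level task
        weights = torch.softmax(self.attn(states), dim=1) # (b*s, w, 1)
        sent_vecs = (weights * states).sum(dim=1)         # (b*s, 2h)
        sent_seq = sent_vecs.view(b, s, -1)               # (b, s, 2h)
        _, (h_n, _) = self.sent_lstm(sent_seq)
        score = torch.sigmoid(self.coherence_head(h_n[-1])).squeeze(-1)
        return score, gr_logits.view(b, s, w, -1)

    def loss(self, score, gr_logits, gold_score, gold_gr):
        # Joint objective: coherence regression plus weighted GR tagging.
        main = nn.functional.mse_loss(score, gold_score)
        aux = nn.functional.cross_entropy(
            gr_logits.flatten(0, 2), gold_gr.flatten(), ignore_index=-1)
        return main + self.aux_weight * aux

A single backward pass through this joint loss updates the shared bottom layers from both objectives, which is the inductive transfer between the two tasks that the abstract refers to.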
Anthology ID:
P19-1060
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
629–639
URL:
https://aclanthology.org/P19-1060
DOI:
10.18653/v1/P19-1060
Cite (ACL):
Youmna Farag and Helen Yannakoudakis. 2019. Multi-Task Learning for Coherence Modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 629–639, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Multi-Task Learning for Coherence Modeling (Farag & Yannakoudakis, ACL 2019)
PDF:
https://preview.aclanthology.org/naacl24-info/P19-1060.pdf
Code:
Youmna-H/coherence_mtl
Data:
GCDC