Incremental Neural Lexical Coherence Modeling

Sungho Jeon, Michael Strube


Abstract
Pretrained language models, neural models pretrained on massive amounts of data, have established the state of the art in a range of NLP tasks. They are based on a modern machine-learning technique, the Transformer, which relates all items in a sequence simultaneously to capture their semantic relations. However, this differs from how humans read: humans read sentences one-by-one, incrementally. Can neural models benefit from interpreting texts incrementally as humans do? We investigate this question in coherence modeling. We propose a coherence model which interprets sentences incrementally to capture lexical relations between them. We compare the state of the art in each task, simple neural models relying on a pretrained language model, and our model on two downstream tasks. Our findings suggest that interpreting texts incrementally, as humans do, could be useful for designing more advanced models.
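The sketch below is a minimal, hypothetical illustration of the incremental idea described in the abstract, not the authors' implementation (see the linked code repository for that). It assumes sentence vectors are already available from some encoder and uses cosine similarity over a running average as a simple stand-in for a learned lexical relatedness measure; the function names and scoring scheme are illustrative only.

```python
# Hypothetical sketch: score a document by reading sentence representations
# one by one and relating each new sentence to what has been read so far.
import numpy as np


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity, a simple stand-in for a lexical relatedness measure."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom > 0 else 0.0


def incremental_coherence(sentence_vecs: list[np.ndarray]) -> float:
    """Toy coherence score: relate each incoming sentence to a running
    memory (here a running average) of the sentences seen so far."""
    if len(sentence_vecs) < 2:
        return 1.0  # a single sentence is trivially coherent under this toy score
    memory = sentence_vecs[0].copy()
    scores = []
    for i, vec in enumerate(sentence_vecs[1:], start=1):
        scores.append(cosine(memory, vec))      # relate new sentence to history
        memory = (memory * i + vec) / (i + 1)   # incrementally update the memory
    return float(np.mean(scores))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "sentence embeddings"; in practice these would come from a
    # pretrained encoder (e.g., averaged contextual word embeddings).
    doc = [rng.normal(size=50) for _ in range(5)]
    print(f"toy coherence score: {incremental_coherence(doc):.3f}")
```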
Anthology ID:
2020.coling-main.594
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
6752–6758
URL:
https://aclanthology.org/2020.coling-main.594
DOI:
10.18653/v1/2020.coling-main.594
Bibkey:
Cite (ACL):
Sungho Jeon and Michael Strube. 2020. Incremental Neural Lexical Coherence Modeling. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6752–6758, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Incremental Neural Lexical Coherence Modeling (Jeon & Strube, COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.594.pdf
Code
sdeva14/coling20-inc-lexi-cohe
Data
GCDC