Evaluating Document Coherence Modeling
Aili Shen, Meladel Mistica, Bahar Salehi, Hang Li, Timothy Baldwin, Jianzhong Qi
Abstract
While pretrained language models (LMs) have driven impressive gains on morpho-syntactic and semantic tasks, their ability to model discourse and pragmatic phenomena is less clear. As a step towards a better understanding of their discourse modeling capabilities, we propose a sentence intrusion detection task. We examine the performance of a broad range of pretrained LMs on this detection task for English. Lacking a dataset for the task, we introduce INSteD, a novel intruder sentence detection dataset containing 170,000+ documents constructed from English Wikipedia and CNN news articles. Our experiments show that pretrained LMs perform impressively in in-domain evaluation, but experience a substantial drop in the cross-domain setting, indicating limited generalization capacity. Further results over a novel linguistic probe dataset show that there is substantial room for improvement, especially in the cross-domain setting.
- Anthology ID: 2021.tacl-1.38
- Volume: Transactions of the Association for Computational Linguistics, Volume 9
- Year: 2021
- Address: Cambridge, MA
- Venue: TACL
- Publisher: MIT Press
- Pages: 621–640
- URL: https://aclanthology.org/2021.tacl-1.38
- DOI: 10.1162/tacl_a_00388
- Cite (ACL): Aili Shen, Meladel Mistica, Bahar Salehi, Hang Li, Timothy Baldwin, and Jianzhong Qi. 2021. Evaluating Document Coherence Modeling. Transactions of the Association for Computational Linguistics, 9:621–640.
- Cite (Informal): Evaluating Document Coherence Modeling (Shen et al., TACL 2021)
- PDF: https://preview.aclanthology.org/ingestion-script-update/2021.tacl-1.38.pdf