Coherent or Not? Stressing a Neural Language Model for Discourse Coherence in Multiple Languages
Dominique Brunato, Felice Dell’Orletta, Irene Dini, Andrea Amelio Ravelli
Abstract
In this study, we investigate the capability of a Neural Language Model (NLM) to distinguish between coherent and incoherent text, where the latter has been artificially created to gradually undermine local coherence within text. While previous research on coherence assessment using NLMs has primarily focused on English, we extend our investigation to multiple languages. We employ a consistent evaluation framework to compare the performance of monolingual and multilingual models in both in-domain and out-domain settings. Additionally, we explore the model’s performance in a cross-language scenario.
- Anthology ID: 2023.findings-acl.680
- Volume: Findings of the Association for Computational Linguistics: ACL 2023
- Month: July
- Year: 2023
- Address: Toronto, Canada
- Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 10690–10700
- URL: https://aclanthology.org/2023.findings-acl.680
- DOI: 10.18653/v1/2023.findings-acl.680
- Cite (ACL): Dominique Brunato, Felice Dell’Orletta, Irene Dini, and Andrea Amelio Ravelli. 2023. Coherent or Not? Stressing a Neural Language Model for Discourse Coherence in Multiple Languages. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10690–10700, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal): Coherent or Not? Stressing a Neural Language Model for Discourse Coherence in Multiple Languages (Brunato et al., Findings 2023)
- PDF: https://preview.aclanthology.org/cschoel_rss_and_blog/2023.findings-acl.680.pdf