Abstract
Recently, neural language models (LMs) have demonstrated impressive abilities in generating high-quality discourse. While many recent papers have analyzed the syntactic knowledge encoded in LMs, there has been no analysis to date of their inter-sentential, rhetorical knowledge. In this paper, we propose a method that quantitatively evaluates the rhetorical capacities of neural LMs. We examine how well neural LMs understand the rhetoric of discourse by evaluating their ability to encode a set of linguistic features derived from Rhetorical Structure Theory (RST). Our experiments show that BERT-based LMs outperform other Transformer LMs, revealing richer discourse knowledge in their intermediate-layer representations. In addition, GPT-2 and XLNet apparently encode less rhetorical knowledge, and we suggest an explanation drawing on linguistic philosophy. Our method shows an avenue towards quantifying the rhetorical capacities of neural LMs.
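The abstract describes probing intermediate-layer representations for RST-derived features. Below is a minimal sketch of that general probing recipe, not the authors' actual pipeline: it pools one hidden layer of a BERT model into sentence vectors and fits a linear probe. The example sentences, the coarse discourse labels, and the choice of layer 8 are hypothetical placeholders, not the paper's feature set or setup.

```python
# Minimal probing sketch (assumed setup, not the paper's exact method):
# extract a sentence vector from one intermediate BERT layer, then fit a
# linear probe that predicts a discourse label from that vector.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def layer_embedding(text: str, layer: int = 8) -> torch.Tensor:
    """Mean-pool the token vectors of one hidden layer into a sentence vector."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the embedding layer; [layer] is an intermediate layer.
    return outputs.hidden_states[layer].mean(dim=1).squeeze(0)

# Hypothetical toy data: sentences paired with coarse discourse labels.
texts = ["The movie was long, but I enjoyed it.",
         "I enjoyed the movie because the acting was strong."]
labels = ["Contrast", "Cause"]

X = torch.stack([layer_embedding(t) for t in texts]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)
# Higher held-out probe accuracy would suggest the layer encodes more of
# the probed rhetorical information.
```

Repeating this per layer and per model is what lets such a method compare, e.g., BERT-based LMs against GPT-2 and XLNet.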
- Anthology ID: 2020.blackboxnlp-1.3
- Volume: Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
- Month: November
- Year: 2020
- Address: Online
- Editors: Afra Alishahi, Yonatan Belinkov, Grzegorz Chrupała, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
- Venue: BlackboxNLP
- Publisher: Association for Computational Linguistics
- Pages: 16–32
- URL: https://aclanthology.org/2020.blackboxnlp-1.3
- DOI: 10.18653/v1/2020.blackboxnlp-1.3
- Cite (ACL): Zining Zhu, Chuer Pan, Mohamed Abdalla, and Frank Rudzicz. 2020. Examining the rhetorical capacities of neural language models. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 16–32, Online. Association for Computational Linguistics.
- Cite (Informal): Examining the rhetorical capacities of neural language models (Zhu et al., BlackboxNLP 2020)
- PDF: https://aclanthology.org/2020.blackboxnlp-1.3.pdf
- Data: IMDb Movie Reviews