An Empirical study to understand the Compositional Prowess of Neural Dialog Models

Vinayshekhar Kumar, Vaibhav Kumar, Mukul Bhutani, Alexander Rudnicky


Abstract
In this work, we examine the problems associated with neural dialog models under the common theme of compositionality. Specifically, we investigate three manifestations of compositionality: (1) Productivity, (2) Substitutivity, and (3) Systematicity. These manifestations shed light on the generalization, syntactic robustness, and semantic capabilities of neural dialog models. We design probing experiments by perturbing the training data to study the above phenomena. We make informative observations based on automated metrics and hope that this work increases research interest in understanding the capacity of these models.
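The abstract describes probing experiments that perturb the training data to test productivity, substitutivity, and systematicity, but does not spell out the perturbations here. As a purely illustrative aid, a minimal substitutivity-style probe (swap synonyms in a dialog context and check whether a trained model's response changes) might look like the sketch below; every name, the synonym table, and the toy model are assumptions made for illustration, not taken from the paper or its code repository.

# Hypothetical sketch of a substitutivity probe for a dialog model.
# Nothing here comes from the paper's released code; the names and the
# synonym table are illustrative assumptions.
from typing import Callable, Dict, List, Tuple

# Small illustrative synonym table (assumption, not from the paper).
SYNONYMS: Dict[str, str] = {
    "movie": "film",
    "happy": "glad",
    "buy": "purchase",
}

def perturb_utterance(utterance: str, synonyms: Dict[str, str]) -> str:
    """Replace each known word with its synonym, leaving syntax intact."""
    tokens = utterance.split()
    swapped = [synonyms.get(tok.lower(), tok) for tok in tokens]
    return " ".join(swapped)

def substitutivity_probe(
    dialogs: List[Tuple[str, str]],
    respond: Callable[[str], str],
) -> float:
    """Fraction of contexts where the model's reply changes after a
    meaning-preserving synonym swap (lower suggests more robustness)."""
    changed = 0
    for context, _gold_reply in dialogs:
        original_reply = respond(context)
        perturbed_reply = respond(perturb_utterance(context, SYNONYMS))
        if original_reply != perturbed_reply:
            changed += 1
    return changed / max(len(dialogs), 1)

if __name__ == "__main__":
    # Toy stand-in for a trained dialog model.
    def toy_model(context: str) -> str:
        return "sure!" if "movie" in context else "okay."

    data = [("do you want to watch a movie", "yes"),
            ("i am happy to buy tickets", "great")]
    print(f"change rate under synonym swap: {substitutivity_probe(data, toy_model):.2f}")

A productivity-style probe could reuse the same structure, replacing the synonym swap with longer or novel recombinations of training fragments and again comparing model responses.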
Anthology ID:
2022.insights-1.21
Volume:
Proceedings of the Third Workshop on Insights from Negative Results in NLP
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
insights
Publisher:
Association for Computational Linguistics
Pages:
154–158
URL:
https://aclanthology.org/2022.insights-1.21
DOI:
10.18653/v1/2022.insights-1.21
Cite (ACL):
Vinayshekhar Kumar, Vaibhav Kumar, Mukul Bhutani, and Alexander Rudnicky. 2022. An Empirical study to understand the Compositional Prowess of Neural Dialog Models. In Proceedings of the Third Workshop on Insights from Negative Results in NLP, pages 154–158, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
An Empirical study to understand the Compositional Prowess of Neural Dialog Models (Kumar et al., insights 2022)
PDF:
https://preview.aclanthology.org/remove-xml-comments/2022.insights-1.21.pdf
Video:
 https://preview.aclanthology.org/remove-xml-comments/2022.insights-1.21.mp4
Code
 vinayshekharcmu/ComposionalityOfDialogModels
Data
DailyDialog
MutualFriends