Commonsense Justification for Action Explanation

Shaohua Yang, Qiaozi Gao, Sari Sadiya, Joyce Chai


Abstract
To enable collaboration and communication between humans and agents, this paper investigates learning to acquire commonsense evidence for action justification. In particular, we have developed an approach based on the generative Conditional Variational Autoencoder (CVAE) that models object relations/attributes of the world as latent variables and jointly learns a performer that predicts actions and an explainer that gathers commonsense evidence to justify the action. Our empirical results show that, compared to a typical attention-based model, the CVAE achieves significantly higher performance in both action prediction and justification. A human subject study further shows that the commonsense evidence gathered by the CVAE can be communicated to humans to achieve significantly higher common ground between humans and agents.
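The performer/explainer architecture the abstract describes can be sketched at a high level: a latent code z (standing in for object relations/attributes) feeds two heads, one predicting the action and one scoring candidate relations as evidence. The sketch below is a toy NumPy illustration under invented dimensions and randomly initialized weights; it is not the paper's actual model, training objective, or feature representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    """Affine layer: x @ w + b."""
    return x @ w + b

class ToyCVAE:
    """Illustrative CVAE-style model: a latent code z (standing in for
    object relations/attributes) feeds a performer head that predicts
    the action and an explainer head that scores candidate relations
    as commonsense evidence. All shapes are hypothetical."""

    def __init__(self, obs_dim=8, latent_dim=4, n_actions=5, n_relations=10):
        d = latent_dim
        # Encoder parameters: observation -> Gaussian posterior over z.
        self.w_mu = rng.normal(size=(obs_dim, d)); self.b_mu = np.zeros(d)
        self.w_lv = rng.normal(size=(obs_dim, d)); self.b_lv = np.zeros(d)
        # Performer head: z -> action logits.
        self.w_act = rng.normal(size=(d, n_actions)); self.b_act = np.zeros(n_actions)
        # Explainer head: z -> evidence scores over candidate relations.
        self.w_rel = rng.normal(size=(d, n_relations)); self.b_rel = np.zeros(n_relations)

    def encode(self, x):
        return linear(x, self.w_mu, self.b_mu), linear(x, self.w_lv, self.b_lv)

    def reparameterize(self, mu, logvar):
        # Standard reparameterization trick: z = mu + sigma * eps.
        return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        action_logits = linear(z, self.w_act, self.b_act)    # performer
        evidence_scores = linear(z, self.w_rel, self.b_rel)  # explainer
        return action_logits, evidence_scores

model = ToyCVAE()
x = rng.normal(size=(2, 8))           # two toy world observations
actions, evidence = model.forward(x)
print(actions.shape, evidence.shape)  # (2, 5) (2, 10)
```

At justification time, the highest-scoring relations from the explainer head would be presented to a human as the evidence supporting the predicted action; the actual paper jointly trains both heads with the CVAE objective.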
Anthology ID:
D18-1283
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2627–2637
URL:
https://aclanthology.org/D18-1283
DOI:
10.18653/v1/D18-1283
Cite (ACL):
Shaohua Yang, Qiaozi Gao, Sari Sadiya, and Joyce Chai. 2018. Commonsense Justification for Action Explanation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2627–2637, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Commonsense Justification for Action Explanation (Yang et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/ml4al-ingestion/D18-1283.pdf
Code
yangshao/Commonsense4Action
Data
Visual Genome