Abstract
The end-to-end learning framework is useful for building dialog systems because of its simplicity in training and its efficiency in model updating. However, current end-to-end approaches consider only users' semantic inputs and under-utilize other user information. We therefore propose to include user sentiment, obtained through multimodal information (acoustic, dialogic, and textual), in the end-to-end learning framework to make systems more user-adaptive and effective. We incorporated user sentiment information in both supervised and reinforcement learning settings. In both settings, adding sentiment information reduced dialog length and improved the task success rate on a bus information search task. This work is the first attempt to incorporate multimodal user information in an adaptive end-to-end dialog system training framework, and it attained state-of-the-art performance.
- Anthology ID: P18-1140
- Volume: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month: July
- Year: 2018
- Address: Melbourne, Australia
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 1509–1519
- URL: https://aclanthology.org/P18-1140
- DOI: 10.18653/v1/P18-1140
- Cite (ACL): Weiyan Shi and Zhou Yu. 2018. Sentiment Adaptive End-to-End Dialog Systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1509–1519, Melbourne, Australia. Association for Computational Linguistics.
- Cite (Informal): Sentiment Adaptive End-to-End Dialog Systems (Shi & Yu, ACL 2018)
- PDF: https://preview.aclanthology.org/ingestion-script-update/P18-1140.pdf
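The abstract's core idea, augmenting the dialog system's input with multimodal sentiment features before action selection, can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the feature names, the toy dot-product policy, and the hand-set action weights are all hypothetical.

```python
# Hypothetical sketch: appending a multimodal sentiment vector
# (acoustic, dialogic, textual channels) to the semantic dialog state
# before a policy picks the next system action.

def sentiment_features(acoustic, dialogic, textual):
    """Concatenate per-channel sentiment scores into one feature vector."""
    return list(acoustic) + list(dialogic) + list(textual)

def augment_state(dialog_state, sentiment):
    """Append sentiment features to the semantic dialog state."""
    return list(dialog_state) + list(sentiment)

def policy(state, action_weights):
    """Toy linear policy: score each action by a dot product with the
    augmented state and return the highest-scoring action name."""
    scores = {action: sum(s * w for s, w in zip(state, weights))
              for action, weights in action_weights.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Semantic state (e.g. slot-filled flags) plus sentiment scores.
    state = augment_state([1, 0, 1],
                          sentiment_features([0.2], [0.5], [0.1]))
    # Hand-set weights for two illustrative actions.
    actions = {"give_result": [1, 1, 1, 0, 0, 0],
               "ask_more":    [0, 0, 0, 1, 1, 1]}
    print(policy(state, actions))  # -> give_result
```

In the supervised setting such a vector would simply extend the model's input features; in the reinforcement learning setting the same augmented state would feed the learned policy instead of this hand-weighted one.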