Translation vs. Dialogue: A Comparative Analysis of Sequence-to-Sequence Modeling
Wenpeng Hu, Ran Le, Bing Liu, Jinwen Ma, Dongyan Zhao, Rui Yan
Abstract
Understanding neural models is a major topic of interest in the deep learning community. In this paper, we propose to interpret a general neural model comparatively. Specifically, we study the sequence-to-sequence (Seq2Seq) model in the contexts of two mainstream NLP tasks, machine translation and dialogue response generation, since both tasks use the Seq2Seq model. We investigate how the two tasks differ and how their differences lead to major differences in the behavior of the resulting translation and dialogue generation systems. This study allows us to make several interesting observations and gain valuable insights, which can be used to help develop better translation and dialogue generation models. To our knowledge, no such comparative study has been done so far.
- Anthology ID: 2020.coling-main.363
- Volume: Proceedings of the 28th International Conference on Computational Linguistics
- Month: December
- Year: 2020
- Address: Barcelona, Spain (Online)
- Venue: COLING
- Publisher: International Committee on Computational Linguistics
- Pages: 4111–4122
- URL: https://aclanthology.org/2020.coling-main.363
- DOI: 10.18653/v1/2020.coling-main.363
- Cite (ACL): Wenpeng Hu, Ran Le, Bing Liu, Jinwen Ma, Dongyan Zhao, and Rui Yan. 2020. Translation vs. Dialogue: A Comparative Analysis of Sequence-to-Sequence Modeling. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4111–4122, Barcelona, Spain (Online). International Committee on Computational Linguistics.
- Cite (Informal): Translation vs. Dialogue: A Comparative Analysis of Sequence-to-Sequence Modeling (Hu et al., COLING 2020)
- PDF: https://preview.aclanthology.org/remove-xml-comments/2020.coling-main.363.pdf