Abstract
We compare two models for corpus-based selection of dialogue responses: one based on a cross-language relevance model and one based on a cross-language LSTM model. Each model is tested on multiple corpora, collected from two different types of dialogue source material. Results show that while the LSTM model performs adequately on a very large corpus (millions of utterances), it is outperformed by the cross-language relevance model on a more moderate-sized corpus (tens of thousands of utterances).
- Anthology ID: 2020.lrec-1.92
- Volume: Proceedings of the Twelfth Language Resources and Evaluation Conference
- Month: May
- Year: 2020
- Address: Marseille, France
- Editors: Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
- Venue: LREC
- Publisher: European Language Resources Association
- Pages: 735–742
- Language: English
- URL: https://aclanthology.org/2020.lrec-1.92
- Cite (ACL): Seyed Hossein Alavi, Anton Leuski, and David Traum. 2020. Which Model Should We Use for a Real-World Conversational Dialogue System? a Cross-Language Relevance Model or a Deep Neural Net?. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 735–742, Marseille, France. European Language Resources Association.
- Cite (Informal): Which Model Should We Use for a Real-World Conversational Dialogue System? a Cross-Language Relevance Model or a Deep Neural Net? (Alavi et al., LREC 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-1/2020.lrec-1.92.pdf