Abstract
Current spoken dialogue systems initiate their turns after a long period of silence (700–1000 ms), which leads to little real-time feedback, sluggish responses, and an overall stilted conversational flow. Humans typically respond within 200 ms, and successfully predicting initiation points in advance would allow spoken dialogue agents to do the same. In this work, we predict the lead time to initiation using prosodic features from a pre-trained speech representation model (wav2vec 1.0) operating on user audio and word features from a pre-trained language model (GPT-2) operating on incremental transcriptions. To evaluate errors, we propose two metrics defined with respect to the predicted and true lead times. We train and evaluate the models on the Switchboard Corpus and find that our method outperforms features from prior work on both metrics and vastly outperforms the common approach of waiting for 700 ms of silence.
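To make the contrast concrete, here is a minimal sketch (in Python) of the two turn-initiation policies the abstract compares: the common baseline that waits for a fixed 700 ms of user silence, and an agent that acts on a model's predicted lead time. The 10 ms frame hop, the function names, and the commit rule are illustrative assumptions, not the authors' released implementation (their code is linked under Code below).

```python
import numpy as np

FRAME_MS = 10               # assumed frame hop for VAD / feature frames
SILENCE_THRESHOLD_MS = 700  # the fixed-silence baseline from the abstract

def silence_baseline(is_speech: np.ndarray):
    """Baseline policy: initiate at the first frame where the user
    has been continuously silent for >= 700 ms."""
    needed = SILENCE_THRESHOLD_MS // FRAME_MS
    run = 0
    for t, speaking in enumerate(is_speech):
        run = 0 if speaking else run + 1
        if run >= needed:
            return t
    return None  # the user never yields the turn in this window

def predicted_initiation(lead_time_ms: np.ndarray):
    """Hypothetical predictive policy: at every frame a model emits the
    predicted time remaining until the agent should initiate; the agent
    commits once that prediction falls within one frame."""
    for t, lead in enumerate(lead_time_ms):
        if lead <= FRAME_MS:
            return t
    return None

# Toy comparison: the user speaks for 1 s, then falls silent.
vad = np.array([True] * 100 + [False] * 100)
print(silence_baseline(vad))  # frame 169, i.e. 700 ms into the silence
```

Because the predictive policy can commit before the silence threshold elapses, and even before the user finishes speaking, it can approach the ~200 ms human response time that a fixed 700 ms wait cannot.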
- Anthology ID:
- 2022.sigdial-1.22
- Volume:
- Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
- Month:
- September
- Year:
- 2022
- Address:
- Edinburgh, UK
- Venue:
- SIGDIAL
- SIG:
- SIGDIAL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 217–224
- URL:
- https://aclanthology.org/2022.sigdial-1.22
- Cite (ACL):
- Siyan Li, Ashwin Paranjape, and Christopher Manning. 2022. When can I Speak? Predicting initiation points for spoken dialogue agents. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 217–224, Edinburgh, UK. Association for Computational Linguistics.
- Cite (Informal):
- When can I Speak? Predicting initiation points for spoken dialogue agents (Li et al., SIGDIAL 2022)
- PDF:
- https://aclanthology.org/2022.sigdial-1.22.pdf
- Code:
- siyan-sylvia-li/icarus_final