Abstract
Current referring expression generation systems mostly deliver their output as one-shot, written expressions. We present ongoing work on the incremental generation of spoken expressions that refer to objects in real-world images. Our approach extends previous work that uses the words-as-classifiers model for generation. We implement the generator in an incremental dialogue processing framework so that we can exploit an existing interface to incremental text-to-speech synthesis. The resulting system generates and synthesizes referring expressions while continuously observing non-verbal user reactions.
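The abstract compresses two mechanisms that are easier to see in pseudocode: words-as-classifiers (WAC) scoring, where each vocabulary word has its own binary classifier over visual features of a candidate image region, and incremental delivery, where chosen words are handed to speech synthesis one installment at a time while user feedback is monitored. The sketch below is a minimal illustration, not the authors' implementation; all names (`generate_installments`, `tts_say`, `user_has_identified`) are hypothetical.

```python
from typing import Callable, Dict, Iterator, Sequence

Features = Sequence[float]
Classifier = Callable[[Features], float]  # word classifier: features -> fit score in [0, 1]


def generate_installments(
    region_features: Features,
    classifiers: Dict[str, Classifier],
    max_words: int = 5,
    threshold: float = 0.5,
) -> Iterator[str]:
    """Yield words for a referring expression, best-fitting word first (WAC-style)."""
    ranked = sorted(
        classifiers.items(),
        key=lambda item: item[1](region_features),
        reverse=True,
    )
    for word, clf in ranked[:max_words]:
        if clf(region_features) >= threshold:
            yield word


def refer_incrementally(
    region_features: Features,
    classifiers: Dict[str, Classifier],
    tts_say: Callable[[str], None],          # hypothetical incremental-TTS interface
    user_has_identified: Callable[[], bool], # hypothetical non-verbal feedback signal
) -> None:
    """Speak the expression in installments; stop early once the reference succeeds."""
    for word in generate_installments(region_features, classifiers):
        tts_say(word)                # synthesize this installment immediately
        if user_has_identified():    # e.g., gaze or click on the target object
            break                    # no need to extend the expression further
```

The design point this sketch illustrates is that generation and synthesis are interleaved rather than pipelined: because each word is scored and spoken individually, the system can cut the expression short, or keep extending it, in reaction to the listener.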
- Anthology ID: W17-3509
- Volume: Proceedings of the 10th International Conference on Natural Language Generation
- Month: September
- Year: 2017
- Address: Santiago de Compostela, Spain
- Editors: Jose M. Alonso, Alberto Bugarín, Ehud Reiter
- Venue: INLG
- SIG: SIGGEN
- Publisher: Association for Computational Linguistics
- Pages: 72–73
- URL: https://aclanthology.org/W17-3509
- DOI: 10.18653/v1/W17-3509
- Cite (ACL): Sina Zarrieß, M. Soledad López Gambino, and David Schlangen. 2017. Refer-iTTS: A System for Referring in Spoken Installments to Objects in Real-World Images. In Proceedings of the 10th International Conference on Natural Language Generation, pages 72–73, Santiago de Compostela, Spain. Association for Computational Linguistics.
- Cite (Informal): Refer-iTTS: A System for Referring in Spoken Installments to Objects in Real-World Images (Zarrieß et al., INLG 2017)
- PDF: https://aclanthology.org/W17-3509.pdf