Generating Quantified Descriptions of Abstract Visual Scenes

Guanyi Chen, Kees van Deemter, Chenghua Lin

Abstract
Quantified expressions have always taken up a central position in formal theories of meaning and language use. Yet quantified expressions have so far attracted far less attention from the Natural Language Generation community than, for example, referring expressions. In an attempt to start redressing the balance, we investigate a recently developed corpus in which quantified expressions play a crucial role; the corpus is the result of a carefully controlled elicitation experiment, in which human participants were asked to describe visually presented scenes. Informed by an analysis of this corpus, we propose algorithms that produce computer-generated descriptions of a wider class of visual scenes, and we evaluate the descriptions generated by these algorithms in terms of their correctness, completeness, and human-likeness. We discuss what this exercise can teach us about the nature of quantification and about the challenges posed by the generation of quantified expressions.
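To make the task concrete: a quantified description maps a visually presented scene to a statement such as "most of the shapes are blue". The Python sketch below is purely illustrative and is not the authors' algorithm (which is informed by their corpus analysis and evaluated in the paper); the function name and thresholds are invented for this example.

    def quantifier(matching: int, total: int) -> str:
        """Map a count of matching objects in a scene to a coarse
        English quantifier. Illustrative thresholds only; not the
        corpus-informed algorithms proposed in the paper."""
        if total == 0:
            raise ValueError("scene contains no objects")
        if matching == 0:
            return "none of the"
        if matching == total:
            return "all of the"
        if matching > total / 2:
            return "most of the"
        return "some of the"

    # Example: a scene with 10 shapes, 7 of which are blue.
    print(quantifier(7, 10) + " shapes are blue")  # most of the shapes are blue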
Anthology ID: W19-8667
Volume: Proceedings of the 12th International Conference on Natural Language Generation
Month: October–November
Year: 2019
Address: Tokyo, Japan
Venue: INLG
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 529–539
URL: https://aclanthology.org/W19-8667
DOI: 10.18653/v1/W19-8667
Cite (ACL): Guanyi Chen, Kees van Deemter, and Chenghua Lin. 2019. Generating Quantified Descriptions of Abstract Visual Scenes. In Proceedings of the 12th International Conference on Natural Language Generation, pages 529–539, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal): Generating Quantified Descriptions of Abstract Visual Scenes (Chen et al., INLG 2019)
PDF: https://preview.aclanthology.org/ingestion-script-update/W19-8667.pdf
Supplementary attachment: W19-8667.Supplementary_Attachment.pdf