Abstract
We model the production of quantified referring expressions (QREs) that identify collections of visual items. A previous approach, called Perceptual Cost Pruning, modeled human QRE production by first removing facts from the input knowledge base according to a model of perceptual cost and then applying a preference-based referring expression generation algorithm. In this paper, we present an alternative model that incrementally constructs a symbolic knowledge base by simulating human visual attention and perception over raw images. We demonstrate that this model produces the same output as Perceptual Cost Pruning. We argue that this is a more extensible approach and a step toward developing a wider range of process-level models of human visual description.
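To make the general idea concrete, the following is a minimal, illustrative sketch of an attention-driven incremental pipeline: items are visited in a (toy) salience order, each fixation adds symbolic facts to a knowledge base, and a simple quantified description is then realised over the accumulated facts. The names (`attend_incrementally`, `generate_qre`), the salience heuristic, and the quantifier rule are assumptions made here for illustration; they are not the paper's actual model or implementation.

```python
# Illustrative sketch only: simulated attention shifts incrementally add
# symbolic facts to a knowledge base, from which a quantified referring
# expression (QRE) is realised. Not the paper's implementation.

from dataclasses import dataclass, field

@dataclass
class Item:
    ident: int
    color: str
    shape: str
    salience: float  # stand-in for a perceptual salience score

@dataclass
class KnowledgeBase:
    facts: set = field(default_factory=set)

    def add_item(self, item: Item) -> None:
        # Each attended item contributes symbolic facts, e.g. ("color", 3, "blue").
        self.facts.add(("color", item.ident, item.color))
        self.facts.add(("shape", item.ident, item.shape))

def attend_incrementally(scene: list[Item]) -> KnowledgeBase:
    """Visit items in order of (toy) salience, building the KB one fixation at a time."""
    kb = KnowledgeBase()
    for item in sorted(scene, key=lambda i: i.salience, reverse=True):
        kb.add_item(item)
    return kb

def generate_qre(kb: KnowledgeBase, color: str, shape: str) -> str:
    """Realise a simple quantified description over the perceived facts."""
    ids_with_shape = {i for (p, i, v) in kb.facts if p == "shape" and v == shape}
    ids_with_color = {i for (p, i, v) in kb.facts if p == "color" and v == color}
    matching = ids_with_shape & ids_with_color
    if matching == ids_with_shape:
        return f"all of the {shape}s"
    return f"{len(matching)} of the {shape}s"

if __name__ == "__main__":
    scene = [Item(1, "blue", "circle", 0.9),
             Item(2, "blue", "circle", 0.7),
             Item(3, "red", "square", 0.5)]
    kb = attend_incrementally(scene)
    print(generate_qre(kb, "blue", "circle"))  # -> "all of the circles"
```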
- Anthology ID:
- 2020.inlg-1.16
- Volume:
- Proceedings of the 13th International Conference on Natural Language Generation
- Month:
- December
- Year:
- 2020
- Address:
- Dublin, Ireland
- Editors:
- Brian Davis, Yvette Graham, John Kelleher, Yaji Sripada
- Venue:
- INLG
- SIG:
- SIGGEN
- Publisher:
- Association for Computational Linguistics
- Pages:
- 107–112
- URL:
- https://aclanthology.org/2020.inlg-1.16
- DOI:
- 10.18653/v1/2020.inlg-1.16
- Cite (ACL):
- Gordon Briggs. 2020. Generating Quantified Referring Expressions through Attention-Driven Incremental Perception. In Proceedings of the 13th International Conference on Natural Language Generation, pages 107–112, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- Generating Quantified Referring Expressions through Attention-Driven Incremental Perception (Briggs, INLG 2020)
- PDF:
- https://aclanthology.org/2020.inlg-1.16.pdf