@inproceedings{briggs-2020-generating,
    title = "Generating Quantified Referring Expressions through Attention-Driven Incremental Perception",
    author = "Briggs, Gordon",
    editor = "Davis, Brian  and
      Graham, Yvette  and
      Kelleher, John  and
      Sripada, Yaji",
    booktitle = "Proceedings of the 13th International Conference on Natural Language Generation",
    month = dec,
    year = "2020",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.inlg-1.16/",
    doi = "10.18653/v1/2020.inlg-1.16",
    pages = "107--112",
    abstract = "We model the production of quantified referring expressions (QREs) that identify collections of visual items. A previous approach, called Perceptual Cost Pruning, modeled human QRE production using a preference-based referring expression generation algorithm, first removing facts from the input knowledge base based on a model of perceptual cost. In this paper, we present an alternative model that incrementally constructs a symbolic knowledge base through simulating human visual attention/perception from raw images. We demonstrate that this model produces the same output as Perceptual Cost Pruning. We argue that this is a more extensible approach and a step toward developing a wider range of process-level models of human visual description."
}