@inproceedings{schuz-zarriess-2023-keeping,
    title = "Keeping an Eye on Context: Attention Allocation over Input Partitions in Referring Expression Generation",
    author = {Sch{\"u}z, Simeon  and
      Zarrie{\ss}, Sina},
    editor = "Gatt, Albert  and
      Gardent, Claire  and
      Cripwell, Liam  and
      Belz, Anya  and
      Borg, Claudia  and
      Erdem, Aykut  and
      Erdem, Erkut",
    booktitle = "Proceedings of the Workshop on Multimodal, Multilingual Natural Language Generation and Multilingual WebNLG Challenge (MM-NLG 2023)",
    month = sep,
    year = "2023",
    address = "Prague, Czech Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/landing_page/2023.mmnlg-1.3/",
    pages = "20--27",
    abstract = "In Referring Expression Generation, model inputs are often composed of different representations, including the visual properties of the intended referent, its relative position and size, and the visual context. Yet, the extent to which this information influences the generation process of black-box neural models is largely unclear. We investigate the relative weighting of target, location, and context information in the attention components of a Transformer-based generation model. Our results show a general target bias, which, however, depends on the content of the generated expressions, pointing to interesting directions for future research."
}