Abstract
In recent years, several corpora have been developed for vision and language tasks. We argue that there is still significant room for corpora that increase the complexity of both the visual and linguistic domains and that capture different varieties of perceptual and conversational contexts. Working with two corpora that approach this goal, we present a linguistic perspective on some of the challenges in creating and extending resources combining language and vision, while preserving continuity with existing best practices in the area of coreference annotation.
- Anthology ID:
- 2021.alvr-1.7
- Volume:
- Proceedings of the Second Workshop on Advances in Language and Vision Research
- Month:
- June
- Year:
- 2021
- Address:
- Online
- Editors:
- Xin Wang, Ronghang Hu, Drew Hudson, Tsu-Jui Fu, Marcus Rohrbach, Daniel Fried
- Venue:
- ALVR
- Publisher:
- Association for Computational Linguistics
- Pages:
- 39–44
- URL:
- https://aclanthology.org/2021.alvr-1.7
- DOI:
- 10.18653/v1/2021.alvr-1.7
- Cite (ACL):
- Sharid Loáiciga, Simon Dobnik, and David Schlangen. 2021. Reference and coreference in situated dialogue. In Proceedings of the Second Workshop on Advances in Language and Vision Research, pages 39–44, Online. Association for Computational Linguistics.
- Cite (Informal):
- Reference and coreference in situated dialogue (Loáiciga et al., ALVR 2021)
- PDF:
- https://aclanthology.org/2021.alvr-1.7.pdf