Abstract
In human cognition, world knowledge supports the perception of object colours: knowing that trees are typically green helps to perceive their colour in certain contexts. We go beyond previous studies of colour terms, which relied on isolated colour swatches, and study the visual grounding of colour terms in realistic objects. Our models integrate the processing of visual information and object-specific knowledge via hard-coded (late) or learned (early) fusion. We find that both models consistently outperform a bottom-up baseline that predicts colour terms solely from visual inputs, but show interesting differences when predicting atypical colours of so-called colour-diagnostic objects. Our models also achieve promising results when tested on new object categories not seen during training.
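The abstract contrasts two ways of combining visual features with object-category knowledge. As an illustration only (not the authors' implementation), the PyTorch sketch below shows one plausible reading of the two strategies: early fusion concatenates the visual features and a knowledge embedding before a jointly learned classifier, while late fusion interpolates separately predicted colour-term distributions with a fixed weight. All module names, feature dimensions, and the weight `alpha` are assumptions.

```python
# Minimal sketch (not the paper's code) of early vs. late fusion of
# visual features and object-knowledge embeddings for colour-term
# prediction. Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

N_COLOUR_TERMS = 11  # e.g. a basic colour-term inventory; illustrative

class EarlyFusion(nn.Module):
    """Learned (early) fusion: concatenate visual and knowledge features
    before a joint classifier, so their interaction is learned."""
    def __init__(self, vis_dim=2048, know_dim=300, hidden=256):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(vis_dim + know_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, N_COLOUR_TERMS),
        )

    def forward(self, vis_feats, know_emb):
        return self.classifier(torch.cat([vis_feats, know_emb], dim=-1))

class LateFusion(nn.Module):
    """Hard-coded (late) fusion: combine separate visual and
    knowledge-based predictions with a fixed interpolation weight."""
    def __init__(self, vis_dim=2048, know_dim=300, alpha=0.5):
        super().__init__()
        self.vis_head = nn.Linear(vis_dim, N_COLOUR_TERMS)
        self.know_head = nn.Linear(know_dim, N_COLOUR_TERMS)
        self.alpha = alpha  # fixed, not learned

    def forward(self, vis_feats, know_emb):
        p_vis = torch.softmax(self.vis_head(vis_feats), dim=-1)
        p_know = torch.softmax(self.know_head(know_emb), dim=-1)
        return self.alpha * p_vis + (1 - self.alpha) * p_know
```

Under this reading, the bottom-up baseline mentioned in the abstract would correspond to the visual head alone, with no knowledge input.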
- Anthology ID: 2020.acl-main.584
- Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
- Month: July
- Year: 2020
- Address: Online
- Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 6536–6542
- URL: https://aclanthology.org/2020.acl-main.584
- DOI: 10.18653/v1/2020.acl-main.584
- Cite (ACL): Simeon Schüz and Sina Zarrieß. 2020. Knowledge Supports Visual Language Grounding: A Case Study on Colour Terms. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6536–6542, Online. Association for Computational Linguistics.
- Cite (Informal): Knowledge Supports Visual Language Grounding: A Case Study on Colour Terms (Schüz & Zarrieß, ACL 2020)
- PDF: https://preview.aclanthology.org/ingest-2024-clasp/2020.acl-main.584.pdf