ViStruct: Visual Structural Knowledge Extraction via Curriculum Guided Code-Vision Representation

Yangyi Chen, Xingyao Wang, Manling Li, Derek Hoiem, Heng Ji

Abstract
State-of-the-art vision-language models (VLMs) still perform poorly on structural knowledge extraction, such as identifying relations between objects. In this work, we present ViStruct, a training framework for teaching VLMs to extract visual structural knowledge effectively. It incorporates two novel designs. First, we leverage the inherent structure of programming languages to depict visual structural information. This approach enables explicit and consistent representation of visual structures at multiple granularities, such as concepts, relations, and events, in a well-organized format. Second, we introduce curriculum-based learning so that VLMs progressively comprehend visual structures, from fundamental visual concepts to intricate event structures. Our intuition is that lower-level knowledge may support the understanding of complex visual structures. Furthermore, we compile and release a collection of datasets tailored for visual structural knowledge extraction. We adopt a weakly-supervised approach to generate visual event structures directly from captions for ViStruct training, capitalizing on abundant image-caption pairs from the web. In experiments, we evaluate ViStruct on visual structure prediction tasks, demonstrating its effectiveness in improving the understanding of visual structures. The code will be made public to facilitate future research.
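To make the code-vision idea concrete, the minimal Python sketch below shows one plausible way visual structures at the three granularities named in the abstract (concepts, relations, events) could be expressed as code. The class names, fields, and example scene are illustrative assumptions for exposition, not the paper's actual schema.

# Hypothetical sketch: visual structures expressed as code, in the spirit of
# the paper's code-vision representation. All names below are assumptions.
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple


@dataclass
class Concept:
    """A grounded visual concept, e.g., an object in the image."""
    name: str
    bbox: Optional[Tuple[int, int, int, int]] = None  # optional bounding box


@dataclass
class Relation:
    """A directed relation between two concepts (e.g., spatial or semantic)."""
    predicate: str
    subject: Concept
    obj: Concept


@dataclass
class Event:
    """A visual event: a trigger with role-labeled concept arguments."""
    trigger: str
    arguments: Dict[str, Concept] = field(default_factory=dict)


# Example scene: "a man throws a frisbee to a dog".
man, frisbee, dog = Concept("man"), Concept("frisbee"), Concept("dog")
scene = {
    "concepts": [man, frisbee, dog],
    "relations": [Relation("facing", subject=man, obj=dog)],
    "events": [Event("throw", {"agent": man, "theme": frisbee, "recipient": dog})],
}
print(scene["events"][0])

Representing the structure this way makes each granularity explicit and machine-checkable, which is the property the abstract attributes to code-based representations.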
Anthology ID:
2023.emnlp-main.824
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
13342–13357
URL:
https://aclanthology.org/2023.emnlp-main.824
DOI:
10.18653/v1/2023.emnlp-main.824
Bibkey:
Cite (ACL):
Yangyi Chen, Xingyao Wang, Manling Li, Derek Hoiem, and Heng Ji. 2023. ViStruct: Visual Structural Knowledge Extraction via Curriculum Guided Code-Vision Representation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13342–13357, Singapore. Association for Computational Linguistics.
Cite (Informal):
ViStruct: Visual Structural Knowledge Extraction via Curriculum Guided Code-Vision Representation (Chen et al., EMNLP 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2023.emnlp-main.824.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-5/2023.emnlp-main.824.mp4