PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models
Yuan Yao, Qianyu Chen, Ao Zhang, Wei Ji, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun
Abstract
Vision-language pre-training (VLP) has shown impressive performance on a wide range of cross-modal tasks, where VLP models without reliance on object detectors are becoming the mainstream due to their superior computation efficiency and competitive performance. However, removing object detectors also deprives VLP models of the capability for explicit object modeling, which is essential to various position-sensitive vision-language (VL) tasks, such as referring expression comprehension and visual commonsense reasoning. To address this challenge, we introduce PEVL, which enhances the pre-training and prompt tuning of VLP models with explicit object position modeling. Specifically, PEVL reformulates discretized object positions and language in a unified language modeling framework, which facilitates explicit VL alignment during pre-training and also enables flexible prompt tuning for various downstream tasks. We show that PEVL enables detector-free VLP models to achieve state-of-the-art performance on position-sensitive tasks such as referring expression comprehension and phrase grounding, and also improves performance on position-insensitive tasks with grounded inputs. We make the data and code for this paper publicly available at https://github.com/thunlp/PEVL.
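The abstract's core idea, treating discretized object positions as ordinary tokens in a unified language modeling framework, can be sketched as follows. This is a minimal illustration only: the bin count (512), the `<pos_k>` token format, and the helper names are assumptions made for the example, not the paper's exact scheme.

```python
# Illustrative sketch: quantize bounding-box coordinates into a fixed
# number of bins and render them as special position tokens appended to
# the text, so text and positions share one language modeling sequence.
# Bin count and token format are assumptions, not PEVL's exact design.

def box_to_position_tokens(box, image_size, num_bins=512):
    """Map an (x1, y1, x2, y2) box to discrete position tokens."""
    width, height = image_size
    scales = (width, height, width, height)
    tokens = []
    for coord, scale in zip(box, scales):
        # Quantize each coordinate into an integer bin in [0, num_bins - 1].
        bin_id = min(int(coord / scale * num_bins), num_bins - 1)
        tokens.append(f"<pos_{bin_id}>")
    return tokens

def position_augmented_text(phrase, box, image_size):
    """Interleave a phrase with its discretized box so both can be
    consumed (and masked/reconstructed) by a single LM objective."""
    pos_tokens = box_to_position_tokens(box, image_size)
    return f"{phrase} {' '.join(pos_tokens)}"

# Example: grounding "a dog on the grass" to a box in a 640x480 image.
print(position_augmented_text("a dog on the grass", (64, 120, 320, 400), (640, 480)))
# -> a dog on the grass <pos_51> <pos_128> <pos_256> <pos_426>
```

Because positions become tokens, the same masked-prediction machinery can be prompted at tuning time to either fill in position tokens (e.g., for referring expression comprehension) or condition on them (for grounded inputs).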
- Anthology ID:
- 2022.emnlp-main.763
- Volume:
- Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
- Month:
- December
- Year:
- 2022
- Address:
- Abu Dhabi, United Arab Emirates
- Editors:
- Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 11104–11117
- URL:
- https://aclanthology.org/2022.emnlp-main.763
- DOI:
- 10.18653/v1/2022.emnlp-main.763
- Cite (ACL):
- Yuan Yao, Qianyu Chen, Ao Zhang, Wei Ji, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2022. PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11104–11117, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal):
- PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models (Yao et al., EMNLP 2022)
- PDF:
- https://aclanthology.org/2022.emnlp-main.763.pdf