Yongfei Liu


2024

InfiMM: Advancing Multimodal Understanding with an Open-Sourced Visual Language Model
Haogeng Liu | Quanzeng You | Yiqi Wang | Xiaotian Han | Bohan Zhai | Yongfei Liu | Wentao Chen | Yiren Jian | Yunzhe Tao | Jianbo Yuan | Ran He | Hongxia Yang
Findings of the Association for Computational Linguistics: ACL 2024

In this work, we present InfiMM, an advanced Multimodal Large Language Model adapted to intricate vision-language tasks. Inspired by the Flamingo architecture, InfiMM distinguishes itself through its use of large-scale training data, comprehensive training strategies, and diverse large language models, preserving Flamingo’s foundational strengths while adding new capabilities. Empirical evaluations across a variety of benchmarks underscore InfiMM’s strong multimodal understanding. The code can be found at: https://anonymous.4open.science/r/infimm-zephyr-F60C/.
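The abstract references the Flamingo architecture as InfiMM’s starting point. Below is a minimal, illustrative sketch (not the authors’ released code) of the Flamingo-style gated cross-attention block such models interleave with frozen language-model layers; the class name, dimensions, and exact layer layout are assumptions for illustration, while the zero-initialised tanh gates follow the original Flamingo design so training starts from the unmodified LLM.

```python
# Illustrative sketch of a Flamingo-style gated cross-attention block.
# Names and sizes are assumptions; InfiMM's actual implementation may differ.
import torch
import torch.nn as nn


class GatedCrossAttentionBlock(nn.Module):
    def __init__(self, d_model: int = 1024, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        # tanh gates start at zero, so the block is initially an identity mapping
        # and the frozen language model's behaviour is preserved at the start of training.
        self.attn_gate = nn.Parameter(torch.zeros(1))
        self.ffn_gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_tokens: torch.Tensor, visual_tokens: torch.Tensor) -> torch.Tensor:
        # Text queries attend to visual keys/values produced by the vision encoder.
        attended, _ = self.attn(text_tokens, visual_tokens, visual_tokens)
        x = text_tokens + torch.tanh(self.attn_gate) * attended
        return x + torch.tanh(self.ffn_gate) * self.ffn(x)


# Example: 4 text tokens attending to 16 visual tokens, batch size 2.
block = GatedCrossAttentionBlock()
out = block(torch.randn(2, 4, 1024), torch.randn(2, 16, 1024))
```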

2022

KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation
Yongfei Liu | Chenfei Wu | Shao-Yen Tseng | Vasudev Lal | Xuming He | Nan Duan
Findings of the Association for Computational Linguistics: NAACL 2022

Self-supervised vision-and-language pretraining (VLP) aims to learn transferable multi-modal representations from large-scale image-text data and to achieve strong performance on a broad range of vision-language tasks after finetuning. Previous mainstream VLP approaches typically adopt a two-step strategy that relies on external object detectors to encode images in a multi-modal Transformer framework, which suffers from a restrictive object concept space, limited image context, and inefficient computation. In this paper, we propose an object-aware end-to-end VLP framework that directly feeds image grid features from CNNs into the Transformer and learns the multi-modal representations jointly. More importantly, we propose to perform object knowledge distillation to facilitate learning cross-modal alignment at different semantic levels. To achieve this, we design two novel pretext tasks that take object features and their semantic labels from external detectors as supervision: (1) an object-guided masked vision modeling task that enforces object-aware representation learning in the multi-modal Transformer; (2) a phrase-region alignment task that improves cross-modal alignment by exploiting the similarities between noun phrases and object labels in the linguistic space. Extensive experiments on a wide range of vision-language tasks demonstrate the efficacy of the proposed framework, and we achieve competitive or superior performance over existing pretraining strategies.
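A minimal sketch of the phrase-region alignment idea described above, under the assumption of a simple soft-target formulation: linguistic similarities between noun phrases and detector object labels act as soft targets supervising the cross-modal similarities between phrase and region features. The function name, embedding sizes, and temperature are illustrative assumptions, not the paper’s implementation.

```python
# Illustrative sketch of phrase-region alignment with soft linguistic targets.
# Not the authors' code; shapes and the temperature value are assumptions.
import torch
import torch.nn.functional as F


def phrase_region_alignment_loss(
    phrase_emb: torch.Tensor,       # (P, d) noun-phrase features from the model
    region_emb: torch.Tensor,       # (R, d) image-region features from the model
    phrase_text_emb: torch.Tensor,  # (P, d) phrase embeddings in the linguistic space
    label_text_emb: torch.Tensor,   # (R, d) object-label embeddings in the linguistic space
    tau: float = 0.1,
) -> torch.Tensor:
    # Soft targets: similarity between noun phrases and object labels in the linguistic space.
    with torch.no_grad():
        target = F.softmax(
            F.normalize(phrase_text_emb, dim=-1) @ F.normalize(label_text_emb, dim=-1).T / tau,
            dim=-1,
        )
    # Predictions: cross-modal similarity between phrase features and region features.
    logits = F.normalize(phrase_emb, dim=-1) @ F.normalize(region_emb, dim=-1).T / tau
    # Cross-entropy between the soft linguistic targets and the predicted distribution.
    return -(target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()


# Example with random features: 5 noun phrases, 10 detected regions, d = 256.
loss = phrase_region_alignment_loss(
    torch.randn(5, 256), torch.randn(10, 256), torch.randn(5, 256), torch.randn(10, 256)
)
```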

2021

GEM: A General Evaluation Benchmark for Multimodal Tasks
Lin Su | Nan Duan | Edward Cui | Lei Ji | Chenfei Wu | Huaishao Luo | Yongfei Liu | Ming Zhong | Taroon Bharti | Arun Sacheti
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021