从多模态预训练到多模态大模型:架构、训练、评测、趋势概览(From Multi-Modal Pre-Training to Multi-Modal Large Language Models: An Overview of Architectures, Training,)
Zejun Li (李泽君), Jiwen Zhang (张霁雯), Ye Wang (王晔), Mengfei Du (杜梦飞), Qingwen Liu (刘晴雯), Dianyi Wang (王殿仪), Binhao Wu (吴斌浩), Ruipu Luo (罗瑞璞), Xuanjing Huang (黄萱菁), Zhongyu Wei (魏忠钰)
Abstract
Multimedia information has played a vital role throughout the development of human society, and building intelligent systems capable of processing multi-modal information is a necessary step on the path toward artificial general intelligence. With the advance of pre-training techniques and the demand for general-purpose models, multi-modal research has shifted from early task-specific methods toward building unified, general-purpose multi-modal foundation models. Early explorations of unified multi-modal models were inspired by BERT: from the perspective of representation learning, they built multi-modal pre-trained models that provide effective initializations for a variety of downstream tasks. Although effective, these methods remain limited in generality by the pre-train-then-fine-tune paradigm and cannot be applied more broadly or efficiently. In recent years, with the development of large language models, multi-modal large language models built on LLM backbones have shown great potential: such models possess strong perception, interaction, and reasoning capabilities, generalize effectively to diverse scenarios, and offer a practical route toward general-purpose AI systems in the new era. Starting from the perspective of building unified multi-modal models, this paper reviews and organizes the development of related work, from multi-modal pre-training to multi-modal large language models, covering the corresponding architectures, training and evaluation methods, and development trends, and providing readers with a comprehensive overview.
- Anthology ID:
- 2024.ccl-2.1
- Volume:
- Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 2: Frontier Forum)
- Month:
- July
- Year:
- 2024
- Address:
- Taiyuan, China
- Editor:
- Zhao Xin
- Venue:
- CCL
- Publisher:
- Chinese Information Processing Society of China
- Pages:
- 1–33
- Language:
- Chinese
- URL:
- https://preview.aclanthology.org/gwc-25-ingestion/2024.ccl-2.1/
- Cite (ACL):
- Zejun Li, Jiwen Zhang, Ye Wang, Mengfei Du, Qingwen Liu, Dianyi Wang, Binhao Wu, Ruipu Luo, Xuanjing Huang, and Zhongyu Wei. 2024. 从多模态预训练到多模态大模型:架构、训练、评测、趋势概览(From Multi-Modal Pre-Training to Multi-Modal Large Language Models: An Overview of Architectures, Training,). In Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 2: Frontier Forum), pages 1–33, Taiyuan, China. Chinese Information Processing Society of China.
- Cite (Informal):
- 从多模态预训练到多模态大模型:架构、训练、评测、趋势概览(From Multi-Modal Pre-Training to Multi-Modal Large Language Models: An Overview of Architectures, Training,) (Li et al., CCL 2024)
- PDF:
- https://preview.aclanthology.org/gwc-25-ingestion/2024.ccl-2.1.pdf