X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment

DongJae Shin, HyeonSeok Lim, Inho Won, ChangSu Choi, Minjun Kim, SeungWoo Song, HanGyeol Yoo, SangMin Kim, KyungTae Lim


Abstract
The impressive development of large language models (LLMs) is expanding into the realm of large multimodal models (LMMs), which incorporate data types beyond text. However, the multimodal nature of these models makes the creation of training data significantly expensive, and constructing multilingual data for LMMs poses its own challenges due to language diversity and complexity. In this study, we therefore propose two cost-effective methods to address these problems: (1) vocabulary expansion and pretraining of a multilingual LLM for specific languages, and (2) automatic, elaborate construction of multimodal datasets using GPT-4V. Based on these methods, we constructed a 91K English-Korean-Chinese multilingual, multimodal training dataset. We also developed a bilingual multimodal model that performs strongly in both Korean and English, surpassing existing approaches.
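As a concrete illustration of the first method, below is a minimal sketch of vocabulary expansion using the Hugging Face transformers API. The base model name and the sample Korean tokens are illustrative assumptions only; the actual backbone, token list, and pretraining recipe used for X-LLaVA are described in the paper itself.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative base model; not necessarily the backbone used by X-LLaVA.
    base = "meta-llama/Llama-2-7b-hf"
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # Hypothetical Korean tokens. In practice the expansion set would come
    # from a tokenizer trained on a large Korean corpus and merged with
    # the base vocabulary.
    new_tokens = ["안녕하세요", "감사합니다", "모델"]
    num_added = tokenizer.add_tokens(new_tokens)

    # Grow the embedding matrix so each new token gets a trainable row;
    # those rows are then learned during continued, language-specific
    # pretraining.
    model.resize_token_embeddings(len(tokenizer))
    print(f"added {num_added} tokens; vocabulary size is now {len(tokenizer)}")

After expansion, continued pretraining on the target-language corpus lets the new embedding rows acquire useful representations, which is what makes this approach cheaper than training a tokenizer and model from scratch.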
Anthology ID: 2024.findings-naacl.158
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 2463–2473
URL: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-naacl.158/
DOI: 10.18653/v1/2024.findings-naacl.158
Cite (ACL): DongJae Shin, HyeonSeok Lim, Inho Won, ChangSu Choi, Minjun Kim, SeungWoo Song, HanGyeol Yoo, SangMin Kim, and KyungTae Lim. 2024. X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2463–2473, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment (Shin et al., Findings 2024)
PDF: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-naacl.158.pdf