HyeonSeok Lim
2024
Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean
ChangSu Choi | Yongbin Jeong | Seoyoon Park | Inho Won | HyeonSeok Lim | SangMin Kim | Yejee Kang | Chanhyuk Yoon | Jaewan Park | Yiseul Lee | HyeJin Lee | Younggyun Hahm | Hansaem Kim | KyungTae Lim
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Large language models (LLMs) use pretraining to predict the subsequent word; however, their expansion requires significant computing resources. Numerous big tech companies and research institutes have developed multilingual LLMs (MLLMs) to meet current demands, often overlooking less-resourced languages (LRLs). This study proposed three strategies to enhance the performance of LRLs based on publicly available MLLMs. First, the MLLM vocabulary was expanded with LRL tokens to enhance expressiveness. Second, bilingual data were used for pretraining to align the high- and less-resourced languages. Third, a high-quality, small-scale instruction dataset was constructed and instruction tuning was performed to augment the LRL. The experiments employed the Llama2 model with Korean as the LRL, and the result was quantitatively evaluated against other developed LLMs across eight tasks. Furthermore, a qualitative assessment was performed based on human evaluation and GPT4. Experimental results showed that our proposed Bllossom model exhibited superior performance in qualitative analyses compared to previously proposed Korean monolingual models.
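The vocabulary-expansion step described in the abstract can be illustrated with a minimal sketch, assuming a Llama-2 checkpoint from the Hugging Face Hub (gated; requires accepted access) and a short, hypothetical list of Korean subword tokens. The token inventory, checkpoint name, and training setup here are illustrative assumptions, not the paper's actual Bllossom configuration.

```python
# Sketch of expanding an MLLM's vocabulary with LRL (Korean) tokens before
# continued bilingual pretraining. Token list and checkpoint are assumptions.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base MLLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical Korean subwords; a real expansion would come from a tokenizer
# trained on large Korean corpora.
new_korean_tokens = ["안녕하세요", "대한민국", "언어모델"]
num_added = tokenizer.add_tokens(new_korean_tokens)

# Grow the embedding matrix so the new token ids get trainable vectors,
# then continue pretraining on aligned Korean-English data.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocab size: {len(tokenizer)}")
```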
X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment
DongJae Shin | HyeonSeok Lim | Inho Won | ChangSu Choi | Minjun Kim | SeungWoo Song | HanGyeol Yoo | SangMin Kim | KyungTae Lim
Findings of the Association for Computational Linguistics: NAACL 2024
The impressive development of large language models (LLMs) is expanding into the realm of large multimodal models (LMMs), which incorporate multiple types of data beyond text. However, the nature of multimodal models leads to significant expenses in the creation of training data. Furthermore, constructing multilingual data for LMMs presents its own set of challenges due to language diversity and complexity. Therefore, in this study, we propose two cost-effective methods to solve this problem: (1) vocabulary expansion and pretraining of a multilingual LLM for specific languages, and (2) automatic and elaborate construction of multimodal datasets using GPT4-V. Based on these methods, we constructed a 91K English-Korean-Chinese multilingual, multimodal training dataset. Additionally, we developed a bilingual multimodal model that exhibits excellent performance in both Korean and English, surpassing existing approaches.
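The GPT4-V-based data construction can be sketched as below, assuming the OpenAI Python SDK and a publicly reachable image URL; the model name, prompt, and output format are illustrative assumptions, not the paper's actual annotation pipeline.

```python
# Sketch of generating one multilingual, multimodal training record with a
# vision-capable GPT-4 model. Prompt, model, and image URL are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

image_url = "https://example.com/sample.jpg"  # placeholder image
prompt = (
    "Describe this image in detail, then write one question-answer pair about "
    "it in English, Korean, and Chinese. Return JSON with keys 'description' "
    "and 'qa'."
)

response = client.chat.completions.create(
    model="gpt-4o",  # stand-in for the GPT4-V endpoint referenced in the paper
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)

# Each response becomes one record of the multilingual, multimodal dataset.
record = {"image": image_url, "annotation": response.choices[0].message.content}
print(json.dumps(record, ensure_ascii=False, indent=2))
```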