Zhongzhi Luan
2025
On Domain-Adaptive Post-Training for Multimodal Large Language Models
Daixuan Cheng | Shaohan Huang | Ziyu Zhu | Xintong Zhang | Xin Zhao | Zhongzhi Luan | Bo Dai | Zhenliang Zhang
Findings of the Association for Computational Linguistics: EMNLP 2025
Adapting general multimodal large language models (MLLMs) to specific domains, such as scientific and industrial fields, is crucial for their practical application. This paper systematically investigates domain adaptation of MLLMs via post-training, focusing on data synthesis, the training pipeline, and task evaluation. (1) **Data Synthesis**: Using only open-source models, we develop a generate-then-filter pipeline that curates diverse visual instruction tasks from domain-specific image-caption pairs. The resulting data outperform data synthesized by manual rules or strong closed-source models in enhancing domain-specific performance. (2) **Training Pipeline**: While general MLLMs typically adopt a two-stage training paradigm, we find that a single-stage approach is more effective for domain adaptation. (3) **Task Evaluation**: We conduct extensive experiments in high-impact domains such as biomedicine, food, and remote sensing, post-training a variety of MLLMs and then evaluating their performance on various domain-specific tasks. Finally, we fully open-source our models, code, and data to encourage future research in this area.
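The generate-then-filter idea described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the `mllm.generate` interface, the prompt wording, and the yes/no consistency filter are hypothetical stand-ins, not the paper's released pipeline.

```python
# Minimal sketch of a generate-then-filter data-synthesis loop.
# `mllm` is a HYPOTHETICAL open-source MLLM wrapper with a
# `generate(image=..., prompt=...) -> str` method; prompts and the
# filtering criterion are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ImageCaption:
    image_path: str
    caption: str

def generate_task(pair: ImageCaption, mllm) -> str:
    """Ask the MLLM to propose a visual instruction task grounded in the image."""
    prompt = (
        "Based on this image and its caption, write one diverse visual "
        f"instruction task together with its answer.\nCaption: {pair.caption}"
    )
    return mllm.generate(image=pair.image_path, prompt=prompt)

def keep(task: str, pair: ImageCaption, mllm) -> bool:
    """Consistency filter: keep the task only if its answer is supported
    by the image and caption (illustrative criterion)."""
    verdict = mllm.generate(
        image=pair.image_path,
        prompt=f"Caption: {pair.caption}\nTask: {task}\n"
               "Is the answer supported by the image and caption? Reply yes or no.",
    )
    return verdict.strip().lower().startswith("yes")

def synthesize(pairs, mllm):
    """Generate-then-filter: propose tasks, then retain only the consistent ones."""
    dataset = []
    for pair in pairs:
        task = generate_task(pair, mllm)
        if keep(task, pair, mllm):
            dataset.append({"image": pair.image_path, "task": task})
    return dataset
```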
2024
Scaling Sentence Embeddings with Large Language Models
Ting Jiang | Shaohan Huang | Zhongzhi Luan | Deqing Wang | Fuzhen Zhuang
Findings of the Association for Computational Linguistics: EMNLP 2024
Large Language Models (LLMs) have recently gained significant interest due to their impressive results on various natural language tasks. However, their application to sentence embeddings remains an area of active research. In this work, we introduce PromptEOL, a simple and efficient method designed to enhance LLM performance on sentence embeddings with a one-word limitation. We further integrate PromptEOL with in-context learning and alignment to leverage LLMs in two settings: without fine-tuning and with fine-tuning. Our extensive experiments show that PromptEOL enables LLMs to generate superior sentence embeddings without fine-tuning, outperforming contrastive learning methods. Additionally, with fine-tuning, a 2.7B-parameter model using PromptEOL surpasses the performance of a 4.8B-parameter model from previous methods. We also analyze how scaling model parameters, from 125 million to 66 billion, impacts sentence embedding performance.
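As a concrete illustration of the "one-word limitation", the sketch below extracts a PromptEOL-style embedding by prompting a decoder-only LLM to compress a sentence into one word and taking the final token's last-layer hidden state. The model checkpoint and exact prompt wording are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of PromptEOL-style sentence embedding extraction.
# Model name and prompt wording are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-2.7b"  # assumption: any decoder-only LLM should work
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def embed(sentence: str) -> torch.Tensor:
    # Prompt the model to summarize the sentence "in one word" (assumed wording).
    prompt = f'This sentence : "{sentence}" means in one word:"'
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    # Hidden state of the last prompt token in the last layer serves as the embedding.
    return outputs.hidden_states[-1][0, -1, :]

embedding = embed("A man is playing a guitar on stage.")
print(embedding.shape)
```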