You Li

2025

Multi-Stage LLM Fine-Tuning with a Continual Learning Setting
Changhao Guan | Chao Huang | Hongliang Li | You Li | Ning Cheng | Zihe Liu | Yufeng Chen | Jinan Xu | Jian Liu
Findings of the Association for Computational Linguistics: NAACL 2025

In recent years, large language models (LLMs) have made significant progress in knowledge-intensive applications. However, when adapting them to specific domains, we may encounter a multi-stage continual learning scenario, especially when domain knowledge evolves rapidly. This issue severely limits traditional fine-tuning approaches for LLMs. To overcome this limitation, we propose a new learning paradigm designed specifically for multi-stage continual learning. This paradigm includes a preference-based learning bias to identify potential knowledge conflicts, as well as a self-distillation-based data augmentation strategy to expand and enrich the training corpus, thereby improving the integration of knowledge-compatible information. In the experiments, we show that our proposed method achieves a significant improvement in accuracy after 7 stages of fine-tuning compared to previous methods, while also demonstrating excellent performance in preserving general knowledge. We have released our code and dataset at Multi-Stage-Learning.
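As a rough illustration of the multi-stage setting described above, the sketch below shows a generic outer loop in which one model is fine-tuned over a sequence of domain stages and each stage's corpus is enriched with self-distilled samples. The helper names (distill_fn, finetune_fn) are hypothetical placeholders, not the authors' released code.

```python
# Generic multi-stage continual fine-tuning loop (illustrative sketch only).
# `stages` is an ordered list of stage-specific training sets; `distill_fn`
# and `finetune_fn` are hypothetical callables standing in for the paper's
# self-distillation augmentation and per-stage fine-tuning, respectively.
def continual_finetune(model, stages, distill_fn, finetune_fn):
    for stage_idx, stage_data in enumerate(stages):
        # Expand the new stage's corpus with samples distilled from the
        # current model before fine-tuning on it.
        augmented = list(stage_data) + distill_fn(model, stage_data)
        model = finetune_fn(model, augmented)
        print(f"finished stage {stage_idx + 1}/{len(stages)}")
    return model
```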

Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models
You Li | Heyu Huang | Chi Chen | Kaiyu Huang | Chao Huang | Zonghao Guo | Zhiyuan Liu | Jinan Xu | Yuhua Li | Ruixuan Li | Maosong Sun
Findings of the Association for Computational Linguistics: ACL 2025

The recent advancement of Multimodal Large Language Models (MLLMs) has significantly improved their fine-grained perception of single images and general comprehension across multiple images. However, existing MLLMs still face challenges in achieving precise grounding in complex multi-image scenarios. To address this, we first explore a Chain-of-Thought (CoT) framework that integrates single-image grounding with multi-image comprehension. While partially effective, it remains unstable and struggles to capture abstract visual information due to its non-end-to-end nature. Therefore, we introduce Migician, the first multi-image grounding model capable of performing free-form and accurate grounding across multiple images. To support this, we present the MGrounding-630k dataset, which comprises data for several multi-image grounding tasks derived from existing datasets, along with newly generated free-form grounding instruction-following data. Furthermore, we propose MIG-Bench, a comprehensive benchmark specifically designed for evaluating multi-image grounding capabilities. Experimental results demonstrate that our model achieves significantly superior multi-image grounding capabilities, outperforming the best existing MLLMs by 24.94% and even surpassing much larger 70B models. Our code, model, dataset, and benchmark are fully open-sourced at https://migician-vg.github.io/.
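For readers unfamiliar with how grounding outputs are typically scored, the following sketch computes box-level IoU accuracy; it is a generic evaluation routine, not the official MIG-Bench scorer, and the (x1, y1, x2, y2) box format is an assumption.

```python
# Generic grounding evaluation: a predicted box counts as a hit when its
# IoU with the reference box exceeds a threshold (commonly 0.5).
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(predictions, references, threshold=0.5):
    """Fraction of predictions whose IoU with the reference reaches the threshold."""
    hits = sum(iou(p, r) >= threshold for p, r in zip(predictions, references))
    return hits / len(references)
```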

2024

MITF:基于图像映射文本特征的跨模态图文检索方法(MITF:Cross-modal Image-text Retrieval Method with Mapping Images to Text Features)
Xinyue Lou (娄馨月) | You Li (李铀) | Rui Qi (齐睿) | Yufeng Chen (陈钰枫) | Jinan Xu (徐金安)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

"Reducing the semantic gap between image and text information and promoting the alignment and fusion of cross-modal information have long been key to cross-modal image-text retrieval. However, because the image encoder and text encoder of existing dual-stream models are trained separately, aligning and fusing image and text features remains difficult. This paper therefore proposes a Mapping Images to Text Features (MITF) network that maps information from different modalities (image and text) into a single modality (text), further strengthening cross-modal semantic fusion and alignment and improving image-text retrieval performance. Specifically, with the parameters of the pretrained Chinese vision-language model Chinese-CLIP frozen, an MITF network is trained to map images into pseudo-language tokens, and a prompt auto-learning mechanism is introduced on top of this to improve the model's understanding of these pseudo-language tokens. In addition, a Faiss index is built at retrieval time to speed up retrieval. Experimental results on three open-source datasets show that, compared with the original Chinese-CLIP model, the proposed method improves Mean Recall by 3.7% on average and speeds up retrieval by about 4x. Visualizations of the image and text features further show that the proposed method improves the alignment between image features and text features."
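The retrieval step described above can be pictured with the minimal sketch below: text features from the frozen encoder are stored in a Faiss inner-product index, and query images are first mapped into the text feature space before nearest-neighbour search. The MITF mapping itself is represented only by a placeholder callable (mitf_map); this is an illustration, not the paper's implementation.

```python
# Illustrative retrieval with a Faiss index over frozen text features.
import numpy as np
import faiss

def build_text_index(text_features: np.ndarray) -> faiss.IndexFlatIP:
    """Index L2-normalised text features for inner-product (cosine) search."""
    feats = np.ascontiguousarray(text_features, dtype=np.float32)
    faiss.normalize_L2(feats)
    index = faiss.IndexFlatIP(feats.shape[1])
    index.add(feats)
    return index

def retrieve(index, image_features: np.ndarray, mitf_map, k: int = 10):
    """Map image features into the text space (placeholder `mitf_map`), then search."""
    queries = np.ascontiguousarray(mitf_map(image_features), dtype=np.float32)
    faiss.normalize_L2(queries)
    _, ids = index.search(queries, k)
    return ids
```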

Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis
Xinyu Feng | Yuming Lin | Lihua He | You Li | Liang Chang | Ya Zhou
Findings of the Association for Computational Linguistics: EMNLP 2024

Multimodal Sentiment Analysis (MSA) utilizes multimodal data to infer users’ sentiment. Previous methods either treat the contribution of each modality equally or statically use text as the dominant modality for interaction, neglecting situations in which any modality may become dominant. In this paper, we propose a Knowledge-Guided Dynamic Modality Attention Fusion Framework (KuDA) for multimodal sentiment analysis. KuDA uses sentiment knowledge to guide the model in dynamically selecting the dominant modality and adjusting the contributions of each modality. In addition, with the obtained multimodal representation, the model can further highlight the contribution of the dominant modality through the correlation evaluation loss. Extensive experiments on four MSA benchmark datasets indicate that KuDA achieves state-of-the-art performance and is able to adapt to different scenarios of dominant modality.
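A minimal sketch of the general idea of dynamic modality weighting, in which each modality receives a per-sample scalar weight so the dominant modality can change from example to example, is given below; it is a generic illustration in PyTorch, not the KuDA architecture itself.

```python
# Generic dynamic modality weighting (illustrative, not KuDA).
import torch
import torch.nn as nn

class DynamicModalityFusion(nn.Module):
    def __init__(self, dim: int, num_modalities: int = 3):
        super().__init__()
        # One scoring head per modality; scores are normalised with softmax.
        self.scorers = nn.ModuleList([nn.Linear(dim, 1) for _ in range(num_modalities)])

    def forward(self, modality_feats):
        # modality_feats: list of (batch, dim) tensors, e.g. [text, audio, vision].
        scores = torch.cat([s(f) for s, f in zip(self.scorers, modality_feats)], dim=-1)
        weights = torch.softmax(scores, dim=-1)               # (batch, num_modalities)
        stacked = torch.stack(modality_feats, dim=1)          # (batch, num_modalities, dim)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)   # (batch, dim)
```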

2023

Rethinking the Construction of Effective Metrics for Understanding the Mechanisms of Pretrained Language Models
You Li | Jinhui Yin | Yuming Lin
Findings of the Association for Computational Linguistics: EMNLP 2023

Pretrained language models are expected to effectively map input text to a set of vectors while preserving the inherent relationships within the text. Consequently, designing a white-box model that computes metrics reflecting the presence of specific internal relations in these vectors has become a common approach for post-hoc interpretability analysis of pretrained language models. However, achieving interpretability in white-box models and ensuring the rigor of metric computation becomes challenging when the source model lacks inherent interpretability. Therefore, in this paper, we discuss how to strike a balance in this trade-off and propose a novel line of work for constructing metrics to understand the mechanisms of pretrained language models. We specifically design a family of metrics along this line of investigation, and the model used to compute these metrics is referred to as the tree topological probe. We conducted measurements on BERT-large using these metrics. Based on the experimental results, we put forward a conjecture about the working mechanism of BERT-like pretrained language models, as well as a strategy for enhancing fine-tuning performance by leveraging the topological probe to improve specific submodules.
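The measurement setup implied by the abstract, namely fitting a probe on the per-layer vectors of BERT-large, can be sketched as follows; the tree topological probe itself is not reproduced here, and the snippet only shows how the layer representations that such a probe would consume can be extracted with the Hugging Face transformers API.

```python
# Extract per-layer hidden states from BERT-large for probing (sketch only).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModel.from_pretrained("bert-large-uncased", output_hidden_states=True)
model.eval()

def layer_representations(sentence: str):
    """Return one (seq_len, hidden_dim) tensor per layer (embeddings + 24 layers)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return [h.squeeze(0) for h in outputs.hidden_states]
```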