2025
LoRA-PAR: A Flexible Dual-System LoRA Partitioning Approach to Efficient LLM Fine-Tuning
Yining Huang | Bin Li | Keke Tang | Meilian Chen
Findings of the Association for Computational Linguistics: EMNLP 2025
Large-scale generative models like DeepSeek-R1 and OpenAI-O1 benefit substantially from chain-of-thought (CoT) reasoning, yet pushing their performance typically requires vast data, large model sizes, and full-parameter fine-tuning. While parameter-efficient fine-tuning (PEFT) helps reduce cost, most existing approaches primarily address domain adaptation or layer-wise allocation rather than explicitly tailoring data and parameters to different response demands. Inspired by “Thinking, Fast and Slow,” which characterizes two distinct modes of thought—System 1 (fast, intuitive, often automatic) and System 2 (slower, more deliberative and analytic)—we draw an analogy that different “subregions” of an LLM’s parameters might similarly specialize for tasks that demand quick, intuitive responses versus those requiring multi-step logical reasoning. We therefore propose LoRA-PAR, a dual-system LoRA framework that partitions both data and parameters by System 1 or System 2 demands, using fewer yet more focused parameters for each task. Specifically, we classify task data via multi-model role-playing and voting, partition parameters based on importance scoring, and then adopt a two-stage fine-tuning strategy: we first train on System 1 tasks with supervised fine-tuning (SFT) to enhance knowledge and intuition, and then refine System 2 tasks with reinforcement learning (RL) to reinforce deeper logical deliberation. Extensive experiments show that this two-stage strategy (SFT followed by RL) lowers active parameter usage while matching or surpassing SOTA PEFT baselines.
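To make the importance-based parameter partitioning concrete, here is a minimal Python sketch (not taken from the paper) that splits LoRA module names into a System 1 and a System 2 subset by a precomputed importance score; the module names, scores, and 50% split ratio are all hypothetical.

```python
def partition_lora_params(importance, top_ratio=0.5):
    """Split LoRA module names into System-1 / System-2 subsets by importance.

    Illustrative only: the importance scores, split ratio, and module names
    are assumptions, not the paper's exact procedure.
    """
    ranked = sorted(importance, key=importance.get, reverse=True)
    k = max(1, int(top_ratio * len(ranked)))
    system1 = set(ranked[:k])        # highest-scoring modules
    system2 = set(ranked) - system1  # remaining modules
    return system1, system2

# Hypothetical importance scores keyed by LoRA module name.
scores = {"layer0.lora_A": 0.9, "layer0.lora_B": 0.2,
          "layer1.lora_A": 0.5, "layer1.lora_B": 0.7}
system1, system2 = partition_lora_params(scores)
print(system1, system2)
```

Under the two-stage recipe described in the abstract, one such subset would presumably be the focus of the SFT stage and the other of the subsequent RL stage.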
2024
Enhancing Emotion-Cause Pair Extraction in Conversations via Center Event Detection and Reasoning
Botao Wang | Keke Tang | Peican Zhu
Findings of the Association for Computational Linguistics: EMNLP 2024
Emotion-Cause Pair Extraction in Conversations (ECPEC) aims to identify emotion utterances and their corresponding cause utterances in unannotated conversations, a task that has garnered increasing attention recently. Previous methods often apply Emotion-Cause Pair Extraction (ECPE) models, treating the entire conversation as a whole for contextual interaction. However, statistical analysis shows that the number of emotion-cause pairs in ECPEC conversation data far exceeds that in ECPE datasets, leading to interference among multiple events within a conversation and causing noise to propagate between different events. To address this issue, we propose a novel CEnter eveNT-guided framEwoRk (CENTER). The model introduces a Center Event Detection task to construct a center event-aware graph that captures the unique representations of different event regions. Additionally, mimicking human reasoning processes, we build a center event reasoning graph and use a graph neural network to facilitate the flow of information between utterance pairs, thereby uncovering the relationships between emotions and their causes. Experimental results demonstrate that our approach achieves state-of-the-art performance across three benchmark datasets.
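As a generic illustration of message passing over a graph of utterance pairs, one propagation round might look like the Python sketch below; this is not the CENTER architecture, and the adjacency, feature dimensions, and mean aggregation are assumptions.

```python
import torch
import torch.nn as nn

class SimpleGraphLayer(nn.Module):
    """One round of mean-aggregation message passing over utterance-pair nodes.

    A generic sketch of information flow in a graph neural network, not the
    CENTER model; graph construction and dimensions are hypothetical.
    """
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, node_feats, adj):
        # adj: (N, N) 0/1 adjacency matrix; average incoming messages per node.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        messages = (adj @ node_feats) / deg
        return torch.relu(self.proj(torch.cat([node_feats, messages], dim=-1)))

# Hypothetical: 4 utterance-pair nodes with 8-dimensional features.
x = torch.randn(4, 8)
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float)
print(SimpleGraphLayer(8)(x, adj).shape)  # torch.Size([4, 8])
```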
Towards Robust Temporal Activity Localization Learning with Noisy Labels
Daizong Liu | Xiaoye Qu | Xiang Fang | Jianfeng Dong | Pan Zhou | Guoshun Nan | Keke Tang | Wanlong Fang | Yu Cheng
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
This paper addresses the task of temporal activity localization (TAL). Although recent works have made significant progress in TAL research, almost all of them implicitly assume that the dense frame-level correspondences in each video-query pair are correctly annotated. In reality, however, this assumption is extremely expensive and often impossible to satisfy due to subjective labeling. To alleviate this issue, we explore a new TAL setting termed Noisy Temporal Activity Localization (NTAL), where a TAL model should be robust to mixed training data with noisy moment boundaries. Inspired by the memorization effect of neural networks, we propose a novel method called Co-Teaching Regularizer (CTR) for NTAL. Specifically, we first learn a Gaussian Mixture Model to divide the mixed training data into preliminary clean and noisy subsets. Subsequently, we refine the labels of the two subsets with an adaptive prediction function so that their true positive and false positive samples can be identified. To prevent a single model from reinforcing its own mistakes learned from the mixed data, we adopt a co-teaching paradigm in which two models sharing the same framework teach each other for robust learning. A curriculum strategy is further introduced to gradually learn moment confidence from easy to hard samples. Experiments on three datasets demonstrate that our CTR is significantly more robust to noisy training data than existing methods.
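The clean/noisy split via a Gaussian Mixture Model is a common pattern in noisy-label learning; a minimal Python sketch of that step is shown below. The per-sample losses, the two-component fit over losses, and the 0.5 probability threshold are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_noisy(per_sample_loss, clean_prob_threshold=0.5):
    """Fit a 2-component GMM over per-sample losses and treat samples with a
    high posterior under the low-loss component as 'clean'.

    Illustrative of the general GMM-split idea only; the feature choice and
    threshold are assumptions, not the paper's exact procedure.
    """
    losses = np.asarray(per_sample_loss, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    low_loss_comp = int(np.argmin(gmm.means_.ravel()))
    clean_prob = gmm.predict_proba(losses)[:, low_loss_comp]
    clean_idx = np.flatnonzero(clean_prob >= clean_prob_threshold)
    noisy_idx = np.flatnonzero(clean_prob < clean_prob_threshold)
    return clean_idx, noisy_idx

# Hypothetical per-sample training losses.
clean, noisy = split_clean_noisy([0.1, 0.2, 1.5, 0.15, 2.0])
print(clean, noisy)
```

In a co-teaching setup, each of the two networks would typically perform such a split on its own losses and pass its selected clean samples to its peer.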
2023
Annotations Are Not All You Need: A Cross-modal Knowledge Transfer Network for Unsupervised Temporal Sentence Grounding
Xiang Fang | Daizong Liu | Wanlong Fang | Pan Zhou | Yu Cheng | Keke Tang | Kai Zou
Findings of the Association for Computational Linguistics: EMNLP 2023
This paper addresses the task of temporal sentence grounding (TSG). Although many respectable works have made decent achievements on this important topic, they rely heavily on massive, expensive video-query paired annotations, which require a tremendous amount of human effort to collect in real-world applications. To this end, we target a more practical but challenging TSG setting: unsupervised temporal sentence grounding, where both paired video-query and segment boundary annotations are unavailable during network training. Considering that some other cross-modal tasks provide many easily available yet cheap labels, we collect and transfer their simple cross-modal alignment knowledge into our complex scenario: 1) We first explore the entity-aware, object-guided appearance knowledge from the paired Image-Noun task and adapt it to each individual video frame; 2) We then extract the event-aware action representation from the paired Video-Verb task and further adapt it to more practical but complicated real-world cases through a newly proposed copy-paste approach; 3) By modulating and transferring both appearance and action knowledge into our challenging unsupervised task, our model can directly utilize this general knowledge to correlate videos and queries and accurately retrieve the relevant segment without training. Extensive experiments on two challenging datasets (ActivityNet Captions and Charades-STA) demonstrate the effectiveness of our method, outperforming existing unsupervised methods and even competitively beating supervised works.