2025
Adversarial Alignment with Anchor Dragging Drift (A3D2): Multimodal Domain Adaptation with Partially Shifted Modalities
Jun Sun | Xinxin Zhang | Simin Hong | Jian Zhu | Lingfang Zeng
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Multimodal learning has celebrated remarkable success across diverse areas, yet faces the challenge of prohibitively expensive data collection and annotation when adapting models to new environments. In this context, domain adaptation has gained growing popularity as a technique for knowledge transfer, which, however, remains underexplored in multimodal settings compared with unimodal ones. This paper investigates multimodal domain adaptation, focusing on a practical partially shifting scenario where some modalities (referred to as anchors) remain domain-stable, while others (referred to as drifts) undergo a domain shift. We propose a bi-alignment scheme to simultaneously perform drift-drift and anchor-drift matching. The former is achieved through adversarial learning, aligning the representations of the drifts across source and target domains; the latter corresponds to an “anchor dragging drift” strategy, which matches the distributions of the drifts and anchors within the target domain using the optimal transport (OT) method. The overall design principle features Adversarial Alignment with Anchor Dragging Drift, abbreviated as A3D2, for multimodal domain adaptation with partially shifted modalities. Comprehensive empirical results verify the effectiveness of the proposed approach, and demonstrate that A3D2 achieves superior performance compared with state-of-the-art approaches. The code is available at: https://github.com/sunjunaimer/A3D2.git.
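The anchor-dragging-drift step amounts to matching two empirical feature distributions within the target domain via optimal transport. As a rough illustration only (the encoders, batch construction, and the paper's exact OT formulation are not reproduced here), an entropic Sinkhorn cost between drift and anchor features could look like this PyTorch sketch:

```python
import torch

def sinkhorn_ot_loss(drift_feats, anchor_feats, eps=0.1, n_iters=50):
    """Entropic OT (Sinkhorn) cost between drift- and anchor-modality features.

    drift_feats: (n, d) target-domain features of a shifted modality.
    anchor_feats: (m, d) target-domain features of a domain-stable modality.
    Uniform marginals are assumed; returns the transport cost <P, C>.
    """
    n, m = drift_feats.size(0), anchor_feats.size(0)
    cost = torch.cdist(drift_feats, anchor_feats, p=2) ** 2     # pairwise squared distances
    mu = torch.full((n,), 1.0 / n, device=drift_feats.device)   # uniform marginal (drifts)
    nu = torch.full((m,), 1.0 / m, device=drift_feats.device)   # uniform marginal (anchors)
    K = torch.exp(-cost / eps)                                   # Gibbs kernel
    u, v = torch.ones_like(mu), torch.ones_like(nu)
    for _ in range(n_iters):                                     # Sinkhorn fixed-point iterations
        u = mu / (K @ v + 1e-9)
        v = nu / (K.t() @ u + 1e-9)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)                   # transport plan P = diag(u) K diag(v)
    return (plan * cost).sum()

# Illustrative usage: drag the drift modality toward the anchor in the target domain.
drift = torch.randn(32, 128, requires_grad=True)    # e.g., visual features (drift)
anchor = torch.randn(32, 128)                        # e.g., textual features (anchor)
loss = sinkhorn_ot_loss(drift, anchor)
loss.backward()
```

In training, such an OT term would be added to the adversarial drift-drift alignment loss; the regularization strength `eps` and the iteration count are illustrative hyperparameters, not values from the paper.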
Zero-Shot Defense Against Toxic Images via Inherent Multimodal Alignment in LVLMs
Wei Zhao | Zhe Li | Yige Li | Jun Sun
Findings of the Association for Computational Linguistics: EMNLP 2025
Large Vision-Language Models (LVLMs) have made significant strides in multimodal comprehension, thanks to extensive pre-training and fine-tuning on large-scale visual datasets. However, despite their robust textual safety mechanisms, they remain vulnerable to harmful visual inputs. Existing safeguards—typically relying on pre-filtering or fine-tuning—incur high costs and diminish overall utility. To address this critical vulnerability, we introduce SafeCLIP, a lightweight method that leverages LVLMs’ inherent multimodal alignment for zero-shot toxic image detection. By projecting CLIP’s discarded CLS token into its text space and matching it with toxic descriptors, SafeCLIP detects harmful content without any architectural changes—adding minimal latency and enabling dynamic safety corrections during inference and fine-tuning. Experiments show that SafeCLIP achieves a 66.9% defense success rate with only 3.2% false positive rate and 7.2% overhead. In contrast, state-of-the-art methods achieve 52.9% success but have a 10.7% false positive rate and 210% overhead. Our work demonstrates that leveraging inherent multimodal alignment can yield efficient, low-cost LVLM safety. Code is available at
anonymous.4open.science/r/safeclip-2C01.
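To make the mechanism concrete, here is a minimal zero-shot sketch of descriptor matching with an off-the-shelf CLIP model from `transformers`. The checkpoint name, descriptor prompts, and decision margin are placeholders, and the sketch uses CLIP's standard pooled image embedding rather than the exact CLS-token projection described in the paper:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder descriptor sets; SafeCLIP's actual descriptors are in the paper/repo.
TOXIC_PROMPTS = ["a violent or gory scene", "explicit sexual content", "an image depicting self-harm"]
SAFE_PROMPTS = ["an ordinary everyday photo", "a harmless object or landscape"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def flag_toxic(image: Image.Image, margin: float = 0.0) -> bool:
    """Return True if the image matches toxic descriptors more strongly than safe ones."""
    inputs = processor(text=TOXIC_PROMPTS + SAFE_PROMPTS, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image.squeeze(0)   # image-text similarity per prompt
    toxic_score = sims[: len(TOXIC_PROMPTS)].max()
    safe_score = sims[len(TOXIC_PROMPTS):].max()
    return bool(toxic_score - safe_score > margin)
```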
Do Influence Functions Work on Large Language Models?
Zhe Li | Wei Zhao | Yige Li | Jun Sun
Findings of the Association for Computational Linguistics: EMNLP 2025
Influence functions are important for quantifying the impact of individual training data points on a model’s predictions. Although extensive research has been conducted on influence functions in traditional machine learning models, their application to large language models (LLMs) has been limited. In this work, we conduct a systematic study to address a key question: do influence functions work on LLMs? Specifically, we evaluate influence functions across multiple tasks and find that they consistently perform poorly in most settings. Our further investigation reveals that their poor performance can be attributed to: (1) inevitable approximation errors when estimating the iHVP component due to the scale of LLMs, (2) uncertain convergence during fine-tuning, and, more fundamentally, (3) the definition itself, as changes in model parameters do not necessarily correlate with changes in LLM behavior. Thus, our study suggests the need for alternative approaches for identifying influential samples.
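For context, the quantity being approximated is the classical influence function of Koh and Liang, in which the iHVP is the inverse-Hessian-vector product that becomes intractable to compute exactly at LLM scale:

```latex
\mathcal{I}(z, z_{\mathrm{test}})
  = -\,\nabla_\theta L(z_{\mathrm{test}}, \hat\theta)^{\top}
     H_{\hat\theta}^{-1}\,
     \nabla_\theta L(z, \hat\theta),
\qquad
H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^{2} L(z_i, \hat\theta).
```

Point (1) in the abstract refers to the error of stochastic estimators of this iHVP (e.g., LiSSA-style iterative or Kronecker-factored approximations) when the parameter vector has billions of dimensions.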
Third-Person Appraisal Agent: Simulating Human Emotional Reasoning in Text with Large Language Models
Simin Hong | Jun Sun | Hongyang Chen
Findings of the Association for Computational Linguistics: EMNLP 2025
Emotional reasoning is essential for improving human-AI interactions, particularly in mental health support and empathetic systems. However, current approaches, which primarily map sensory inputs to fixed emotion labels, fail to understand the intricate relationships between motivations, thoughts, and emotions, thereby limiting their ability to generalize across flexible emotional reasoning tasks. To address this, we propose a novel third-person appraisal agent that simulates human-like emotional reasoning through three phases: Primary Appraisal, Secondary Appraisal, and Reappraisal. In the Primary Appraisal phase, a third-person generator powered by a large language model (LLM) infers emotions based on cognitive appraisal theory. The Secondary Appraisal phase uses an evaluator LLM to provide feedback, guiding the generator in refining its predictions. The generator then uses counterfactual reasoning to adjust its process and explore alternative emotional responses. The Reappraisal phase utilizes reinforced fine-tuning (ReFT) by employing a reflective actor-critic framework to further enhance the model’s performance and generalization. This process uses reward signals and learns from appraisal trajectories without human annotations. Our approach outperforms baseline LLMs in various emotional reasoning tasks, demonstrating superior generalization and interpretability. To the best of our knowledge, this is the first cognition-based architecture designed to enhance emotional reasoning in LLMs, advancing AI towards human-like emotional understanding.
2024
Amanda: Adaptively Modality-Balanced Domain Adaptation for Multimodal Emotion Recognition
Xinxin Zhang | Jun Sun | Simin Hong | Taihao Li
Findings of the Association for Computational Linguistics: ACL 2024
This paper investigates unsupervised multimodal domain adaptation for multimodal emotion recognition, which is a solution for data scarcity yet remains understudied. Due to the varying distribution discrepancies of different modalities between source and target domains, the primary challenge lies in how to balance the domain alignment across modalities to guarantee they are all well aligned. To achieve this, we first develop our model based on the information bottleneck theory to learn optimal representations for each modality independently. Then, we align the domains via matching the label distributions and the representations. In order to balance the representation alignment, we propose to minimize a surrogate of the alignment losses, which is equivalent to adaptively adjusting the weights of the modalities throughout training, thus achieving balanced domain alignment across modalities. Overall, the proposed approach features Adaptively modality-balanced domain adaptation, dubbed Amanda, for multimodal emotion recognition. Extensive empirical results on commonly used benchmark datasets demonstrate that Amanda significantly outperforms competing approaches. The code is available at https://github.com/sunjunaimer/Amanda.
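The paper's specific surrogate is not reproduced here; as one hedged illustration of how a single surrogate objective can induce adaptive modality weights, a temperature-scaled log-sum-exp over per-modality alignment losses has exactly that property, since its gradient is a softmax-weighted combination of the individual loss gradients:

```python
import torch

def balanced_alignment_loss(per_modality_losses, tau: float = 1.0):
    """Log-sum-exp surrogate over per-modality alignment losses.

    Its gradient equals a softmax(l / tau)-weighted sum of the individual
    gradients, so modalities with a larger residual discrepancy automatically
    receive a larger weight: one way to realize adaptively balanced alignment.
    """
    losses = torch.stack(per_modality_losses)        # (num_modalities,)
    return tau * torch.logsumexp(losses / tau, dim=0)

# Example with three per-modality domain-alignment losses (e.g., MMD or adversarial).
l_text, l_audio, l_video = [torch.rand((), requires_grad=True) for _ in range(3)]
loss = balanced_alignment_loss([l_text, l_audio, l_video], tau=0.5)
loss.backward()
```

The temperature `tau` controls how sharply the weighting concentrates on the worst-aligned modality; it is an illustrative knob, not a parameter from the paper.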
Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing
Wei Zhao | Zhe Li | Yige Li | Ye Zhang | Jun Sun
Findings of the Association for Computational Linguistics: EMNLP 2024
Large language models (LLMs) are increasingly being adopted in a wide range of real-world applications. Despite their impressive performance, recent studies have shown that LLMs are vulnerable to deliberately crafted adversarial prompts even when aligned via Reinforcement Learning from Human Feedback or supervised fine-tuning. While existing defense methods focus on either detecting harmful prompts or reducing the likelihood of harmful responses through various means, defending LLMs against jailbreak attacks based on the inner mechanisms of LLMs remains largely unexplored. In this work, we investigate how LLMs respond to harmful prompts and propose a novel defense method termed Layer-specific Editing (LED) to enhance the resilience of LLMs against jailbreak attacks. Through LED, we reveal that several critical safety layers exist among the early layers of LLMs. We then show that realigning these safety layers (and some selected additional layers) with the decoded safe response from identified toxic layers can significantly improve the alignment of LLMs against jailbreak attacks. Extensive experiments across various LLMs (e.g., Llama2, Mistral) show the effectiveness of LED, which effectively defends against jailbreak attacks while maintaining performance on benign prompts. Our code is available at https://github.com/ledllm/ledllm.
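LED's exact layer-identification and editing procedure is given in the paper; as a rough, assumption-laden illustration of how one might inspect what individual layers "decode", the following logit-lens-style probe applies the final norm and LM head to each intermediate hidden state (the checkpoint name is a placeholder, and `model.model.norm` assumes a Llama-style architecture):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-chat-hf"   # illustrative; any Llama-style chat model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

prompt = "Explain how to bypass a content filter."   # example harmful-leaning prompt
inputs = tok(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Project each layer's last-position hidden state through the LM head
# to see at which depth refusal-related tokens start to dominate.
for i, h in enumerate(out.hidden_states):
    pooled = model.model.norm(h[:, -1, :])   # final RMSNorm (Llama-style assumption)
    top_id = model.lm_head(pooled).argmax(dim=-1)
    print(f"layer {i:2d}: {tok.decode(top_id)!r}")
```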
DetectiveNN: Imitating Human Emotional Reasoning with a Recall-Detect-Predict Framework for Emotion Recognition in Conversations
Simin Hong | Jun Sun | Taihao Li
Findings of the Association for Computational Linguistics: EMNLP 2024
Emotion Recognition in conversations (ERC) involves an internal cognitive process that interprets emotional cues by using a collection of past emotional experiences. However, many existing methods struggle to decipher emotional cues in dialogues since they are insufficient in understanding the rich historical emotional context. In this work, we introduce an innovative Detective Network (DetectiveNN), a novel model that is grounded in the cognitive theory of emotion and utilizes a “recall-detect-predict” framework to imitate human emotional reasoning. This process begins by ‘recalling’ past interactions of a specific speaker to collect emotional cues. It then ‘detects’ relevant emotional patterns by interpreting these cues in the context of the ongoing conversation. Finally, it ‘predicts’ the speaker’s current emotional state. Tested on three benchmark datasets, our approach significantly outperforms existing methods. This highlights the advantages of incorporating cognitive factors into deep learning for ERC, enhancing task efficacy and prediction accuracy.
2023
Layer-wise Fusion with Modality Independence Modeling for Multi-modal Emotion Recognition
Jun Sun | Shoukang Han | Yu-Ping Ruan | Xiaoning Zhang | Shu-Kai Zheng | Yulong Liu | Yuxin Huang | Taihao Li
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Multi-modal emotion recognition has gained increasing attention in recent years due to its widespread applications and the advances in multi-modal learning approaches. However, previous studies primarily focus on developing models that exploit the unification of multiple modalities. In this paper, we propose that maintaining modality independence is beneficial for the model performance. According to this principle, we construct a dataset, and devise a multi-modal transformer model. The new dataset, CHinese Emotion Recognition dataset with Modality-wise Annotations, abbreviated as CHERMA, provides uni-modal labels for each individual modality, and multi-modal labels for all modalities jointly observed. The model consists of uni-modal transformer modules that learn representations for each modality, and a multi-modal transformer module that fuses all modalities. All the modules are supervised by their corresponding labels separately, and the forward information flow is uni-directional, from the uni-modal modules to the multi-modal module. The supervision strategy and the model architecture guarantee that each individual modality learns its representation independently, while the multi-modal module aggregates all information. Extensive empirical results demonstrate that our proposed scheme outperforms state-of-the-art alternatives, corroborating the importance of modality independence in multi-modal emotion recognition. The dataset and code are available at https://github.com/sunjunaimer/LFMIM.
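A minimal PyTorch sketch of this supervision pattern is given below; the dimensions, depths, and the use of `detach()` to keep the information flow strictly one-way are assumptions for illustration rather than details taken from the released code:

```python
import torch
import torch.nn as nn

class UniModalEncoder(nn.Module):
    """Per-modality transformer encoder with its own (uni-modal) classification head."""
    def __init__(self, dim, num_classes, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                      # x: (batch, seq, dim)
        h = self.encoder(x)
        return h, self.head(h.mean(dim=1))     # features + uni-modal logits

class LayerwiseFusionModel(nn.Module):
    """Uni-modal modules feed a fusion transformer; each head has its own label."""
    def __init__(self, dim=256, num_classes=7, num_modalities=3):
        super().__init__()
        self.unimodal = nn.ModuleList(
            UniModalEncoder(dim, num_classes) for _ in range(num_modalities))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.fusion_head = nn.Linear(dim, num_classes)

    def forward(self, modalities):             # list of (batch, seq, dim) tensors
        feats, uni_logits = [], []
        for enc, x in zip(self.unimodal, modalities):
            h, logits = enc(x)
            feats.append(h.detach())           # block gradients: one-way flow (assumption)
            uni_logits.append(logits)
        fused = self.fusion(torch.cat(feats, dim=1))
        return uni_logits, self.fusion_head(fused.mean(dim=1))
```

In training, each uni-modal head would be supervised by its modality-wise CHERMA label and the fusion head by the joint multi-modal label.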
2016
Automatic Identifying Entity Type in Linked Data
Qingliang Miao | Ruiyu Fang | Shuangyong Song | Zhongguang Zheng | Lu Fang | Yao Meng | Jun Sun
Proceedings of the 30th Pacific Asia Conference on Language, Information and Computation: Posters
2015
Feature Reduction Using Ensemble Approach
Yingju Xia | Cuiqin Hou | Zhuoran Xu | Jun Sun
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation: Posters
2010
Discriminative Induction of Sub-Tree Alignment using Limited Labeled Data
Jun Sun | Min Zhang | Chew Lim Tan
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)
Exploring Syntactic Structural Features for Sub-Tree Alignment Using Bilingual Tree Kernels
Jun Sun | Min Zhang | Chew Lim Tan
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
2009
A non-contiguous Tree Sequence Alignment-based Model for Statistical Machine Translation
Jun Sun | Min Zhang | Chew Lim Tan
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP
2007
I2R Chinese-English translation system for IWSLT 2007
Boxing Chen | Jun Sun | Hongfei Jiang | Min Zhang | Ai Ti Aw
Proceedings of the Fourth International Workshop on Spoken Language Translation
In this paper, we describe the system and approach used by the Institute for Infocomm Research (I2R) for the IWSLT 2007 spoken language evaluation campaign. A multi-pass approach is exploited to generate and select the best translation. First, we use two decoders, namely the open-source Moses and an in-house syntax-based decoder, to generate N-best lists. Next, we spawn new translation entries through a word-based n-gram language model estimated on the former N-best entries. Finally, we join the N-best lists from the previous two passes, and select the best translation by rescoring them with additional feature functions. In particular, this paper reports our effort on new translation entry generation and system combination. The performance on the development and test sets is reported. The system was ranked first with respect to the BLEU measure in the Chinese-to-English open data track.
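The final rescoring pass follows the standard log-linear framework for statistical MT; the particular feature functions and weights used by the I2R system are detailed in the paper:

```latex
\hat{e} \;=\; \operatorname*{arg\,max}_{e \,\in\, \mathcal{N}(f)}
              \sum_{m=1}^{M} \lambda_m \, h_m(e, f),
```

where \(\mathcal{N}(f)\) is the joined N-best list for source sentence \(f\), the \(h_m\) are feature functions (e.g., translation, language model, and length features), and the weights \(\lambda_m\) are tuned on the development set.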
A tree-to-tree alignment-based model for statistical machine translation
Min Zhang | Hongfei Jiang | Ai Ti Aw | Jun Sun | Sheng Li | Chew Lim Tan
Proceedings of Machine Translation Summit XI: Papers