Yanshu Li
2025
TACO: Enhancing Multimodal In-context Learning via Task Mapping-Guided Sequence Configuration
Yanshu Li | Jianjiang Yang | Tian Yun | Pinyuan Feng | Jinfa Huang | Ruixiang Tang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Multimodal in-context learning (ICL) has emerged as a key mechanism for harnessing the capabilities of large vision–language models (LVLMs). However, its effectiveness remains highly sensitive to the quality of input ICL sequences, particularly for tasks involving complex reasoning or open-ended generation. A major limitation is the poor understanding of how LVLMs actually exploit these sequences during inference. To bridge this gap, we systematically interpret multimodal ICL through the lens of task mapping, which reveals how local and global relationships within and among demonstrations guide model reasoning. Building on this insight, we present TACO, a lightweight transformer-based model equipped with task-aware attention that dynamically configures ICL sequences. By injecting task-mapping signals into the autoregressive decoding process, TACO creates a bidirectional synergy between sequence construction and task reasoning. Experiments on five LVLMs and nine datasets demonstrate that TACO consistently surpasses baselines across diverse ICL tasks. These results position task mapping as a novel and valuable perspective for interpreting and improving multimodal ICL.
M-ABSA: A Multilingual Dataset for Aspect-Based Sentiment Analysis
ChengYan Wu | Bolei Ma | Yihong Liu | Zheyu Zhang | Ningyuan Deng | Yanshu Li | Baolan Chen | Yi Zhang | Yun Xue | Barbara Plank
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Aspect-based sentiment analysis (ABSA) is a crucial task in information extraction and sentiment analysis, aiming to identify aspects and their associated sentiment elements in text. However, existing ABSA datasets are predominantly English-centric, limiting the scope for multilingual evaluation and research. To bridge this gap, we present M-ABSA, a comprehensive dataset spanning 7 domains and 21 languages, making it the most extensive multilingual parallel dataset for ABSA to date. Our primary focus is on triplet extraction, which involves identifying aspect terms, aspect categories, and sentiment polarities. The dataset is constructed through an automatic translation process with human review to ensure quality. We perform extensive experiments with various baselines to assess performance and compatibility on M-ABSA. Our empirical findings highlight that the dataset enables diverse evaluation tasks, such as multilingual and multi-domain transfer learning and large language model evaluation, underscoring its inclusivity and its potential to drive advancements in multilingual ABSA research.