Learning to Describe Implicit Changes: Noise-robust Pre-training for Image Difference Captioning
Zixin Guo | Jiayang Sun | Tzu-Jui Julius Wang | Abduljalil Radman | Selen Pehlivan | Min Cao | Jorma Laaksonen
Findings of the Association for Computational Linguistics: EMNLP 2025
Image Difference Captioning (IDC) methods have advanced in highlighting subtle differences between similar images, but their performance is often constrained by limited training data. Using Large Multimodal Models (LMMs) to describe changes in image pairs mitigates this data scarcity but introduces noise: the generated change descriptions are often coarse summaries that obscure fine details and make the noise hard to detect. In this work, we improve IDC with a noise-robust approach at both the data and model levels. During data curation, we use LMMs with structured prompts to generate fine-grained change descriptions. We then propose a Noise-Aware Modeling and Captioning (NAMC) model with three modules: Noise Identification and Masking (NIM) to reduce noisy correspondences, Masked Image Reconstruction (MIR) to correct over-masking errors, and Fine-grained Description Generation (FDG) to produce coherent change descriptions. Experiments on four IDC benchmarks show that NAMC, pre-trained on our large-scale data, outperforms streamlined architectures and achieves performance competitive with LLM-finetuned methods while offering better inference efficiency.