Innovative Image Fraud Detection with Cross-Sample Anomaly Analysis: The Power of LLMs

QiWen Wang, Junqi Yang, Zhenghao Lin, Zhenzhe Ying, Weiqiang Wang, Chen Lin


Abstract
The financial industry faces a substantial workload in verifying document images. Existing methods based on visual features struggle to identify fraudulent document images when the tampered region leaves few visual clues. This paper proposes CSIAD (Cross-Sample Image Anomaly Detection), which leverages LLMs to identify logical inconsistencies across similar images. This novel framework accurately detects forged images with slight tampering traces and explains its anomaly detection results. Furthermore, we introduce CrossCred, a new benchmark of real-world fraudulent images with fine-grained manual annotations. Experiments demonstrate that CSIAD outperforms state-of-the-art image fraud detection methods by 79.6% (F1) on CrossCred and deployed industrial solutions by 21.7% (F1) on business data. The benchmark is available at https://github.com/XMUDM/CSIAD.
Anthology ID:
2025.acl-long.687
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
14058–14078
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.687/
Cite (ACL):
QiWen Wang, Junqi Yang, Zhenghao Lin, Zhenzhe Ying, Weiqiang Wang, and Chen Lin. 2025. Innovative Image Fraud Detection with Cross-Sample Anomaly Analysis: The Power of LLMs. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14058–14078, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Innovative Image Fraud Detection with Cross-Sample Anomaly Analysis: The Power of LLMs (Wang et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.687.pdf