Fraud-R1: A Multi-Round Benchmark for Assessing the Robustness of LLM Against Augmented Fraud and Phishing Inducements
Shu Yang | Shenzhe Zhu | Zeyu Wu | Keyu Wang | Junchi Yao | Junchao Wu | Lijie Hu | Mengdi Li | Derek F. Wong | Di Wang
Findings of the Association for Computational Linguistics: ACL 2025
With the increasing integration of large language models (LLMs) into real-world applications such as finance, e-commerce, and recommendation systems, their susceptibility to misinformation and adversarial manipulation poses significant risks. Existing fraud detection benchmarks primarily focus on single-turn classification tasks, failing to capture the dynamic nature of real-world fraud attempts. To address this gap, we introduce Fraud-R1, a challenging bilingual benchmark designed to assess LLMs’ ability to resist fraud and phishing attacks across five key fraud categories, each covering multiple subclasses: Fraudulent Services, Impersonation, Phishing Scams, Fake Job Postings, and Online Relationships. Our dataset comprises manually curated fraud cases drawn from social media, news, phishing scam records, and prior fraud datasets.