OmniEval: An Omnidirectional and Automatic RAG Evaluation Benchmark in Financial Domain

Shuting Wang, Jiejun Tan, Zhicheng Dou, Ji-Rong Wen


Abstract
Retrieval-augmented generation (RAG) has emerged as a key application of large language models (LLMs), especially in vertical domains where LLMs may lack domain-specific knowledge. This paper introduces OmniEval, an omnidirectional and automatic RAG benchmark for the financial domain, featuring a multi-dimensional evaluation framework: First, we categorize RAG scenarios into five task classes and 16 financial topics, yielding a matrix-based structured assessment for RAG evaluation; Next, we employ a multi-dimensional evaluation data generation method that combines GPT-4-based automatic generation with human annotation, achieving an 87.47% acceptance ratio in human evaluations of generated instances; Further, we use a multi-stage evaluation pipeline to assess both retrieval and generation performance, resulting in a comprehensive evaluation of the RAG pipeline. Finally, we combine rule-based and LLM-based metrics into a multi-dimensional evaluation system, enhancing the reliability of assessments through fine-tuned LLM-based evaluators. Our omnidirectional evaluation experiments highlight the performance variations of RAG systems across diverse topics and tasks and reveal significant opportunities for RAG models to improve their capabilities in vertical domains. We open-source the code of our benchmark at https://github.com/RUC-NLPIR/OmniEval.
Anthology ID:
2025.emnlp-main.292
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5737–5762
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.292/
Cite (ACL):
Shuting Wang, Jiejun Tan, Zhicheng Dou, and Ji-Rong Wen. 2025. OmniEval: An Omnidirectional and Automatic RAG Evaluation Benchmark in Financial Domain. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 5737–5762, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
OmniEval: An Omnidirectional and Automatic RAG Evaluation Benchmark in Financial Domain (Wang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.292.pdf
Checklist:
2025.emnlp-main.292.checklist.pdf