Sridevi Wagle
2025
WeQA: A Benchmark for Retrieval Augmented Generation in Wind Energy Domain
Rounak Meyur | Hung Phan | Sridevi Wagle | Jan Strube | Mahantesh Halappanavar | Sameera Horawalavithana | Anurag Acharya | Sai Munikoti
Proceedings of the Fourth Workshop on NLP for Positive Impact (NLP4PI)
Wind energy project assessments present significant challenges for decision-makers, who must navigate and synthesize hundreds of pages of environmental and scientific documentation. These documents often span different regions and project scales, covering multiple domains of expertise. This process traditionally demands immense time and specialized knowledge from decision-makers. The advent of Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) approaches offers a transformative solution, enabling rapid, accurate cross-document information retrieval and synthesis. As the landscape of Natural Language Processing (NLP) and text generation continues to evolve, benchmarking becomes essential to evaluate and compare the performance of different RAG-based LLMs. In this paper, we present a comprehensive framework to generate a domain-relevant RAG benchmark. Our framework is based on automatic question-answer generation with Human (domain expert)-AI (LLM) teaming. As a case study, we demonstrate the framework by introducing WeQA, a first-of-its-kind benchmark on the wind energy domain, which comprises multiple scientific documents and reports related to environmental aspects of wind energy projects. Our framework systematically evaluates RAG performance using diverse metrics and multiple question types with varying complexity levels, providing a foundation for rigorous assessment of RAG-based systems in complex scientific domains and enabling researchers to identify areas for improvement in domain-specific applications.
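The human-AI teaming loop described in this abstract can be pictured as: an LLM drafts candidate question-answer pairs from a document passage, and a domain expert accepts or rejects each candidate. The sketch below is only a minimal illustration of that loop, not the paper's implementation; generate_qa_pairs and expert_approves are hypothetical placeholders for whatever LLM API and expert review interface are actually used.

    from dataclasses import dataclass

    @dataclass
    class QAPair:
        question: str
        answer: str
        source_passage: str

    def generate_qa_pairs(passage: str, n: int = 3) -> list[QAPair]:
        # Hypothetical placeholder: an LLM call that drafts n candidate
        # question-answer pairs grounded in the given passage.
        raise NotImplementedError("plug in an LLM client here")

    def expert_approves(pair: QAPair) -> bool:
        # Hypothetical placeholder: the domain-expert review step.
        raise NotImplementedError("plug in a review interface here")

    def build_benchmark(passages: list[str]) -> list[QAPair]:
        # Human-AI teaming: the LLM proposes candidates, the expert filters them.
        benchmark: list[QAPair] = []
        for passage in passages:
            for pair in generate_qa_pairs(passage):
                if expert_approves(pair):
                    benchmark.append(pair)
        return benchmark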
2024
Evaluating the Effectiveness of Retrieval-Augmented Large Language Models in Scientific Document Reasoning
Sai Munikoti | Anurag Acharya | Sridevi Wagle | Sameera Horawalavithana
Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)
Despite the dramatic progress in Large Language Model (LLM) development, LLMs often provide seemingly plausible but non-factual information, commonly referred to as hallucinations. Retrieval-augmented LLMs offer a non-parametric approach to mitigating these issues by retrieving relevant information from external data sources and augmenting the training process. These models help trace evidence from an externally provided knowledge base, allowing model predictions to be better interpreted and verified. In this work, we critically evaluate these models on their ability to perform scientific document reasoning tasks. To this end, we tuned multiple such model variants with science-focused instructions and evaluated them on a scientific document reasoning benchmark, assessing the usefulness of the retrieved document passages. Our findings suggest that these models justify predictions in science tasks with fabricated evidence, and that leveraging a scientific corpus as pretraining data does not alleviate the risk of evidence fabrication.
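For context, a retrieval-augmented setup of the kind evaluated here retrieves supporting passages and places them in the model's context so that predictions can be traced back to cited evidence. The sketch below is only an illustrative retriever, not the specific models or instruction tuning studied in the paper; it assumes the sentence-transformers package and the all-MiniLM-L6-v2 embedding model purely as an example.

    import numpy as np
    from sentence_transformers import SentenceTransformer

    def retrieve(query: str, passages: list[str], k: int = 3) -> list[str]:
        # Embed the query and candidate passages, then rank by cosine similarity.
        model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice
        emb = model.encode([query] + passages, convert_to_numpy=True)
        q, docs = emb[0], emb[1:]
        scores = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q))
        top = np.argsort(scores)[::-1][:k]
        return [passages[i] for i in top]

    def build_prompt(query: str, passages: list[str]) -> str:
        # The retrieved evidence is prepended to the question so that answers
        # can be checked against the passages the model actually saw.
        context = "\n\n".join(retrieve(query, passages))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

Checking generated answers against the passages placed in such a prompt is what makes it possible to distinguish supported answers from the fabricated evidence reported above.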