Ashley Harkin


2025

My Climate CoPilot: A Question Answering System for Climate Adaptation in Agriculture
Vincent Nguyen | Willow Hallgren | Ashley Harkin | Mahesh Prakash | Sarvnaz Karimi
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)

Accurately answering climate science questions requires scientific literature and climate data. Interpreting climate literature and data, however, presents inherent challenges such as determining relevant climate factors and drivers, interpreting uncertainties in the science and data, and dealing with the sheer volume of data. My Climate CoPilot is a platform that assists a range of potential users, such as farmer advisors, to mitigate and adapt to projected climate changes by providing answers to questions that are grounded in evidence. It emphasises transparency, user privacy, and low-resource use, and provides automatic evaluation. It also strives for scientific robustness and accountability. Fifty domain experts carefully evaluated every aspect of My Climate CoPilot and, based on their interactions and feedback, the system continues to evolve.

2024

My Climate Advisor: An Application of NLP in Climate Adaptation for Agriculture
Vincent Nguyen | Sarvnaz Karimi | Willow Hallgren | Ashley Harkin | Mahesh Prakash
Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)

Climate adaptation in the agricultural sector necessitates tools that equip farmers and farm advisors with relevant and trustworthy information to help increase their resilience to climate change. We introduce My Climate Advisor, a question-answering (QA) prototype that synthesises information from different data sources, such as peer-reviewed scientific literature and high-quality, industry-relevant grey literature, to generate answers, with references, to a given user’s question. Our prototype uses open-source generative models for data privacy and intellectual property protection, and retrieval-augmented generation for answer generation, grounding and provenance. While there are standard evaluation metrics for QA systems, no existing evaluation framework suits our LLM-based QA application in the climate adaptation domain. We design an evaluation framework with seven metrics based on the requirements of the domain experts to judge the generated answers from 12 different LLM-based models. Our initial evaluations through a user study via domain experts show promising usability results.
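The retrieval-augmented generation pipeline described above can be sketched in miniature. This is not the paper's implementation: the word-overlap retriever, the corpus, and the `answer_with_references` helper are all illustrative stand-ins (a real system would use a proper retriever and prompt an open-source LLM with the retrieved context), but the shape — retrieve, generate from retrieved passages, return the answer together with its source references for provenance — matches the abstract's description.

```python
import re

def retrieve(question, corpus, k=2):
    """Rank documents by word overlap with the question (toy retriever)."""
    q_words = set(re.findall(r"\w+", question.lower()))
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(re.findall(r"\w+", item[1].lower()))),
        reverse=True,
    )
    return scored[:k]

def answer_with_references(question, corpus):
    """Ground the answer in retrieved passages and cite their sources."""
    hits = retrieve(question, corpus)
    context = " ".join(text for _, text in hits)
    # A real system would prompt a generative model with `context` here;
    # the context itself serves as a stand-in answer in this sketch.
    references = [doc_id for doc_id, _ in hits]
    return {"answer": context, "references": references}

# Hypothetical mini-corpus for illustration only.
corpus = {
    "doc1": "Projected rainfall decline affects wheat yield in southern regions.",
    "doc2": "Heat stress thresholds for dairy cattle under climate change.",
    "doc3": "Wheat varieties bred for drought tolerance and rainfall variability.",
}
result = answer_with_references("How does rainfall decline affect wheat?", corpus)
```

Returning the reference list alongside the generated answer is what lets users trace each claim back to the literature it was grounded in.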