Prannaya Gupta


2025

pdf bib
Evaluating AI for Finance: Is AI Credible at Assessing Investment Risk Appetite?
Divij Chawla | Ashita Bhutada | Duc Anh Do | Abhinav Raghunathan | Vinod Sp | Cathy Guo | Dar Win Liew | Prannaya Gupta | Rishabh Bhardwaj | Rajat Bhardwaj | Soujanya Poria
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track

We assess whether AI systems can credibly evaluate investment risk appetite—a task that must be thoroughly validated before automation. Our analysis was conducted on proprietary systems (GPT, Claude, Gemini) and open-weight models (LLaMA, DeepSeek, Mistral), using carefully curated user profiles that reflect real users with varying attributes such as country and gender. As a result, the models exhibit significant variance in score distributions when user attributes—such as country or gender—that should not influence risk computation are changed. For example, GPT-4o assigns higher risk scores to Nigerian and Indonesian profiles. While some models align closely with expected scores in the low- and mid-risk ranges, none maintain consistent scores across regions and demographics, thereby violating AI and finance regulations.

2024

pdf bib
WalledEval: A Comprehensive Safety Evaluation Toolkit for Large Language Models
Prannaya Gupta | Le Qi Yau | Hao Han Low | I-Shiang Lee | Hugo Maximus Lim | Yu Xin Teoh | Koh Jia Hng | Dar Win Liew | Rishabh Bhardwaj | Rajat Bhardwaj | Soujanya Poria
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

WalledEval is a comprehensive AI safety testing toolkit designed to evaluate large language models (LLMs). It accommodates a diverse range of models, including both open-weight and API-based ones, and features over 35 safety benchmarks covering areas such as multilingual safety, exaggerated safety, and prompt injections. The framework supports both LLM and judge benchmarking, and incorporates custom mutators to test safety against various text-style mutations such as future tense and paraphrasing. Additionally, WalledEval introduces WalledGuard, a new, small and performant content moderation tool, and SGXSTest, a benchmark for assessing exaggerated safety in cultural contexts. We make WalledEval publicly available at https://github.com/walledai/walledeval with a demonstration video at https://youtu.be/50Zy97kj1MA.