Ashish Kundu
2026
Role-Conditioned Refusals: Evaluating Access Control Reasoning in Large Language Models
Đorđe Klisura | Joseph Khoury | Ashish Kundu | Ram Krishnan | Anthony Rios
Findings of the Association for Computational Linguistics: EACL 2026
Access control is a cornerstone of secure computing, yet large language models often blur role boundaries by producing unrestricted responses. We study role-conditioned refusals, focusing on an LLM’s ability to adhere to access control policies by answering when authorized and refusing when not. To evaluate this behavior, we create a novel dataset that extends the Spider and BIRD text-to-SQL datasets with realistic PostgreSQL role-based policies at the table and column levels. We compare three designs: (i) zero- or few-shot prompting, (ii) a two-step generator-verifier pipeline that checks generated SQL against the policy, and (iii) LoRA fine-tuned models that learn permission awareness directly. Across multiple model families, explicit verification (the two-step framework) improves refusal precision and lowers false permits, while fine-tuning achieves a stronger balance between safety and utility (i.e., execution accuracy). Longer and more complex policies consistently reduce the reliability of all systems. We release the RBAC-augmented datasets and code.
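The verification step in such a generator-verifier pipeline can be sketched as a policy check over the tables and columns a generated query references. The policy structure, helper names, and the crude reference extraction below are illustrative assumptions for this sketch, not the paper's actual implementation (which targets PostgreSQL role-based policies):

```python
import re

# Hypothetical role-based policy: role -> table -> set of permitted columns.
# Table and column names here are illustrative only.
POLICY = {
    "analyst": {"orders": {"id", "total"}, "customers": {"id", "region"}},
    "intern": {"orders": {"id"}},
}

def extract_references(sql):
    """Very rough extraction of table.column references from a SQL string.
    A real verifier would parse the query properly; this is a sketch."""
    return re.findall(r"\b(\w+)\.(\w+)\b", sql)

def verify(role, sql):
    """Permit the query only if every referenced table.column is covered
    by the role's grants; otherwise refuse and report the violations."""
    grants = POLICY.get(role, {})
    violations = [f"{t}.{c}" for t, c in extract_references(sql)
                  if c not in grants.get(t, set())]
    return (not violations, violations)
```

For example, `verify("intern", "SELECT orders.total FROM orders")` refuses because the `intern` role is not granted `orders.total`, while the same query is permitted for `analyst`.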
2025
AssistedDS: Benchmarking How External Domain Knowledge Assists LLMs in Automated Data Science
An Luo | Xun Xian | Jin Du | Fangqiao Tian | Ganghua Wang | Ming Zhong | Shengchun Zhao | Xuan Bi | Zirui Liu | Jiawei Zhou | Jayanth Srinivasa | Ashish Kundu | Charles Fleming | Mingyi Hong | Jie Ding
Findings of the Association for Computational Linguistics: EMNLP 2025
Large language models (LLMs) have advanced the automation of data science workflows. Yet it remains unclear whether they can critically leverage external domain knowledge as human data scientists do in practice. To answer this question, we introduce AssistedDS (Assisted Data Science), a benchmark designed to systematically evaluate how LLMs handle domain knowledge in tabular prediction tasks. AssistedDS features both synthetic datasets with explicitly known generative mechanisms and real-world Kaggle competitions, each accompanied by curated bundles of helpful and adversarial documents. These documents provide domain-specific insights into data cleaning, feature engineering, and model selection. We assess state-of-the-art LLMs on their ability to discern and apply beneficial versus harmful domain knowledge, evaluating submission validity, information recall, and predictive performance. Our results reveal three key findings: (1) LLMs frequently adopt provided information uncritically, significantly impairing their predictive performance when adversarial content is introduced; (2) helpful guidance is often insufficient to counteract the negative influence of adversarial information; and (3) on Kaggle datasets, LLMs often mishandle time-series data, fail to apply feature engineering consistently across folds, and misinterpret categorical variables. These findings highlight a substantial gap in current models’ ability to critically evaluate and leverage expert knowledge, underscoring an essential research direction for developing more robust, knowledge-aware automated data science systems. Our data and code are publicly available [here](https://github.com/jeremyxianx/Assisted-DS).