Roman Vainshtein
2026
MAPS: A Multilingual Benchmark for Agent Performance and Security
Omer Hofman | Jonathan Brokman | Oren Rachmil | Shamik Bose | Vikas Pahuja | Toshiya Shimizu | Trisha Starostina | Kelly Marchisio | Seraphina Goldfarb-Tarrant | Roman Vainshtein
Findings of the Association for Computational Linguistics: EACL 2026
Agentic AI systems, which build on Large Language Models (LLMs) and interact with tools and memory, have rapidly advanced in capability and scope. Yet, since LLMs have been shown to struggle in multilingual settings, typically resulting in lower performance and reduced safety, agentic systems risk inheriting these limitations. This raises concerns about the accessibility of such systems, as users interacting in languages other than English may encounter unreliable or security-critical agent behavior. Despite growing interest in evaluating agentic AI and recent initial efforts toward multilingual interaction, existing benchmarks do not yet provide a comprehensive, multi-domain, security-aware evaluation of multilingual agentic systems. To address this gap, we propose MAPS, a multilingual benchmark suite designed to evaluate agentic AI systems across diverse languages and tasks. MAPS builds on four widely used agentic benchmarks: GAIA (real-world tasks), SWE-Bench (code generation), MATH (mathematical reasoning), and the Agent Security Benchmark (security). We translate each dataset into eleven diverse languages, resulting in 805 unique tasks and 9,660 total language-specific instances, enabling a systematic analysis of the Multilingual Effect on AI agents' performance and robustness. Empirically, we observe a degradation in both performance and security when transitioning from English to other languages, with severity varying by task and correlating with the amount of translated input. This work establishes the first standardized evaluation framework for multilingual agentic AI, encouraging future research towards equitable, reliable, and accessible agentic AI. https://huggingface.co/datasets/Fujitsu-FRE/MAPS
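The degradation described above can be summarized per language as a relative drop against the English baseline. A minimal sketch of such a metric follows; the function name and the scores are illustrative assumptions, not numbers or definitions from the paper:

```python
def multilingual_effect(en_score: float, lang_score: float) -> float:
    """Relative degradation versus the English baseline (0.0 = no drop).

    Illustrative helper, not the paper's exact metric.
    """
    if en_score == 0:
        raise ValueError("English baseline score must be non-zero")
    return (en_score - lang_score) / en_score


# Hypothetical per-language task success rates (purely illustrative).
scores = {"en": 0.62, "de": 0.55, "ja": 0.48}
effects = {lang: multilingual_effect(scores["en"], s)
           for lang, s in scores.items() if lang != "en"}
```

Averaging such per-language effects across the four task domains would give one simple, comparable view of how severely each language is affected.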
2025
CAIR: Counterfactual-based Agent Influence Ranker for Agentic AI Workflows
Amit Giloni | Chiara Picardi | Roy Betser | Shamik Bose | Aishvariya Priya Rathina Sabapathy | Roman Vainshtein
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
An Agentic AI Workflow (AAW), also known as an LLM-based multi-agent system, is an autonomous system that assembles several LLM-based agents to work collaboratively towards a shared goal. The high autonomy, widespread adoption, and growing interest in such AAWs highlight the need for a deeper understanding of their operations, from both quality and security aspects. To date, no methods exist to assess the influence of each agent on the AAW's final output. Adopting techniques from related fields is not feasible, since existing methods perform only static structural analysis, which is unsuitable for inference-time execution. We present Counterfactual-based Agent Influence Ranker (CAIR), the first method for assessing the influence level of each agent on the AAW's output and determining which agents are the most influential. By performing counterfactual analysis, CAIR provides a task-agnostic analysis that can be used both offline and at inference time. We evaluate CAIR on a dataset of AAWs we created, containing 30 different use cases with 230 different functionalities. Our evaluation showed that CAIR produces consistent rankings, outperforms baseline methods, and can easily enhance the effectiveness and relevancy of downstream tasks.
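The counterfactual idea can be sketched in a few lines: ablate one agent at a time, re-run the workflow, and score influence by how far the output drifts from the baseline. The toy workflow, the Jaccard similarity, and all names below are illustrative assumptions, not CAIR's actual implementation:

```python
from typing import Callable, Dict, List, Tuple


def rank_agents(
    run_workflow: Callable[[Dict[str, bool]], str],
    agents: List[str],
    similarity: Callable[[str, str], float],
) -> List[Tuple[str, float]]:
    """Rank agents by counterfactual influence on the final output."""
    baseline = run_workflow({a: True for a in agents})
    influence = {}
    for agent in agents:
        # Counterfactual run: disable exactly this one agent.
        mask = {a: (a != agent) for a in agents}
        influence[agent] = 1.0 - similarity(baseline, run_workflow(mask))
    return sorted(influence.items(), key=lambda kv: kv[1], reverse=True)


def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two textual outputs."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0


def toy_workflow(mask: Dict[str, bool]) -> str:
    """Toy two-agent pipeline: a retriever adds content, a writer adds polish."""
    parts = []
    if mask["retriever"]:
        parts.append("facts about the topic")
    if mask["writer"]:
        parts.append("polished summary")
    return " ".join(parts)


ranking = rank_agents(toy_workflow, ["retriever", "writer"], jaccard)
```

Because the ranking only needs the ability to re-run the workflow with an agent suppressed and a similarity function over outputs, the scheme is task-agnostic, matching the offline and inference-time usage described above.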
TFDP: Token-Efficient Disparity Audits for Autoregressive LLMs via Single-Token Masked Evaluation
Inderjeet Singh | Ramya Srinivasan | Roman Vainshtein | Hisashi Kojima
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Auditing autoregressive Large Language Models (LLMs) for disparities is often impeded by high token costs and limited precision. We introduce Token-Focused Disparity Probing (TFDP), a novel methodology overcoming these challenges by adapting single-token masked prediction to autoregressive architectures via targeted token querying. Disparities between minimally contrastive sentence pairs are quantified through a multi-scale semantic alignment score that integrates sentence, local-context, and token embeddings with adaptive weighting. We propose three disparity metrics for comprehensive assessment: Preference Score (PS), Prediction Set Divergence (PSD), and Weighted Final Score (WFS). Evaluated on our customized Proverbs Disparity Dataset (PDD) with controlled attribute toggles (e.g., gender bias, misinformation susceptibility), TFDP precisely detects disparities while requiring up to 42 times fewer output tokens than minimal n-token continuations, offering a scalable tool for responsible LLM evaluation.
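The single-token idea is that one targeted query yields a distribution over the masked/next token for each prompt in a minimally contrastive pair, and the two distributions can then be compared directly. A minimal sketch of one such comparison, framed as divergence between top-k prediction sets; the toy distributions and the Jaccard-based formula are illustrative assumptions, not the paper's exact PS/PSD/WFS definitions:

```python
def prediction_set_divergence(p: dict, q: dict, k: int = 2) -> float:
    """1 - Jaccard overlap between the two top-k predicted-token sets.

    Illustrative stand-in for a prediction-set comparison, not the
    paper's PSD metric.
    """
    def top_k(dist: dict) -> set:
        return set(sorted(dist, key=dist.get, reverse=True)[:k])

    a, b = top_k(p), top_k(q)
    return 1.0 - len(a & b) / len(a | b)


# Hypothetical next-token distributions for a minimally contrastive
# prompt pair (e.g., differing only in a gendered word).
p = {"doctor": 0.6, "he": 0.3, "it": 0.1}
q = {"nurse": 0.6, "she": 0.3, "it": 0.1}
divergence = prediction_set_divergence(p, q, k=2)
```

Since each prompt costs only one output token to query, comparing pairs this way stays cheap regardless of how long a free-running continuation would have been, which is the source of the token savings claimed above.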