Anirudh Iyengar Kaniyar Narayana Iyengar
2025
TABARD: A Novel Benchmark for Tabular Anomaly Analysis, Reasoning and Detection
Manan Roy Choudhury | Anirudh Iyengar Kaniyar Narayana Iyengar | Shikhhar Siingh | Sugeeth Puranam | Vivek Gupta
Findings of the Association for Computational Linguistics: EMNLP 2025
We study the capabilities of large language models (LLMs) in detecting fine-grained anomalies in tabular data. Specifically, we examine: (1) how well LLMs can identify diverse anomaly types, including factual, logical, temporal, and value-based errors; (2) the impact of prompt design and prompting strategies; and (3) the effect of table structure and anomaly type on detection accuracy. To this end, we introduce TABARD, a new benchmark constructed by perturbing tables from WikiTQ, FeTaQA, Spider, and BEAVER. The dataset spans multiple domains and eight anomaly categories, and includes paired clean and corrupted tables. We evaluate LLMs using direct, indirect, and Chain-of-Thought (CoT) prompting. Our results reveal notable limitations of standard prompting, especially on complex reasoning tasks and longer tables. To overcome these issues, we propose a unified framework combining multi-step prompting, self-verification, and constraint-based rule execution. Our approach significantly improves precision and recall, offering a promising direction for robust and interpretable anomaly detection in tables.
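The constraint-based rule execution component of the framework invites a concrete illustration. The sketch below is a hypothetical rendering, not the paper's implementation: the Rule and run_rules names, the toy table, and the per-column checks are all assumptions, chosen only to show how declarative constraints could flag value-based and temporal anomalies of the kind TABARD targets.

```python
# Minimal sketch of constraint-based rule execution over a table.
# All names and rules here are illustrative assumptions, not TABARD's code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                         # anomaly category this rule targets
    column: str                       # column the rule inspects
    check: Callable[[object], bool]   # True when the cell value is plausible

def run_rules(table: list[dict], rules: list[Rule]) -> list[tuple[int, str]]:
    """Return (row_index, rule_name) pairs for cells that violate a constraint."""
    violations = []
    for i, row in enumerate(table):
        for rule in rules:
            value = row.get(rule.column)
            if value is not None and not rule.check(value):
                violations.append((i, rule.name))
    return violations

# Toy rules mirroring two of TABARD's eight categories.
rules = [
    Rule("value", "population", lambda v: v >= 0),
    Rule("temporal", "year", lambda v: 1800 <= v <= 2025),
]
table = [
    {"city": "Springfield", "population": -500, "year": 1998},   # value anomaly
    {"city": "Shelbyville", "population": 42000, "year": 3021},  # temporal anomaly
]
print(run_rules(table, rules))  # -> [(0, 'value'), (1, 'temporal')]
```

Because each rule is tied to a single anomaly category, every flagged cell carries an interpretable label, in line with the abstract's emphasis on interpretable anomaly detection.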
INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information
Anirudh Iyengar Kaniyar Narayana Iyengar | Srija Mukhopadhyay | Adnan Qidwai | Shubhankar Singh | Dan Roth | Vivek Gupta
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
We introduce InterChart, a diagnostic benchmark that evaluates how well vision-language models (VLMs) reason across multiple related charts, a task central to real-world applications such as scientific reporting, financial analysis, and public policy dashboards. Unlike prior benchmarks focusing on isolated, visually uniform charts, InterChart challenges models with diverse question types ranging from entity inference and trend correlation to numerical estimation and abstract multi-step reasoning grounded in 2–3 thematically or structurally related charts. We organize the benchmark into three tiers of increasing difficulty: (1) factual reasoning over individual charts, (2) integrative analysis across synthetically aligned chart sets, and (3) semantic inference over visually complex, real-world chart pairs. Our evaluation of state-of-the-art open- and closed-source VLMs reveals consistent and steep accuracy declines as chart complexity increases. We find that models perform better when we decompose multi-entity charts into simpler visual units, underscoring their struggles with cross-chart integration. By exposing these systematic limitations, InterChart provides a rigorous framework for advancing multimodal reasoning in complex, multi-visual environments.
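The finding that decomposition helps can be made concrete with a small evaluation harness. The sketch below is an assumption-laden illustration, not InterChart's released code: vlm_answer is a stand-in for any vision-language model call, and the example fields (charts, sub_charts, question, answer) are invented to show how one might compare accuracy on original multi-entity charts against their decomposed single-entity counterparts.

```python
# Hypothetical harness for the decomposition comparison; field names and the
# vlm_answer stub are assumptions, not InterChart's actual interface.
from typing import Callable

def accuracy(preds: list[str], golds: list[str]) -> float:
    """Exact-match accuracy after light normalization."""
    return sum(p.strip().lower() == g.strip().lower()
               for p, g in zip(preds, golds)) / len(golds)

def evaluate(examples: list[dict],
             vlm_answer: Callable[[list, str], str],
             decompose: bool) -> float:
    """Answer each question from either the original chart set or its
    decomposed single-entity sub-charts, then score the predictions."""
    preds, golds = [], []
    for ex in examples:
        images = ex["sub_charts"] if decompose else ex["charts"]
        preds.append(vlm_answer(images, ex["question"]))
        golds.append(ex["answer"])
    return accuracy(preds, golds)
```

Comparing evaluate(examples, vlm_answer, decompose=True) against decompose=False reproduces the contrast described above, with the accuracy gap quantifying how much cross-chart integration costs the model.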