Surgan Jandial
2025
On the Fine-Grained Planning Abilities of VLM Web Agents
Surgan Jandial | Yinong Oliver Wang | Andrea Bajcsy | Fernando De la Torre
Findings of the Association for Computational Linguistics: EMNLP 2025
Vision-Language Models (VLMs) have shown promise as web agents, yet their planning—the ability to devise strategies or action sequences to complete tasks—remains understudied. While prior works focus on VLMs’ perception and overall success rates (i.e., goal completion), fine-grained investigation of their planning has been overlooked. To address this gap, we examine VLMs’ capability to (1) understand temporal relationships within web contexts, and (2) assess plans of actions across diverse scenarios. We design four simple yet effective tests to probe these nuanced aspects of planning. Our results across nineteen VLMs reveal that these models exhibit limited performance on these skills and are not yet reliable enough to function as web agents. To facilitate future work, we release our planning evaluations and data, providing a foundation for future research in this area.
2024
“Thinking” Fair and Slow: On the Efficacy of Structured Prompts for Debiasing Language Models
Shaz Furniturewala | Surgan Jandial | Abhinav Java | Pragyan Banerjee | Simra Shahid | Sumit Bhatia | Kokil Jaidka
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Existing debiasing techniques are typically training-based or require access to the model’s internals and output distributions, making them inaccessible to end-users looking to adapt LLM outputs to their particular needs. In this study, we examine whether structured prompting techniques can offer opportunities for fair text generation. We evaluate a comprehensive, end-user-focused, iterative framework for debiasing that applies System 2 thinking processes to prompts, inducing logical, reflective, and critical text generation, with single-step, multi-step, instruction-based, and role-based variants. By systematically evaluating many LLMs across many datasets and different prompting strategies, we show that the more complex System 2-based Implicative Prompts significantly improve over other techniques, demonstrating lower mean bias in the outputs while maintaining competitive performance on downstream tasks. Our work offers research directions on the design and potential of end-user-focused evaluative frameworks for LLM use.
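As a rough illustration only, the sketch below shows what a multi-step, System 2-style debiasing sequence might look like in practice: an initial draft, a critical reflection step, and a deliberate revision. The prompt wording, the system2_debias_prompt and run_debiased helpers, and the generate callable are assumptions for demonstration, not the prompts or code released with the paper.

    # Illustrative sketch only: prompt templates and the `generate` callable
    # are assumptions for demonstration, not the paper's actual framework.

    def system2_debias_prompt(task_prompt: str) -> list[str]:
        """Wrap a user prompt in a multi-step, System 2-style reflection sequence."""
        return [
            # Step 1: produce an initial draft.
            task_prompt,
            # Step 2: reflect critically on possible social biases in the draft.
            "Review your previous answer. Identify any assumptions or stereotypes "
            "about gender, race, religion, age, or other social groups.",
            # Step 3: revise deliberately, keeping the answer useful for the task.
            "Rewrite your answer so it avoids the biases you identified, while "
            "preserving the information needed to complete the original task.",
        ]

    def run_debiased(generate, task_prompt: str) -> str:
        """Run the multi-step sequence through any text-generation callable.

        `generate(history)` is a user-supplied function that takes the list of
        messages so far and returns the model's next reply as a string.
        """
        history: list[str] = []
        reply = ""
        for step in system2_debias_prompt(task_prompt):
            history.append(step)
            reply = generate(history)
            history.append(reply)
        return reply  # final, revised response

    if __name__ == "__main__":
        # Stub generator so the sketch runs without any API key or model access.
        echo = lambda history: f"[model reply to: {history[-1][:40]}...]"
        print(run_debiased(echo, "Describe a typical software engineer."))

Because the whole sequence lives in the prompts, an end-user can apply this kind of procedure to any hosted LLM without access to weights, internals, or output distributions, which is the accessibility constraint the abstract highlights.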