Saaket Agashe
2025
LLM-Coordination: Evaluating and Analyzing Multi-agent Coordination Abilities in Large Language Models
Saaket Agashe | Yue Fan | Anthony Reyna | Xin Eric Wang
Findings of the Association for Computational Linguistics: NAACL 2025
Large Language Models (LLMs) have demonstrated emergent common-sense reasoning and Theory of Mind (ToM) capabilities, making them promising candidates for developing coordination agents. This study introduces the LLM-Coordination Benchmark, a novel benchmark for analyzing LLMs in the context of Pure Coordination Settings, where agents must cooperate to maximize gains. Our benchmark evaluates LLMs through two distinct tasks. The first is Agentic Coordination, where LLMs act as proactive participants in four pure coordination games. The second is Coordination Question Answering (CoordQA), which tests LLMs on 198 multiple-choice questions across these games to evaluate three key abilities: Environment Comprehension, ToM Reasoning, and Joint Planning. Results from Agentic Coordination experiments reveal that LLM-Agents excel in multi-agent coordination settings where decision-making primarily relies on environmental variables but face challenges in scenarios requiring active consideration of partners’ beliefs and intentions. The CoordQA experiments further highlight significant room for improvement in LLMs’ Theory of Mind reasoning and joint planning capabilities. Zero-Shot Coordination (ZSC) experiments in the Agentic Coordination setting demonstrate that LLM agents, unlike RL methods, exhibit robustness to unseen partners. These findings indicate the potential of LLMs as Agents in pure coordination setups and underscore areas for improvement.
2021
How do people interact with biased text prediction models while writing?
Advait Bhat | Saaket Agashe | Anirudha Joshi
Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing
Recent studies have shown that a bias in the text suggestions system can percolate into the user's writing. In this pilot study, we ask the question: How do people interact with text prediction models in an inline next-phrase suggestion interface, and how does introducing sentiment bias in the text prediction model affect their writing? We present a pilot study as a first step to answer this question.