LLM Agents for Coordinating Multi-User Information Gathering

Harsh Jhamtani, Jacob Andreas, Benjamin Van Durme


Abstract
This paper introduces PeopleJoin, a benchmark for evaluating LM-mediated collaborative problem solving. Given a user request, PeopleJoin agents must identify teammates who might be able to assist, converse with these teammates to gather information, and finally compile a useful answer or summary for the original user. PeopleJoin comprises two evaluation domains: PeopleJoin-QA, focused on questions about tabular data, and PeopleJoin-DocCreation, focused on document creation tasks. The two domains are adapted from existing NLP benchmarks for database question answering and multi-document summarization; here, however, the information needed to complete these tasks is distributed across synthetic “organizations” of 2–20 users, simulating natural multi-user collaboration scenarios. We implement several popular LM agent architectures, evaluate their accuracy and efficiency at completing tasks, and highlight new research questions that can be studied using PeopleJoin.
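The abstract outlines a three-step agent workflow: identify relevant teammates, converse with them to gather information, and compile a final answer for the requesting user. The sketch below is a minimal, hypothetical illustration of that loop, not the PeopleJoin reference implementation; the class, the `ask_teammate` helper, and the relevance heuristic are all assumptions standing in for LM-driven components.

```python
# Hypothetical sketch of the multi-user information-gathering loop described
# in the abstract. All names (Orchestrator, ask_teammate, compile_answer) are
# illustrative assumptions, not the benchmark's actual implementation.
from dataclasses import dataclass, field


def ask_teammate(name: str, question: str) -> str:
    """Stub standing in for a real (LM-mediated) message exchange with a teammate."""
    return f"(simulated reply from {name})"


@dataclass
class Orchestrator:
    """Coordinates a single user request across an organization of teammates."""
    teammates: dict[str, str]                       # teammate name -> short profile/role
    gathered: list[str] = field(default_factory=list)

    def identify_relevant(self, request: str) -> list[str]:
        # In practice an LM would rank teammates by likely relevance; here we
        # naively return anyone whose profile shares a word with the request.
        words = set(request.lower().split())
        return [name for name, profile in self.teammates.items()
                if words & set(profile.lower().split())]

    def converse(self, request: str, names: list[str]) -> None:
        # Placeholder for multi-turn dialogue with each selected teammate.
        for name in names:
            reply = ask_teammate(name, request)
            self.gathered.append(f"{name}: {reply}")

    def compile_answer(self, request: str) -> str:
        # An LM would summarize the gathered snippets into an answer or document.
        header = f"Answer to '{request}' based on {len(self.gathered)} replies:"
        return "\n".join([header, *self.gathered])
```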
Anthology ID:
2025.findings-acl.916
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
17800–17826
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.916/
Cite (ACL):
Harsh Jhamtani, Jacob Andreas, and Benjamin Van Durme. 2025. LLM Agents for Coordinating Multi-User Information Gathering. In Findings of the Association for Computational Linguistics: ACL 2025, pages 17800–17826, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
LLM Agents for Coordinating Multi-User Information Gathering (Jhamtani et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.916.pdf