Xiao Ma
2025
ProcWorld: Benchmarking Large Model Planning in Reachability-Constrained Environments
Dong Wang | Xinghang Li | Zhengshen Zhang | Jirong Liu | Xiao Ma | Hanbo Zhang | Tao Kong | Huaping Liu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
We introduce ProcWorld, a large-scale benchmark for partially observable embodied spatial reasoning and long-term planning with large language models (LLMs) and vision-language models (VLMs). ProcWorld features a wide range of challenging embodied navigation and object manipulation tasks, covering 16 task types, 5,000 rooms, and over 10 million evaluation trajectories with diverse data distributions. ProcWorld supports configurable observation modes, ranging from text-only descriptions to vision-only observations, and enables text-based actions to control the agent following language instructions. ProcWorld presents significant challenges for LLMs and VLMs: (1) active information gathering under partial observations for disambiguation; (2) simultaneous localization and decision-making by tracking the spatio-temporal state-action distribution; (3) constrained reasoning over dynamic states subject to physical reachability. Our extensive evaluation of 15 foundation models and 5 reasoning algorithms (over 1 million rollouts) indicates that larger models perform better. However, ProcWorld remains highly challenging for existing state-of-the-art models and in-context learning methods due to constrained reachability and the need for combinatorial spatial reasoning.
2023
Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting
Preethi Lahoti | Nicholas Blumm | Xiao Ma | Raghavendra Kotikalapudi | Sahitya Potluri | Qijun Tan | Hansa Srinivasan | Ben Packer | Ahmad Beirami | Alex Beutel | Jilin Chen
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
A crucial challenge for generative large language models (LLMs) is diversity: when a user's prompt is under-specified, models may follow implicit assumptions when generating a response, which can homogenize the responses and leave certain demographic groups under-represented or even erased from the generated outputs. In this paper, we formalize the problem of diversity of representation in LLM generations. We present evaluation datasets and propose metrics to measure diversity in generated responses along people and culture axes. We find that LLMs understand the notion of diversity, and that they can reason about and critique their own responses toward that goal. This finding motivates a new prompting technique called collective-critique and self-voting (CCSV), which improves the people diversity of LLMs by tapping into their diversity-reasoning capabilities, without relying on handcrafted examples or prompt tuning. Extensive empirical experiments with both human and automated evaluations show that our proposed approach is effective at improving people and culture diversity, and outperforms all baseline methods by a large margin.