GroundCocoa: A Benchmark for Evaluating Compositional & Conditional Reasoning in Language Models

Harsh Kohli, Sachin Kumar, Huan Sun


Abstract
The rapid progress of large language models (LLMs) has seen them excel and frequently surpass human performance on standard benchmarks. This has enabled many downstream applications, such as LLM agents, to rely on their reasoning to address complex task requirements. However, LLMs are known to unexpectedly falter on simple tasks and under seemingly straightforward circumstances, underscoring the need for better and more diverse evaluation setups to measure their true capabilities. To this end, we choose to study compositional and conditional reasoning, two aspects that are central to human cognition, and introduce GroundCocoa, a lexically diverse benchmark connecting these reasoning skills to the real-world problem of flight booking. Our task involves aligning detailed user preferences with available flight options presented in a multiple-choice format. Results indicate a significant disparity in performance among current state-of-the-art LLMs, with even the best-performing model, GPT-4 Turbo, not exceeding 67% accuracy despite advanced prompting techniques.
Anthology ID:
2025.naacl-long.420
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
8280–8295
URL:
https://preview.aclanthology.org/landing_page/2025.naacl-long.420/
Cite (ACL):
Harsh Kohli, Sachin Kumar, and Huan Sun. 2025. GroundCocoa: A Benchmark for Evaluating Compositional & Conditional Reasoning in Language Models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8280–8295, Albuquerque, New Mexico. Association for Computational Linguistics.
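BibTeX (a sketch assembled from the metadata above; the citation key is an assumption following the Anthology's usual naming pattern and is not taken from this page):
% note: the citation key below is assumed, not confirmed by the page
@inproceedings{kohli-etal-2025-groundcocoa,
    title = "GroundCocoa: A Benchmark for Evaluating Compositional \& Conditional Reasoning in Language Models",
    author = "Kohli, Harsh and Kumar, Sachin and Sun, Huan",
    editor = "Chiruzzo, Luis and Ritter, Alan and Wang, Lu",
    booktitle = "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
    month = apr,
    year = "2025",
    address = "Albuquerque, New Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/landing_page/2025.naacl-long.420/",
    pages = "8280--8295",
}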
Cite (Informal):
GroundCocoa: A Benchmark for Evaluating Compositional & Conditional Reasoning in Language Models (Kohli et al., NAACL 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.naacl-long.420.pdf