Woojeong Kim
2025
DICE-BENCH: Evaluating the Tool-Use Capabilities of Large Language Models in Multi-Round, Multi-Party Dialogues
Kyochul Jang | Donghyeon Lee | Kyusik Kim | Dongseok Heo | Taewhoo Lee | Woojeong Kim | Bongwon Suh
Findings of the Association for Computational Linguistics: ACL 2025
Existing function-calling benchmarks focus on single-turn interactions, overlooking the complexity of real-world scenarios. To quantify how well existing benchmarks reflect practical applications, we introduce DICE-SCORE, a metric that evaluates the dispersion of tool-related information, such as function names and parameter values, throughout a dialogue. Analyzing existing benchmarks with DICE-SCORE reveals notably low scores, highlighting the need for more realistic scenarios. To address this gap, we present DICE-BENCH, a framework that constructs practical function-calling datasets by synthesizing conversations through a tool graph that maintains dependencies across rounds and a multi-agent system with distinct personas to enhance dialogue naturalness. The final dataset comprises 1,607 high-DICE-SCORE instances. Our experiments on 19 LLMs with DICE-BENCH show that significant advances are still required before such models can be deployed effectively in real-world settings. Our code and data are publicly available.
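The dispersion idea behind DICE-SCORE can be illustrated with a toy sketch. This is not the paper's actual metric; the function name, scoring rule, and example dialogue below are all illustrative assumptions. The intuition: the more turns over which the tool-related information (function name, parameter values) is scattered, the more of the conversation a model must track to make a correct call.

```python
def dispersion_score(turns, tool_items):
    """Toy proxy for information dispersion (NOT the paper's DICE-SCORE):
    the fraction of dialogue turns that mention at least one tool-related
    item (function name or parameter value). Higher means the information
    a model must collect is spread across more of the conversation."""
    if not turns:
        return 0.0
    hits = sum(
        1 for turn in turns
        if any(item.lower() in turn.lower() for item in tool_items)
    )
    return hits / len(turns)

# Hypothetical multi-round dialogue: parameter values surface in
# different turns rather than in a single request.
dialogue = [
    "Can you book a table for us?",
    "Sure, which restaurant?",
    "Let's do Luigi's, Friday at 7pm.",
    "How many people?",
    "Four of us.",
]
items = ["Luigi's", "7pm", "Four"]
print(dispersion_score(dialogue, items))  # 2 of 5 turns carry tool info -> 0.4
```

A single-turn benchmark instance would pack all of `items` into one utterance, concentrating the information in one turn; the multi-round dialogues DICE-BENCH targets spread it out, which this kind of measure rewards.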