2024
Large Language Models as Zero-shot Dialogue State Tracker through Function Calling
Zekun Li, Zhiyu Zoey Chen, Mike Ross, Patrick Huber, Seungwhan Moon, Zhaojiang Lin, Luna Dong, Adithya Sagar, Xifeng Yan, Paul A. Crook
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) are increasingly prevalent in conversational systems due to their advanced understanding and generative capabilities in general contexts. However, their effectiveness in task-oriented dialogues (TOD), which require not only response generation but also effective dialogue state tracking (DST) within specific tasks and domains, remains less satisfactory. In this work, we propose FnCTOD, a novel approach for solving DST with LLMs through function calling. This method improves zero-shot DST, allowing adaptation to diverse domains without extensive data collection or model tuning. Our experimental results demonstrate that the approach achieves exceptional performance with both modestly sized open-source and proprietary LLMs: with in-context prompting it enables various 7B or 13B parameter models to surpass the previous state-of-the-art (SOTA) achieved by ChatGPT, and improves ChatGPT's performance, beating the SOTA by 5.6% average joint goal accuracy (JGA). Individual model results for GPT-3.5 and GPT-4 are boosted by 4.8% and 14%, respectively. We also show that by fine-tuning on a small collection of diverse task-oriented dialogues, we can equip modestly sized models, specifically a 13B parameter LLaMA2-Chat model, with function-calling capabilities and DST performance comparable to ChatGPT while maintaining their chat capabilities. The code is publicly available at https://github.com/facebookresearch/FnCTOD.
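For readers unfamiliar with the function-calling formulation of DST, the sketch below illustrates the general idea: a domain's trackable slots are exposed to the LLM as the parameters of a function, and the model's function-call output is parsed back into a slot-value dialogue state. The domain schema, slot names, prompt wording, and parsing format here are hypothetical illustrations only, not the prompts or code used in the paper; see the linked repository for the actual implementation.

    # Illustrative sketch (not the paper's actual prompt or schema): treating
    # dialogue state tracking as function calling. The hotel-booking schema and
    # slot names below are hypothetical examples.
    import json

    # A domain is exposed to the LLM as a "function" whose arguments are the
    # trackable slots; the model records the dialogue state by emitting a call.
    hotel_function = {
        "name": "book_hotel",
        "description": "Track the user's hotel-booking constraints.",
        "parameters": {
            "type": "object",
            "properties": {
                "area": {"type": "string", "enum": ["north", "south", "centre"]},
                "price_range": {"type": "string", "enum": ["cheap", "moderate", "expensive"]},
                "stars": {"type": "integer"},
            },
        },
    }

    def build_prompt(dialogue_history):
        """Assemble a function-calling style prompt from the function spec and
        the conversation so far (format is illustrative only)."""
        return (
            "You can call the following function to record the dialogue state:\n"
            + json.dumps(hotel_function, indent=2)
            + "\n\nConversation:\n"
            + "\n".join(dialogue_history)
            + "\nRespond with a function call capturing all constraints mentioned so far."
        )

    def parse_state(model_output):
        """Map a (mock) function-call response back to a slot-value dialogue state."""
        call = json.loads(model_output)
        return {f"hotel-{slot}": value for slot, value in call.get("arguments", {}).items()}

    if __name__ == "__main__":
        history = ["User: I need a cheap hotel in the centre."]
        print(build_prompt(history))
        # A model following the schema might return something like:
        mock_response = '{"name": "book_hotel", "arguments": {"area": "centre", "price_range": "cheap"}}'
        print(parse_state(mock_response))  # {'hotel-area': 'centre', 'hotel-price_range': 'cheap'}

Because the state is produced as structured arguments rather than free text, zero-shot transfer to a new domain amounts to swapping in that domain's function schema, which is what makes this formulation attractive for DST without per-domain data collection or tuning.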