Task-Level Thinking Steps Help Large Language Models for Challenging Classification Task

Chunhui Du, Jidong Tian, Haoran Liao, Jindou Chen, Hao He, Yaohui Jin


Abstract
Large language models (LLMs) have shown remarkable performance on many tasks such as dialogue generation, commonsense reasoning, and question answering. In-context learning (ICL) is an important paradigm for adapting LLMs to downstream tasks by prompting with a few demonstrations. However, the distribution of demonstrations can severely affect performance, especially on challenging classification tasks. In this paper, we propose the concept of task-level thinking steps, which can eliminate the bias introduced by demonstrations. Furthermore, to help LLMs distinguish confusing classes, we design a progressive revision framework that improves the thinking steps by correcting hard demonstrations. Experimental results demonstrate the superiority of our proposed method, which achieves the best performance on three kinds of challenging classification tasks in both the zero-shot and few-shot settings. Moreover, with task-level thinking steps, automatically generated chain-of-thoughts (CoTs) yield more competitive performance.
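As a rough illustration of the approach described in the abstract, the Python sketch below shows the two components at a high level: prepending task-level thinking steps to a few-shot prompt, and progressively revising those steps on misclassified hard demonstrations. The prompt format, the classify/revise callables (stand-ins for LLM calls), and the revision budget are all assumptions for illustration, not the paper's actual implementation.

# Minimal sketch, assuming a plain-text prompt format; `classify` and
# `revise` are hypothetical placeholders for LLM calls.

def build_prompt(thinking_steps, demonstrations, query):
    """Compose an ICL prompt with task-level thinking steps up front."""
    parts = ["Task-level thinking steps:"]
    parts += [f"{i + 1}. {step}" for i, step in enumerate(thinking_steps)]
    for text, label in demonstrations:
        parts.append(f"Input: {text}\nLabel: {label}")
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

def progressive_revision(thinking_steps, hard_demos, classify, revise, budget=5):
    """Revise the thinking steps until all hard demonstrations are
    classified correctly or the revision budget runs out."""
    for _ in range(budget):
        wrong = [(x, y) for x, y in hard_demos
                 if classify(thinking_steps, x) != y]
        if not wrong:
            break  # current steps handle every hard demonstration
        thinking_steps = revise(thinking_steps, wrong)
    return thinking_steps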
Anthology ID:
2023.emnlp-main.150
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2454–2470
URL:
https://aclanthology.org/2023.emnlp-main.150
DOI:
10.18653/v1/2023.emnlp-main.150
Cite (ACL):
Chunhui Du, Jidong Tian, Haoran Liao, Jindou Chen, Hao He, and Yaohui Jin. 2023. Task-Level Thinking Steps Help Large Language Models for Challenging Classification Task. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2454–2470, Singapore. Association for Computational Linguistics.
Cite (Informal):
Task-Level Thinking Steps Help Large Language Models for Challenging Classification Task (Du et al., EMNLP 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2023.emnlp-main.150.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-4/2023.emnlp-main.150.mp4