Analyzing Chain-of-thought Prompting in Black-Box Large Language Models via Estimated V-information
Zecheng Wang | Chunshan Li | Zhao Yang | Qingbin Liu | Yanchao Hao | Xi Chen | Dianhui Chu | Dianbo Sui
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Chain-of-Thought (CoT) prompting combined with large language models (LLMs) has shown great potential in improving performance on challenging reasoning tasks. While understanding why CoT prompting is effective is crucial to its application and improvement, few studies have addressed this issue. Moreover, almost no prior work has conducted a theoretical analysis of CoT prompting in the context of black-box models. In this paper, we approach the analysis of CoT prompting in black-box LLMs from an information-theoretic perspective. Specifically, we propose a new metric, EPVI (Estimated Pointwise V-Information), which extends the concept of pointwise V-information to black-box models, quantifying the label-relevant new information introduced by CoT prompting beyond the pre-existing information in the input. Based on this, we conduct a series of experiments at both the task and instance levels to analyze CoT prompting, demonstrating that the effectiveness of CoT prompting can be attributed to its capacity to influence the difficulty of model inference by augmenting or reducing the model-usable information. Furthermore, we show that selecting high-quality demonstrations of CoT reasoning based on EPVI can improve the downstream performance of reasoning tasks.
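To give a concrete sense of the kind of quantity EPVI targets, pointwise V-information compares the model's confidence in the gold label with and without the additional conditioning. The sketch below is a minimal illustration under stated assumptions, not the paper's actual estimator: it assumes a hypothetical `sample_fn` that queries the black-box model and returns k parsed answers, and it approximates label probabilities by empirical sampling frequencies (with a small floor to avoid log 0), since black-box models typically do not expose token probabilities.

```python
import math
from collections import Counter
from typing import Callable, List

def estimate_label_prob(sample_fn: Callable[[str, int], List[str]],
                        prompt: str, gold_label: str, k: int = 50) -> float:
    """Estimate P(gold_label | prompt) for a black-box model by drawing
    k sampled answers and taking the empirical frequency of the gold label.
    A small floor avoids log(0) when the label is never sampled."""
    samples = sample_fn(prompt, k)
    counts = Counter(samples)
    return max(counts[gold_label] / k, 1.0 / (k * 10))

def estimated_pvi(sample_fn: Callable[[str, int], List[str]],
                  baseline_prompt: str, cot_prompt: str,
                  gold_label: str, k: int = 50) -> float:
    """Estimated pointwise V-information for one instance:
    log2 p(y | input with CoT prompt) - log2 p(y | input-only baseline prompt).
    Positive values suggest the CoT prompt adds label-relevant,
    model-usable information; negative values suggest it removes some."""
    p_base = estimate_label_prob(sample_fn, baseline_prompt, gold_label, k)
    p_cot = estimate_label_prob(sample_fn, cot_prompt, gold_label, k)
    return math.log2(p_cot) - math.log2(p_base)
```

In this reading, averaging the per-instance scores over a dataset gives a task-level estimate, while the per-instance scores themselves support instance-level analysis and demonstration selection; the precise definition and estimation procedure for EPVI are those given in the paper.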