Estimating Large Language Model Capabilities without Labeled Test Data

Harvey Fu, Qinyuan Ye, Albert Xu, Xiang Ren, Robin Jia


Abstract
Large Language Models (LLMs) have exhibited an impressive ability to perform in-context learning (ICL) from only a few examples, but the success of ICL varies widely from task to task. Thus, it is important to quickly determine whether ICL is applicable to a new task, but directly evaluating ICL accuracy can be costly precisely when test data is expensive to annotate—the exact situations where ICL is most appealing. In this paper, we propose the task of ICL accuracy estimation, in which we predict the accuracy of an LLM when doing in-context learning on a new task given only unlabeled test data for that task. To perform ICL accuracy estimation, we propose a method that trains a meta-model using LLM confidence scores as features. We compare our method to several strong accuracy estimation baselines on a new benchmark that covers 4 LLMs and 3 task collections. The meta-model outperforms all baselines in 7 of 12 settings and achieves the same estimation performance as directly evaluating on 40 labeled test examples per task. At the same time, no existing approach provides an accurate and reliable ICL accuracy estimation in every setting, highlighting the need for better ways to measure the uncertainty of LLM predictions.
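The core idea—fitting a meta-model that maps LLM confidence scores on unlabeled inputs to task accuracy—can be illustrated with a minimal sketch. Everything below is hypothetical: the feature choices (mean and spread of max-probability scores), the simple least-squares regressor, and the synthetic data are illustrative stand-ins, not the paper's actual features or meta-model.

```python
import random
import statistics

# Hypothetical sketch of ICL accuracy estimation with a confidence-based
# meta-model. Features and regressor are illustrative, not the paper's setup.

def confidence_features(max_probs):
    """Summarize an LLM's per-example max-probability confidence scores
    on one task's unlabeled test inputs into meta-model features."""
    return [statistics.mean(max_probs), statistics.pstdev(max_probs)]

def fit_meta_model(feature_rows, accuracies):
    """Fit a one-feature least-squares line: accuracy ~ a * mean_conf + b.
    (A real meta-model would use more features and a richer regressor.)"""
    xs = [row[0] for row in feature_rows]
    ys = accuracies
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def estimate_accuracy(model, max_probs):
    """Predict accuracy on a new task from unlabeled inputs only."""
    a, b = model
    return a * statistics.mean(max_probs) + b

# Simulate "training" tasks where labeled accuracy is known; confidence
# scores are drawn to loosely track the true accuracy.
random.seed(0)
train_features, train_accs = [], []
for _ in range(20):
    acc = random.uniform(0.3, 0.9)
    probs = [min(1.0, max(0.0, random.gauss(acc, 0.05))) for _ in range(50)]
    train_features.append(confidence_features(probs))
    train_accs.append(acc)

model = fit_meta_model(train_features, train_accs)

# Estimate accuracy on an unseen task given only its confidence scores.
new_task_probs = [random.gauss(0.7, 0.05) for _ in range(50)]
print(round(estimate_accuracy(model, new_task_probs), 2))
```

The point of the sketch is the interface, not the model: training requires some tasks with labeled accuracies, but at estimation time only confidence scores on unlabeled inputs are needed.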
Anthology ID:
2023.findings-emnlp.639
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9530–9546
URL:
https://aclanthology.org/2023.findings-emnlp.639
DOI:
10.18653/v1/2023.findings-emnlp.639
Cite (ACL):
Harvey Fu, Qinyuan Ye, Albert Xu, Xiang Ren, and Robin Jia. 2023. Estimating Large Language Model Capabilities without Labeled Test Data. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9530–9546, Singapore. Association for Computational Linguistics.
Cite (Informal):
Estimating Large Language Model Capabilities without Labeled Test Data (Fu et al., Findings 2023)
PDF:
https://preview.aclanthology.org/naacl24-info/2023.findings-emnlp.639.pdf