TOWER: Tree Organized Weighting for Evaluating Complex Instructions

Noah Ziems, Zhihan Zhang, Meng Jiang


Abstract
Evaluating the ability of large language models (LLMs) to follow complex human-written instructions is essential for their deployment in real-world applications. While benchmarks like Chatbot Arena use human judges to assess model performance, they are resource-intensive and time-consuming. Alternative methods using LLMs as judges, such as AlpacaEval, MT Bench, WildBench, and InFoBench, offer improvements but still fail to capture that some aspects of a complex instruction are more important to follow than others. To address this gap, we propose a novel evaluation metric, TOWER, that incorporates human-judged importance into the assessment of complex instruction following. We show that human annotators agree with tree-based representations of these complex instructions nearly as much as they agree with other human annotators. We release tree-based annotations of the InFoBench dataset and the corresponding evaluation code to facilitate future research.
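To make the idea of tree-organized weighting concrete, below is a minimal sketch of how such a metric could be computed. This is an illustration based only on the abstract, not the authors' released implementation: the class and function names, the binary per-leaf satisfaction judgments, and the example weights are all assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RequirementNode:
    """One aspect of a complex instruction, with a human-judged importance weight."""
    text: str
    weight: float = 1.0                      # importance assigned by annotators (assumed scheme)
    children: List["RequirementNode"] = field(default_factory=list)
    satisfied: bool = False                  # leaf-level judgment of the model response

def tower_style_score(node: RequirementNode) -> float:
    """Weighted fraction of satisfied requirements, aggregated bottom-up through the tree."""
    if not node.children:
        return 1.0 if node.satisfied else 0.0
    total_weight = sum(child.weight for child in node.children)
    return sum(child.weight * tower_style_score(child) for child in node.children) / total_weight

# Example: a small instruction tree (structure, weights, and judgments are illustrative)
root = RequirementNode("Write a product announcement", children=[
    RequirementNode("Mention the release date", weight=2.0, satisfied=True),
    RequirementNode("Keep it under 100 words", weight=1.0, satisfied=False),
    RequirementNode("Use a formal tone", weight=1.0, satisfied=True),
])
print(f"Tree-weighted score: {tower_style_score(root):.2f}")  # 0.75
```

The key design point the abstract emphasizes is that a flat checklist (as in InFoBench-style evaluation) treats every requirement equally, whereas a weighted tree lets failures on high-importance aspects lower the score more than failures on minor ones.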
Anthology ID:
2024.findings-emnlp.809
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13803–13810
URL:
https://preview.aclanthology.org/icon-24-ingestion/2024.findings-emnlp.809/
DOI:
10.18653/v1/2024.findings-emnlp.809
Cite (ACL):
Noah Ziems, Zhihan Zhang, and Meng Jiang. 2024. TOWER: Tree Organized Weighting for Evaluating Complex Instructions. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13803–13810, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
TOWER: Tree Organized Weighting for Evaluating Complex Instructions (Ziems et al., Findings 2024)
PDF:
https://preview.aclanthology.org/icon-24-ingestion/2024.findings-emnlp.809.pdf