Ensembling Prompting Strategies for Zero-Shot Hierarchical Text Classification with Large Language Models

Mingxuan Xia, Zhijie Jiang, Haobo Wang, Junbo Zhao, Tianlei Hu, Gang Chen

Abstract
Hierarchical text classification (HTC) aims to classify documents into multiple labels within a hierarchical taxonomy, making it an essential yet challenging task in natural language processing. Recently, using Large Language Models (LLMs) to tackle HTC in a zero-shot manner has attracted increasing attention due to their cost-efficiency and flexibility. Given the challenges of understanding the hierarchy, various HTC prompting strategies have been explored to elicit the best performance from LLMs. However, our empirical study reveals that LLMs are highly sensitive to these prompting strategies: (i) within a task, different strategies yield substantially different results, and (ii) across tasks, the relative effectiveness of a given strategy varies significantly. To address this, we propose a novel ensemble method, HiEPS, which integrates the results of diverse prompting strategies to improve LLMs’ reliability. We also introduce a path-valid voting mechanism for ensembling, which selects a valid result with the highest path frequency score. Extensive experiments on three benchmark datasets show that HiEPS boosts the performance of single prompting strategies and achieves state-of-the-art results. The source code is available at https://github.com/MingxuanXia/HiEPS.
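The abstract only outlines the path-valid voting mechanism, so the sketch below is a rough illustration of the idea rather than the paper's implementation. It assumes each prompting strategy returns a root-to-leaf label path and that the taxonomy is a parent-to-children mapping; the function name path_valid_vote and the label-frequency scoring heuristic are our assumptions, not taken from the paper.

from collections import Counter

def path_valid_vote(candidate_paths, taxonomy):
    # candidate_paths: one root-to-leaf label path per prompting strategy,
    # e.g. [("Science", "Physics"), ("Science", "Chemistry"), ...].
    # taxonomy: dict mapping each parent label to the set of its children.
    def is_valid(path):
        # A path is valid only if every consecutive label pair is a real
        # parent -> child edge in the taxonomy.
        return all(child in taxonomy.get(parent, set())
                   for parent, child in zip(path, path[1:]))

    valid = [tuple(p) for p in candidate_paths if is_valid(p)]
    if not valid:
        return None  # in practice, fall back to a single strategy's output

    # Assumed "path frequency score": count how often each label occurs
    # across all valid candidates, then score a path by the sum of its
    # labels' counts; the highest-scoring path wins.
    label_counts = Counter(label for path in valid for label in path)
    return max(valid, key=lambda p: sum(label_counts[l] for l in p))

# Toy usage: three strategies vote; the majority path wins.
taxonomy = {"Science": {"Physics", "Chemistry"}}
votes = [("Science", "Physics"), ("Science", "Physics"), ("Science", "Chemistry")]
print(path_valid_vote(votes, taxonomy))  # -> ('Science', 'Physics')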
Anthology ID:
2025.emnlp-main.918
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
18200–18219
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.918/
Cite (ACL):
Mingxuan Xia, Zhijie Jiang, Haobo Wang, Junbo Zhao, Tianlei Hu, and Gang Chen. 2025. Ensembling Prompting Strategies for Zero-Shot Hierarchical Text Classification with Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 18200–18219, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Ensembling Prompting Strategies for Zero-Shot Hierarchical Text Classification with Large Language Models (Xia et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.918.pdf
Checklist:
 2025.emnlp-main.918.checklist.pdf