Abstract
In recent years, large language models (LLMs) have achieved strong performance on benchmark tasks, especially in zero- or few-shot settings. However, these benchmarks often do not adequately address real-world challenges such as hierarchical classification. To address this challenge, we propose refactoring conventional tasks on hierarchical datasets into a more indicative long-tail prediction task. We observe that LLMs are more prone to failure in these cases. To address these limitations, we propose using entailment-contradiction prediction in conjunction with LLMs, which allows for strong performance in a strict zero-shot setting. Importantly, our method does not require any parameter updates, a resource-intensive process, and achieves strong performance across multiple datasets.
- Anthology ID:
- 2023.acl-short.152
- Volume:
- Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Editors:
- Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1782–1792
- URL:
- https://aclanthology.org/2023.acl-short.152
- DOI:
- 10.18653/v1/2023.acl-short.152
- Cite (ACL):
- Rohan Bhambhoria, Lei Chen, and Xiaodan Zhu. 2023. A Simple and Effective Framework for Strict Zero-Shot Hierarchical Classification. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1782–1792, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- A Simple and Effective Framework for Strict Zero-Shot Hierarchical Classification (Bhambhoria et al., ACL 2023)
- PDF:
- https://preview.aclanthology.org/naacl24-info/2023.acl-short.152.pdf