Exploring Hybrid Sampling Inference for Aspect-based Sentiment Analysis

Xiaoyi Bao, Minjie Qiang, Jinghang Gu, Zhongqing Wang, Chu-Ren Huang


Abstract
As the training of large language models (LLMs) incurs high computational costs, much recent work focuses on inference. These methods can generally be summarised as re-sampling the target multiple times and voting over the outputs. Despite bringing significant performance improvements, this is a costly approach that requires multiple sampling passes of a preset size. In this paper, we propose a simple yet efficient inference strategy named __Hybrid Sampling__ that combines multiple and single sampling to greatly reduce the cost of multiple sampling without sacrificing performance. __Hybrid Sampling__ dynamically selects the essential part of the generated sequence for multiple sampling and processes the rest with single sampling, achieving a performance-cost balance. Extensive experiments on several benchmarks underscore the robustness and effectiveness of the proposed Hybrid Sampling; more importantly, it is much faster.
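The abstract's idea of mixing multiple sampling (sample several times and vote) with single sampling (one greedy pass) can be sketched as follows. This is a minimal toy illustration, not the paper's actual method: the per-step distributions, the entropy threshold `tau`, and the helper names (`hybrid_decode`, `entropy`) are all assumptions introduced here to show how an "essential" (uncertain) step might trigger voting while confident steps are decoded once.

```python
import math
import random

def entropy(probs):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def hybrid_decode(step_probs, tau=1.0, n_samples=5, seed=0):
    """Toy hybrid decoding over precomputed per-step token distributions.

    For each step: if the distribution is uncertain (entropy > tau),
    treat the step as "essential" and take a majority vote over
    n_samples draws; otherwise take the single greedy (argmax) token.
    """
    rng = random.Random(seed)
    tokens_out = []
    for probs in step_probs:
        if entropy(probs) > tau:
            # Essential step: multiple sampling + vote.
            draws = [rng.choices(range(len(probs)), weights=probs)[0]
                     for _ in range(n_samples)]
            tokens_out.append(max(set(draws), key=draws.count))
        else:
            # Confident step: single greedy pick, no extra cost.
            tokens_out.append(max(range(len(probs)), key=probs.__getitem__))
    return tokens_out

# A confident step (peaked) followed by an uncertain one (flat-ish).
steps = [[0.9, 0.05, 0.05], [0.4, 0.35, 0.25]]
result = hybrid_decode(steps)
```

In a real LLM decoder the distributions would come from the model's logits at each step, and only the flagged steps would pay the multiple-sampling cost, which is the source of the speedup the paper claims.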
Anthology ID:
2025.findings-naacl.236
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4199–4210
URL:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.236/
Cite (ACL):
Xiaoyi Bao, Minjie Qiang, Jinghang Gu, Zhongqing Wang, and Chu-Ren Huang. 2025. Exploring Hybrid Sampling Inference for Aspect-based Sentiment Analysis. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 4199–4210, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Exploring Hybrid Sampling Inference for Aspect-based Sentiment Analysis (Bao et al., Findings 2025)
PDF:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.236.pdf