Guangyu Xie


2025

Comprehensive and Efficient Distillation for Lightweight Sentiment Analysis Models
Guangyu Xie | Yice Zhang | Jianzhu Bao | Qianlong Wang | Yang Sun | Bingbing Wang | Ruifeng Xu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Recent efforts leverage knowledge distillation techniques to develop lightweight and practical sentiment analysis models. These methods are grounded in human-written instructions and large-scale user texts. Despite the promising results, two key challenges remain: (1) manually written instructions are limited in diversity and quantity, making them insufficient to ensure comprehensive coverage of the distilled knowledge; (2) large-scale user texts incur high computational costs, hindering the practicality of these methods. To this end, we introduce CompEffDist, a comprehensive and efficient distillation framework for sentiment analysis. Our framework consists of two key modules: attribute-based automatic instruction construction and difficulty-based data filtering, which respectively address the aforementioned challenges. Applying our method across multiple model series (Llama-3, Qwen-3, and Gemma-3), we enable 3B student models to match the performance of 20x larger teacher models on most tasks. In addition, our approach greatly outperforms baseline methods in data efficiency, attaining the same performance level with only 10% of the data. All code is available at https://github.com/HITSZ-HLT/COMPEFFDIST.
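
To make the difficulty-based data filtering idea concrete, here is a minimal Python sketch. It is not the paper's code: the abstract does not specify the difficulty measure, so the scoring function below is an assumption (a stand-in for something like the student model's loss on the teacher's response), and the Example class and toy_difficulty helper are hypothetical names introduced only for illustration.

```python
# Sketch of difficulty-based data filtering for distillation (assumed design,
# not the authors' implementation): score each example by a difficulty
# function and keep only the hardest fraction, reducing the amount of
# user text the student must be trained on.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    instruction: str
    teacher_response: str


def filter_by_difficulty(
    examples: List[Example],
    difficulty_fn: Callable[[Example], float],
    keep_ratio: float = 0.1,
) -> List[Example]:
    """Keep the hardest `keep_ratio` fraction of examples."""
    scored = sorted(examples, key=difficulty_fn, reverse=True)
    n_keep = max(1, int(len(scored) * keep_ratio))
    return scored[:n_keep]


def toy_difficulty(ex: Example) -> float:
    # Hypothetical proxy: treat longer teacher responses as harder.
    # In practice this would run the student model and return, e.g.,
    # its average per-token loss on the teacher's output.
    return float(len(ex.teacher_response.split()))


if __name__ == "__main__":
    pool = [
        Example("Classify the sentiment: 'great phone'", "positive"),
        Example(
            "Explain the mixed sentiment in this review.",
            "The reviewer praises the battery but criticizes the screen, "
            "so the overall sentiment is mixed.",
        ),
    ]
    kept = filter_by_difficulty(pool, toy_difficulty, keep_ratio=0.5)
    print(f"kept {len(kept)} of {len(pool)} examples")
```

The keep_ratio=0.1 default mirrors the abstract's observation that the same performance level is attainable with only 10% of the data.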

Targeted Distillation for Sentiment Analysis
Yice Zhang | Guangyu Xie | Jingjie Lin | Jianzhu Bao | Qianlong Wang | Xi Zeng | Ruifeng Xu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

This paper explores targeted distillation methods for sentiment analysis, aiming to build compact and practical models that preserve strong and generalizable sentiment analysis capabilities. To this end, we conceptually decouple the distillation target into knowledge and alignment and accordingly propose a two-stage distillation framework. Moreover, we introduce SentiBench, a comprehensive and systematic sentiment analysis benchmark that covers a diverse set of tasks across 12 datasets. We evaluate a wide range of models on this benchmark. Experimental results show that our approach substantially enhances the performance of compact models across diverse sentiment analysis tasks, and the resulting models generalize well to unseen tasks, remaining highly competitive with existing small-scale models.
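
The following Python sketch illustrates the two-stage structure the abstract describes, with knowledge distilled before alignment. The contents of each stage are assumptions (the abstract only names the decoupling, not the training recipe), and the Student class with its train_step method is a hypothetical stand-in for a compact model, not an API from the paper.

```python
# Sketch of a two-stage distillation pipeline (assumed design): stage 1
# instills sentiment knowledge, stage 2 aligns the student to task formats.

from typing import Iterable, List, Tuple


class Student:
    """Stand-in for a compact model; records one entry per training step."""

    def __init__(self) -> None:
        self.log: List[str] = []

    def train_step(self, text: str, stage: str) -> None:
        # A real implementation would compute a language-modeling loss
        # on `text` and update the model's weights.
        self.log.append(f"[{stage}] {text[:40]}")


def distill(
    student: Student,
    knowledge_corpus: Iterable[str],
    alignment_pairs: Iterable[Tuple[str, str]],
) -> Student:
    # Stage 1 (assumed): train on teacher-written explanations of
    # sentiment phenomena, before any task-specific formatting.
    for text in knowledge_corpus:
        student.train_step(text, stage="knowledge")
    # Stage 2 (assumed): align the student to task formats using
    # instruction-response pairs distilled from the teacher.
    for instruction, response in alignment_pairs:
        student.train_step(f"{instruction}\n{response}", stage="alignment")
    return student


if __name__ == "__main__":
    s = distill(
        Student(),
        knowledge_corpus=["Sarcasm often inverts the surface polarity of a text."],
        alignment_pairs=[("Label the sentiment: 'meh battery life'", "negative")],
    )
    print(len(s.log), "training steps")
```

Separating the stages this way reflects the paper's conceptual decoupling: what the model knows about sentiment is built up first, and how it is expected to respond to task instructions is fitted afterward.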