Hanqi Li


2024

Sparsity-Accelerated Training for Large Language Models
Da Ma | Lu Chen | Pengyu Wang | Hongshen Xu | Hanqi Li | Liangtai Sun | Su Zhu | Shuai Fan | Kai Yu
Findings of the Association for Computational Linguistics: ACL 2024

Large language models (LLMs) have demonstrated proficiency across various natural language processing (NLP) tasks but often require additional training, such as continual pre-training and supervised fine-tuning. However, the costs associated with this additional training, driven primarily by their large parameter counts, remain high. This paper proposes leveraging sparsity in pre-trained LLMs to expedite this training process. By observing sparsity in activated neurons during forward iterations, we identify the potential for computational speed-ups by excluding inactive neurons. We address associated challenges by extending existing neuron importance evaluation metrics and introducing a ladder omission rate scheduler. Our experiments on Llama-2 demonstrate that Sparsity-Accelerated Training (SAT) achieves comparable or superior performance to standard training while significantly accelerating the process. Specifically, SAT achieves a 45% throughput improvement in continual pre-training and saves 38% training time in supervised fine-tuning. It offers a simple, hardware-agnostic, and easily deployable framework for additional LLM training.
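The core idea in the abstract above, skipping inactive neurons during the forward pass and gradually lowering the omission rate, can be illustrated with a toy sketch. This is not the paper's implementation: the magnitude-based importance metric, the single-layer MLP, and the `ladder_omission_rate` staircase schedule are all simplifying assumptions chosen for illustration.

```python
import numpy as np

def ladder_omission_rate(step, total_steps, rates=(0.6, 0.4, 0.2, 0.0)):
    """Toy ladder scheduler: the omission rate steps down in stages
    as training progresses (an assumed schedule, not the paper's)."""
    stage = min(step * len(rates) // total_steps, len(rates) - 1)
    return rates[stage]

def sparse_mlp_forward(x, W1, W2, omission_rate=0.5):
    """Skip the least-important hidden neurons of a one-layer ReLU MLP.

    Importance is approximated here by the magnitude of each hidden
    activation; the `omission_rate` fraction of neurons with the
    smallest importance is excluded from the output projection.
    """
    h = np.maximum(x @ W1, 0.0)                    # hidden activations
    importance = np.abs(h)                         # toy importance metric
    k = int(h.shape[-1] * (1.0 - omission_rate))   # neurons to keep
    keep = np.argsort(importance)[-k:]             # most important indices
    # Only the retained neurons contribute to the output matmul,
    # which is where the computational saving would come from.
    return h[keep] @ W2[keep]
```

With `omission_rate=0.0` the sketch reduces to the dense forward pass; raising the rate trades a small output perturbation for fewer multiply-accumulates in the second projection.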

Multi-Granularity Fusion Text Semantic Matching Based on WoBERT
Hongchun Yu | Wei Pan | Xing Fan | Hanqi Li
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Text semantic matching is crucial in natural language processing, with applications in information retrieval, question answering, and recommendation systems. Traditional text-matching methods struggle with semantic nuances in short text. Recent advancements in multi-granularity representation learning have led to increased interest in improving text semantic matching models. We propose a novel multi-granularity fusion model that harnesses WoBERT, a pre-trained language model, to enhance the accuracy of text semantic information capture. Initially, we process text using WoBERT to acquire semantic representations, effectively capturing individual text semantic nuances. Next, we employ a soft attention alignment mechanism, enabling multi-granularity fusions among characters, words, and sentences, thus further improving matching performance. Our approach was evaluated through experiments on common Chinese short text matching datasets, BQ and LCQMC. Results reveal a significant improvement in performance compared to traditional methods, particularly in terms of accuracy.
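The soft attention alignment mentioned in the abstract above can be sketched as a simple cross-attention fusion between two granularities. This is an illustrative sketch, not the paper's exact model: the dot-product scoring and concatenation-based fusion of character-level and word-level vectors are assumptions for demonstration.

```python
import numpy as np

def soft_align_fuse(chars, words):
    """Soft attention alignment between character-level and word-level
    representations (illustrative, not the paper's architecture).

    Each character vector attends over all word vectors; the attended
    word context is concatenated onto the character representation to
    form a fused multi-granularity feature.
    """
    scores = chars @ words.T                              # (n_char, n_word)
    # Row-wise softmax turns similarities into alignment weights.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    aligned = weights @ words                             # word context per char
    return np.concatenate([chars, aligned], axis=1)       # (n_char, 2 * dim)
```

The same pattern applied in the other direction (words attending over characters) and at the sentence level would yield the multi-granularity fusion the abstract describes.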

2021

Decoupled Dialogue Modeling and Semantic Parsing for Multi-Turn Text-to-SQL
Zhi Chen | Lu Chen | Hanqi Li | Ruisheng Cao | Da Ma | Mengyue Wu | Kai Yu
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021