Baitao Gong
2022
BMCook: A Task-agnostic Compression Toolkit for Big Models
Zhengyan Zhang | Baitao Gong | Yingfa Chen | Xu Han | Guoyang Zeng | Weilin Zhao | Yanxu Chen | Zhiyuan Liu | Maosong Sun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Recently, pre-trained language models (PLMs) have achieved great success on various NLP tasks and have shown a trend of exponential growth in model size. To alleviate the unaffordable computational costs brought by this growth, model compression has been widely explored. Existing efforts have achieved promising results in compressing medium-sized models for specific tasks, while task-agnostic compression for big models with billions of parameters is rarely studied. Task-agnostic compression can provide an efficient and versatile big model for both prompting and delta tuning, leading to a more general impact than task-specific compression. Hence, we introduce BMCook, a task-agnostic compression toolkit for big models. In BMCook, we implement four representative compression methods: quantization, pruning, distillation, and MoEfication. Developers can easily combine these methods to achieve better efficiency. To evaluate BMCook, we apply it to compress T5-3B (a PLM with 3 billion parameters). We achieve a nearly 12x efficiency improvement while maintaining over 97% of the original T5-3B performance on three typical NLP benchmarks. Moreover, the final compressed model also significantly outperforms T5-base (a PLM with 220 million parameters), which has a similar computational cost. BMCook is publicly available at https://github.com/OpenBMB/BMCook.
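As an illustration of what "combining compression methods" means in practice, the following is a minimal PyTorch sketch that stacks magnitude pruning with simulated int8 quantization on a single linear layer. The helper names (`magnitude_prune`, `fake_quantize_int8`) are hypothetical and do not reflect BMCook's actual interface; they only convey the idea of chaining compression steps that BMCook implements at the scale of full PLMs.

```python
# Hypothetical sketch: stack two compression steps (magnitude pruning,
# then simulated int8 quantization) on one linear layer.
# NOT BMCook's actual API; for illustration only.
import torch
import torch.nn as nn


def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude entries until `sparsity` fraction is zero."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)


def fake_quantize_int8(weight: torch.Tensor) -> torch.Tensor:
    """Simulate symmetric int8 quantization (quantize, then dequantize)."""
    scale = weight.abs().max() / 127.0
    return torch.round(weight / scale).clamp(-127, 127) * scale


layer = nn.Linear(1024, 1024)
with torch.no_grad():
    pruned = magnitude_prune(layer.weight, sparsity=0.5)
    layer.weight.copy_(fake_quantize_int8(pruned))
```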