Yao Chen
2021
Compressing Large-Scale Transformer-Based Models: A Case Study on BERT
Prakhar Ganesh | Yao Chen | Xin Lou | Mohammad Ali Khan | Yin Yang | Hassan Sajjad | Preslav Nakov | Deming Chen | Marianne Winslett
Transactions of the Association for Computational Linguistics, Volume 9
Pre-trained Transformer-based models have achieved state-of-the-art performance for various Natural Language Processing (NLP) tasks. However, these models often have billions of parameters, and thus are too resource-hungry and computation-intensive to suit low-capability devices or applications with strict latency requirements. One potential remedy for this is model compression, which has attracted considerable research attention. Here, we summarize the research in compressing Transformers, focusing on the especially popular BERT model. In particular, we survey the state of the art in compression for BERT, we clarify the current best practices for compressing large-scale Transformer models, and we provide insights into the workings of various methods. Our categorization and analysis also shed light on promising future research directions for achieving lightweight, accurate, and generic NLP models.
2020
TAG : Type Auxiliary Guiding for Code Comment Generation
Ruichu Cai | Zhihao Liang | Boyan Xu | Zijian Li | Yuexing Hao | Yao Chen
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Existing leading code comment generation approaches with the structure-to-sequence framework ignore the type information of the interpretation of the code, e.g., operator, string, etc. However, introducing the type information into the existing framework is non-trivial due to the hierarchical dependence among the type information. In order to address the issues above, we propose a Type Auxiliary Guiding encoder-decoder framework for the code comment generation task, which considers the source code as an N-ary tree with type information associated with each node. Specifically, our framework features a Type-associated Encoder and a Type-restricted Decoder, which enable adaptive summarization of the source code. We further propose a hierarchical reinforcement learning method to resolve the training difficulties of our proposed framework. Extensive evaluations demonstrate the state-of-the-art performance of our framework on both the auto-evaluated metrics and case studies.