Nan Hua
2020
Pruning Redundant Mappings in Transformer Models via Spectral-Normalized Identity Prior
Zi Lin | Jeremiah Liu | Zi Yang | Nan Hua | Dan Roth
Findings of the Association for Computational Linguistics: EMNLP 2020
Traditional (unstructured) pruning methods for a Transformer model focus on regularizing the individual weights by penalizing them toward zero. In this work, we explore spectral-normalized identity priors (SNIP), a structured pruning approach that penalizes an entire residual module in a Transformer model toward an identity mapping. Our method identifies and discards unimportant non-linear mappings in the residual connections by applying a thresholding operator on the function norm, and is applicable to any structured module, including a single attention head, an entire attention block, or a feed-forward subnetwork. Furthermore, we introduce spectral normalization to stabilize the distribution of the post-activation values of the Transformer layers, further improving the pruning effectiveness of the proposed methodology. We conduct experiments with BERT on 5 GLUE benchmark tasks to demonstrate that SNIP achieves effective pruning results while maintaining comparable performance. Specifically, we improve the performance over the state-of-the-art by 0.5 to 1.0% on average at a 50% compression ratio.
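To make the core idea concrete, below is a minimal sketch, not the paper's implementation: a residual block whose linear map is spectrally normalized and whose non-linear branch is replaced by an identity mapping when an empirical estimate of its function norm falls below a threshold. The class and function names, the ReLU mapping, and the threshold value are illustrative assumptions.

```python
import numpy as np

def spectral_normalize(weight):
    """Scale a weight matrix by its largest singular value (spectral norm)."""
    sigma = np.linalg.norm(weight, ord=2)  # largest singular value
    return weight / max(sigma, 1e-12)

class ResidualBlock:
    """Residual module y = x + f(x) with a spectrally normalized linear map."""

    def __init__(self, dim, threshold=0.1, seed=None):
        rng = np.random.default_rng(seed)
        self.weight = spectral_normalize(rng.standard_normal((dim, dim)) / np.sqrt(dim))
        self.threshold = threshold
        self.pruned = False  # once True, the block acts as a pure identity mapping

    def f(self, x):
        # Stand-in non-linear mapping (e.g., an attention head or feed-forward branch).
        return np.maximum(x @ self.weight, 0.0)

    def maybe_prune(self, x_batch):
        """Prune the residual branch if its empirical function norm is small."""
        # Estimate of ||f(x)|| relative to ||x|| over the batch.
        ratio = np.linalg.norm(self.f(x_batch)) / (np.linalg.norm(x_batch) + 1e-12)
        self.pruned = ratio < self.threshold
        return self.pruned

    def forward(self, x):
        return x if self.pruned else x + self.f(x)

# Usage: decide per block whether its residual branch can be dropped.
block = ResidualBlock(dim=16, threshold=0.1, seed=0)
x = np.random.default_rng(1).standard_normal((8, 16))
print("pruned:", block.maybe_prune(x), "output shape:", block.forward(x).shape)
```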
2018
Universal Sentence Encoder for English
Daniel Cer | Yinfei Yang | Sheng-yi Kong | Nan Hua | Nicole Limtiaco | Rhomni St. John | Noah Constant | Mario Guajardo-Cespedes | Steve Yuan | Chris Tar | Brian Strope | Ray Kurzweil
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
We present easy-to-use TensorFlow Hub sentence embedding models with good task transfer performance. Model variants allow for trade-offs between accuracy and compute resources. We report the relationship between model complexity, resources, and transfer performance. Comparisons are made with baselines that use no transfer learning and with baselines that incorporate word-level transfer. Transfer learning using sentence-level embeddings is shown to outperform models without transfer learning and often those that use only word-level transfer. We show good transfer task performance with minimal training data and obtain encouraging results on word embedding association tests (WEAT) of model bias.
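For readers who want to try the released models, here is a minimal usage sketch assuming the tensorflow and tensorflow_hub packages are installed; the exact module handle, version number, and embedding dimension are assumptions and should be checked on tfhub.dev.

```python
import tensorflow_hub as hub

# Load a Universal Sentence Encoder variant from TensorFlow Hub
# (handle and version assumed here for illustration).
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = [
    "The quick brown fox jumps over the lazy dog.",
    "Sentence-level embeddings support transfer learning with little task data.",
]

# The loaded model is callable on a batch of strings and returns
# one fixed-length embedding vector per sentence.
embeddings = embed(sentences)
print(embeddings.shape)  # e.g., (2, 512) for this variant
```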