Zi Yang
2020
Pruning Redundant Mappings in Transformer Models via Spectral-Normalized Identity Prior
Zi Lin | Jeremiah Liu | Zi Yang | Nan Hua | Dan Roth
Findings of the Association for Computational Linguistics: EMNLP 2020
Traditional (unstructured) pruning methods for a Transformer model focus on regularizing the individual weights by penalizing them toward zero. In this work, we explore spectral-normalized identity priors (SNIP), a structured pruning approach that penalizes an entire residual module in a Transformer model toward an identity mapping. Our method identifies and discards unimportant non-linear mappings in the residual connections by applying a thresholding operator to the function norm, and is applicable to any structured module, including a single attention head, an entire attention block, or a feed-forward subnetwork. Furthermore, we introduce spectral normalization to stabilize the distribution of the post-activation values of the Transformer layers, further improving the pruning effectiveness of the proposed methodology. We conduct experiments with BERT on 5 GLUE benchmark tasks to demonstrate that SNIP achieves effective pruning results while maintaining comparable performance. Specifically, we improve the performance over the state of the art by 0.5 to 1.0% on average at a 50% compression ratio.
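The pruning rule described in the abstract is concrete enough to sketch. Below is a minimal PyTorch illustration, assuming a residual wrapper y = x + f(x) whose branch f is spectrally normalized and dropped to an identity map when its estimated function norm falls below a threshold; the names `PrunableResidual` and `maybe_prune` and the batch-level norm estimate are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

def add_spectral_norm(module: nn.Module) -> nn.Module:
    """Apply spectral normalization to every Linear layer, bounding each
    layer's Lipschitz constant and stabilizing post-activation values."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, nn.utils.spectral_norm(child))
        else:
            add_spectral_norm(child)
    return module

class PrunableResidual(nn.Module):
    """Residual wrapper y = x + f(x) whose non-linear branch f (a single
    attention head, an entire attention block, or a feed-forward subnetwork)
    can be discarded, reducing the module to an identity mapping."""

    def __init__(self, branch: nn.Module):
        super().__init__()
        self.branch = add_spectral_norm(branch)
        self.pruned = False

    @torch.no_grad()
    def maybe_prune(self, x: torch.Tensor, threshold: float) -> None:
        # Estimate the function norm ||f|| on a calibration batch; a small
        # norm means the block is already close to an identity mapping.
        self.pruned = self.branch(x).norm(dim=-1).mean().item() < threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x if self.pruned else x + self.branch(x)
```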
2017
Tackling Biomedical Text Summarization: OAQA at BioASQ 5B
Khyathi Chandu | Aakanksha Naik | Aditya Chandrasekar | Zi Yang | Niloy Gupta | Eric Nyberg
BioNLP 2017
In this paper, we describe our participation in phase B of task 5b of the fifth edition of the annual BioASQ challenge, which includes answering factoid, list, yes-no and summary questions from biomedical data. We describe our techniques with an emphasis on ideal answer generation, where the goal is to produce a relevant, precise, non-redundant, query-oriented summary from multiple relevant documents. We make use of extractive summarization techniques to address this task and experiment with different biomedical ontologies and various algorithms including agglomerative clustering, Maximum Marginal Relevance (MMR) and sentence compression. We propose a novel word-embedding-based tf-idf similarity metric and a soft positional constraint, which improve our system's performance. We evaluate our techniques on test batch 4 from the fourth edition of the challenge. Our best system achieves a ROUGE-2 score of 0.6534 and a ROUGE-SU4 score of 0.6536.
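As a rough sketch of two components the abstract highlights, the snippet below implements MMR selection over sentences scored with a tf-idf-weighted average of word embeddings; the weighting scheme, the trade-off parameter `lam`, and the summary length `k` are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def embedding_tfidf_similarity(vecs_a, tfidf_a, vecs_b, tfidf_b):
    """Cosine similarity between sentences represented as tf-idf-weighted
    averages of their word embeddings."""
    v_a = np.average(vecs_a, axis=0, weights=tfidf_a)
    v_b = np.average(vecs_b, axis=0, weights=tfidf_b)
    return float(v_a @ v_b / (np.linalg.norm(v_a) * np.linalg.norm(v_b) + 1e-9))

def mmr_select(sentences, query_sim, pairwise_sim, k=5, lam=0.7):
    """Maximum Marginal Relevance: greedily pick sentences that are relevant
    to the query yet non-redundant with those already selected."""
    selected, remaining = [], list(range(len(sentences)))
    while remaining and len(selected) < k:
        def score(i):
            redundancy = max((pairwise_sim[i][j] for j in selected), default=0.0)
            return lam * query_sim[i] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return [sentences[i] for i in selected]
```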
Structural Embedding of Syntactic Trees for Machine Comprehension
Rui Liu | Junjie Hu | Wei Wei | Zi Yang | Eric Nyberg
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Deep neural networks for machine comprehension typically utilize only word or character embeddings, without explicitly taking advantage of structured linguistic information such as constituency trees and dependency trees. In this paper, we propose structural embedding of syntactic trees (SEST), an algorithmic framework that encodes structured syntactic information into vector representations to boost the performance of machine comprehension algorithms. We evaluate our approach using a state-of-the-art neural attention model on the SQuAD dataset. Experimental results demonstrate that our model can accurately identify the syntactic boundaries of sentences and extract answers that are more syntactically coherent than those of the baseline methods.
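A minimal sketch of how a structural embedding could be attached to word embeddings, assuming each token carries the sequence of constituent labels on its root-to-leaf path and that path labels are sum-pooled; the module name and pooling choice are assumptions, since the paper's encoder may treat paths differently.

```python
import torch
import torch.nn as nn

class StructuralEmbedding(nn.Module):
    """Augment word embeddings with an embedding of each token's syntactic
    context: the constituent labels (e.g. S -> NP -> NN) on the path from
    the tree root to the token's leaf, sum-pooled into a single vector."""

    def __init__(self, vocab_size, num_tags, word_dim=100, tag_dim=25):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.tag_emb = nn.Embedding(num_tags, tag_dim, padding_idx=0)

    def forward(self, word_ids, tag_path_ids):
        # word_ids:     (batch, seq_len)
        # tag_path_ids: (batch, seq_len, max_path_len), zero-padded
        words = self.word_emb(word_ids)                # (B, T, word_dim)
        paths = self.tag_emb(tag_path_ids).sum(dim=2)  # (B, T, tag_dim)
        return torch.cat([words, paths], dim=-1)       # (B, T, word_dim + tag_dim)
```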
2016
Learning to Answer Biomedical Questions: OAQA at BioASQ 4B
Zi Yang | Yue Zhou | Eric Nyberg
Proceedings of the Fourth BioASQ workshop
Co-authors
- Eric Nyberg 3
- Zi Lin 1
- Jeremiah Liu 1
- Nan Hua 1
- Dan Roth 1