Yilong He
2022
A Simple Hash-Based Early Exiting Approach For Language Understanding and Generation
Tianxiang Sun | Xiangyang Liu | Wei Zhu | Zhichao Geng | Lingling Wu | Yilong He | Yuan Ni | Guotong Xie | Xuanjing Huang | Xipeng Qiu
Findings of the Association for Computational Linguistics: ACL 2022
Early exiting allows instances to exit at different layers according to an estimate of their difficulty. Previous works usually adopt heuristic metrics such as the entropy of internal outputs to measure instance difficulty, which suffers from poor generalization and requires threshold tuning. In contrast, learning to exit, i.e., learning to predict instance difficulty, is a more appealing approach. Though some effort has been devoted to employing such “learn-to-exit” modules, it is still unknown whether and how well instance difficulty can be learned. As a response, we first conduct experiments on the learnability of instance difficulty, which demonstrate that modern neural models perform poorly at predicting instance difficulty. Based on this observation, we propose a simple yet effective Hash-based Early Exiting approach (HashEE) that replaces the learn-to-exit modules with hash functions to assign each token to a fixed exiting layer. Unlike previous methods, HashEE requires neither internal classifiers nor extra parameters, and is therefore more efficient. HashEE can be used in various tasks (including language understanding and generation) and model architectures such as seq2seq models. Experimental results on classification, regression, and generation tasks demonstrate that HashEE achieves higher performance with fewer FLOPs and less inference time than previous state-of-the-art early exiting methods.
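As a minimal sketch of the core idea, the snippet below hashes each token id to a fixed exiting layer. The function name `exit_layer`, the multiplicative hash, and the 12-layer setting are illustrative assumptions; the paper's actual hash functions (e.g., frequency- or clustering-based bucketing) may differ.

```python
NUM_LAYERS = 12  # assumed encoder depth, e.g., a BERT-base-sized backbone

def exit_layer(token_id: int, num_layers: int = NUM_LAYERS) -> int:
    """Map a token id to a fixed exiting layer with a simple hash.

    Illustrative only: HashEE's actual token-to-layer assignment
    may use different hash functions or bucketing schemes.
    """
    # A Knuth-style multiplicative hash spreads ids across layers 1..num_layers.
    return (token_id * 2654435761) % num_layers + 1

# Example: each token in a sequence exits at its own fixed layer,
# so no internal classifier or difficulty predictor runs at inference.
token_ids = [101, 7592, 2088, 102]  # hypothetical tokenizer ids
print([exit_layer(t) for t in token_ids])
```

Because the assignment is a deterministic function of the token, no confidence threshold or learned exit predictor is consulted at inference time, in line with the abstract's claim of no internal classifiers or extra parameters.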
2021
paht_nlp @ MEDIQA 2021: Multi-grained Query Focused Multi-Answer Summarization
Wei Zhu | Yilong He | Ling Chai | Yunxiao Fan | Yuan Ni | Guotong Xie | Xiaoling Wang
Proceedings of the 20th Workshop on Biomedical Language Processing
In this article, we describe our systems for the MEDIQA 2021 Shared Tasks. First, we describe our method for the second task, Multi-Answer Summarization (MAS). For extractive summarization, two series of methods are applied. The first follows (CITATION): a RoBERTa model is first applied to give a local ranking of the candidate sentences, and then a Markov Chain model is applied to evaluate the sentences globally. The second method applies cross-sentence contextualization to improve the local ranking and discards the global ranking step. Our methods achieve first place in the MAS task. For the question summarization (QS) and radiology report summarization (RRS) tasks, we explore how end-to-end pre-trained seq2seq models perform. A series of tricks for improving fine-tuning performance is validated.
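The first extractive pipeline (local RoBERTa scores followed by global Markov Chain evaluation) can be sketched as a LexRank-style power iteration over a sentence-similarity matrix. The transition construction, damping factor, and the name `markov_chain_rank` below are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def markov_chain_rank(sim: np.ndarray, damping: float = 0.85,
                      iters: int = 50) -> np.ndarray:
    """Globally score sentences via the stationary distribution of a
    Markov chain over pairwise sentence similarities (LexRank-style).

    Illustrative sketch; the shared-task system's exact transition
    matrix and its combination with RoBERTa scores may differ.
    """
    n = sim.shape[0]
    trans = sim / sim.sum(axis=1, keepdims=True)  # row-stochastic transitions
    scores = np.full(n, 1.0 / n)                  # uniform initial distribution
    for _ in range(iters):
        # Damped power iteration, as in PageRank/LexRank.
        scores = (1.0 - damping) / n + damping * (trans.T @ scores)
    return scores

# Hypothetical similarity matrix for three candidate sentences; in the
# described system, local RoBERTa rankings would precede this step.
sim = np.array([[1.0, 0.3, 0.1],
                [0.3, 1.0, 0.5],
                [0.1, 0.5, 1.0]])
print(markov_chain_rank(sim))
```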
Co-authors
- Wei Zhu 2
- Yuan Ni 2
- Guotong Xie 2
- Tianxiang Sun 1
- Xiangyang Liu 1