Haichao Zhu
2022
Distilled Dual-Encoder Model for Vision-Language Understanding
Zekun Wang | Wenhui Wang | Haichao Zhu | Ming Liu | Bing Qin | Furu Wei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
On vision-language understanding (VLU) tasks, fusion-encoder vision-language models achieve superior results but sacrifice efficiency because they encode images and text simultaneously. In contrast, dual-encoder models, which encode images and text separately, are more efficient but fall short on VLU tasks due to the lack of deep cross-modal interactions. To get the best of both worlds, we propose DiDE, a framework that distills the knowledge of a fusion-encoder teacher model into a dual-encoder student model. Since cross-modal interaction is the key to the teacher's superior performance but is absent in the student, we encourage the student not only to mimic the teacher's predictions but also to compute cross-modal attention distributions and align them with the teacher's. Experimental results demonstrate that DiDE is competitive with the fusion-encoder teacher in performance (only a 1% drop) while enjoying 4 times faster inference. Further analyses reveal that the proposed cross-modal attention distillation is crucial to the success of our framework.
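The abstract describes a two-part distillation objective: the student mimics the teacher's predictions and aligns its cross-modal attention distributions with the teacher's. The PyTorch sketch below illustrates one plausible form of such a loss; it is an assumption based on the abstract, not the paper's implementation, and all names (student_logits, teacher_attn, temperature, alpha) are illustrative.

```python
# Minimal sketch of a DiDE-style distillation objective (assumed, not the authors' code).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_attn, teacher_attn,
                      temperature=1.0, alpha=1.0):
    """Combine prediction distillation with cross-modal attention distillation.

    student_attn / teacher_attn: [batch, heads, query_len, key_len] attention
    distributions over the other modality's tokens (already softmax-normalized).
    """
    # 1) Soft-label distillation: the student mimics the teacher's output distribution.
    pred_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # 2) Cross-modal attention distillation: align the student's computed
    #    cross-modal attention with the teacher's fused attention.
    attn_loss = F.kl_div(
        torch.log(student_attn.clamp_min(1e-9)),
        teacher_attn,
        reduction="batchmean",
    )

    return pred_loss + alpha * attn_loss
```

Since the dual-encoder student has no fusion layers, its cross-modal attention would be computed from the separately encoded image and text representations before being aligned with the teacher's.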
2021
Less Is More: Domain Adaptation with Lottery Ticket for Reading Comprehension
Haichao Zhu | Zekun Wang | Heng Zhang | Ming Liu | Sendong Zhao | Bing Qin
Findings of the Association for Computational Linguistics: EMNLP 2021
In this paper, we propose a simple few-shot domain adaptation paradigm for reading comprehension. We first identify the lottery subnetwork structure within the Transformer-based source-domain model via gradual magnitude pruning. Then, we fine-tune only the lottery subnetwork, a small fraction of the whole parameters, on the annotated target-domain data for adaptation. To obtain more adaptable subnetworks, we introduce self-attention attribution to weigh parameters, rather than simply pruning those with the smallest magnitudes, which can be seen as softly combining structured pruning and unstructured magnitude pruning. Experimental results show that our method outperforms full-model fine-tuning adaptation on four out of five domains when only a small amount of annotated data is available for adaptation. Moreover, introducing self-attention attribution preserves more parameters for important attention heads in the lottery subnetwork and improves target-domain performance. Our further analyses reveal that, beyond using fewer parameters, the choice of subnetwork is critical to the effectiveness of the approach.
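The core mechanics here are (1) selecting a subnetwork by weight magnitude and (2) updating only that subnetwork during target-domain fine-tuning. The sketch below shows one simple way to do this with gradient masking in PyTorch; it is an assumed illustration, omits the gradual pruning schedule and the self-attention attribution weighting described in the abstract, and the helper names are hypothetical.

```python
# Sketch of fine-tuning only a magnitude-selected lottery subnetwork (assumed, simplified).
import torch

def build_magnitude_masks(model, keep_ratio=0.1):
    """Keep the top `keep_ratio` fraction of each weight matrix by |magnitude|."""
    masks = {}
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.dim() < 2:          # skip biases / LayerNorm parameters
                continue
            k = max(1, int(keep_ratio * param.numel()))
            # k-th largest magnitude is the (numel - k + 1)-th smallest
            threshold = param.abs().flatten().kthvalue(param.numel() - k + 1).values
            masks[name] = (param.abs() >= threshold).float()
    return masks

def apply_subnetwork_grad_mask(model, masks):
    """Zero out gradients outside the subnetwork so that only the
    lottery-ticket parameters are updated during target-domain fine-tuning."""
    for name, param in model.named_parameters():
        if name in masks and param.grad is not None:
            param.grad.mul_(masks[name])
```

In a training loop, apply_subnetwork_grad_mask would be called after loss.backward() and before optimizer.step(); the paper's attribution-weighted selection would replace the pure magnitude criterion in build_magnitude_masks.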
2019
Learning to Ask Unanswerable Questions for Machine Reading Comprehension
Haichao Zhu | Li Dong | Furu Wei | Wenhui Wang | Bing Qin | Ting Liu
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Machine reading comprehension with unanswerable questions is a challenging task. In this work, we propose a data augmentation technique that automatically generates relevant unanswerable questions from an answerable question paired with the paragraph that contains its answer. We introduce a pair-to-sequence model for unanswerable question generation, which effectively captures the interactions between the question and the paragraph. We also present a way to construct training data for our question generation models by leveraging an existing reading comprehension dataset. Experimental results show that the pair-to-sequence model performs consistently better than the sequence-to-sequence baseline. We further use the automatically generated unanswerable questions for data augmentation on the SQuAD 2.0 dataset, yielding a 1.9-point absolute F1 improvement with the BERT-base model and a 1.7-point absolute F1 improvement with the BERT-large model.
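As a rough illustration of the augmentation step, the sketch below folds generated unanswerable questions back into SQuAD-2.0-style training data. The generate_unanswerable callable stands in for the pair-to-sequence model and is a hypothetical placeholder, not the paper's interface.

```python
# Sketch of data augmentation with generated unanswerable questions (assumed, illustrative).
from typing import Callable, Dict, List

def augment_with_unanswerable(
    examples: List[Dict],
    generate_unanswerable: Callable[[str, str], str],
) -> List[Dict]:
    """For each answerable (question, paragraph) pair, add a generated
    unanswerable question marked is_impossible=True with no answers."""
    augmented = list(examples)
    for ex in examples:
        if ex.get("is_impossible"):
            continue  # only answerable pairs are used as generation input
        new_question = generate_unanswerable(ex["question"], ex["context"])
        augmented.append({
            "context": ex["context"],
            "question": new_question,
            "answers": {"text": [], "answer_start": []},
            "is_impossible": True,
        })
    return augmented
```

The augmented set would then be used to fine-tune the reading comprehension model (e.g., BERT) as in standard SQuAD 2.0 training.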
Co-authors
- Bing Qin 3
- Zekun Wang 2
- Ming Liu 2
- Wenhui Wang 2
- Furu Wei 2