Honglei Zhuang


2022

ED2LM: Encoder-Decoder to Language Model for Faster Document Re-ranking Inference
Kai Hui | Honglei Zhuang | Tao Chen | Zhen Qin | Jing Lu | Dara Bahri | Ji Ma | Jai Gupta | Cicero Nogueira dos Santos | Yi Tay | Donald Metzler
Findings of the Association for Computational Linguistics: ACL 2022

State-of-the-art neural models typically encode document-query pairs using cross-attention for re-ranking. To this end, models generally adopt an encoder-only (like BERT) or an encoder-decoder (like T5) paradigm. These paradigms, however, are not without flaws: running the model on all query-document pairs at inference time incurs a significant computational cost. This paper proposes a new training and inference paradigm for re-ranking. We propose to finetune a pretrained encoder-decoder model in the form of document-to-query generation. Subsequently, we show that this encoder-decoder architecture can be decomposed into a decoder-only language model during inference. This results in significant inference-time speedups, since at inference the decoder-only architecture only needs to interpret static, precomputed encoder embeddings. Our experiments show that this new paradigm achieves results comparable to the more expensive cross-attention ranking approaches while being up to 6.8X faster. We believe this work paves the way for more efficient neural rankers that leverage large pretrained models.
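To make the decomposition concrete, below is a minimal sketch (not the authors' released code) of the ED2LM inference pattern using Hugging Face Transformers: the encoder runs once per document offline, and at query time only the decoder runs, scoring the query as a generation target. The checkpoint name and the length normalization of the score are illustrative assumptions.

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Assumption: any T5-style checkpoint; ED2LM itself starts from a model
# finetuned for document-to-query generation.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

def precompute_doc_encoding(document):
    # Offline step: run the expensive encoder once per document and cache it.
    doc = tokenizer(document, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model.encoder(**doc)  # static encoder embeddings

def rerank_score(encoder_outputs, query):
    # Online step: only the decoder runs, scoring log P(query | document).
    query_ids = tokenizer(query, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(encoder_outputs=encoder_outputs, labels=query_ids)
    # out.loss is the mean per-token negative log-likelihood; undo the mean
    # to recover the total query log-likelihood as the relevance score.
    return -out.loss.item() * query_ids.shape[-1]

doc_enc = precompute_doc_encoding("T5 is a pretrained encoder-decoder model.")
print(rerank_score(doc_enc, "what is t5"))

The speedup comes from the offline/online split: precompute_doc_encoding amortizes the expensive encoder pass over all future queries, while rerank_score pays only the decoder cost per query-document pair.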

2017

Identifying Semantically Deviating Outlier Documents
Honglei Zhuang | Chi Wang | Fangbo Tao | Lance Kaplan | Jiawei Han
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

A document outlier is a document whose semantics substantially deviate from the majority of documents in a corpus. Automatically identifying document outliers can be valuable in many applications, such as screening health records for medical mistakes. In this paper, we study the problem of mining semantically deviating document outliers in a given corpus. We develop a generative model that identifies frequent and characteristic semantic regions in the word embedding space to represent the given corpus, together with an outlierness measure that is robust to noisy content within documents. Experiments on two real-world textual data sets show that our method achieves up to a 135% improvement over baselines in terms of recall at the top 1% of the outlier ranking.
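As a rough illustration of the idea (not the paper's generative model), the sketch below approximates the corpus's frequent semantic regions with k-means centroids over word embeddings and scores a document's outlierness by how far its words fall from every region. The trimmed mean is a simple stand-in for the paper's noise-resistant outlierness measure, and the random vectors stand in for trained word embeddings.

import numpy as np
from sklearn.cluster import KMeans

# Assumption: random vectors stand in for trained word embeddings.
rng = np.random.default_rng(0)
vocab = {w: rng.standard_normal(50) for w in
         ["health", "record", "patient", "doctor", "rocket", "orbit"]}

def fit_regions(corpus_words, k=2):
    # Stand-in for the paper's generative model: k-means centroids as the
    # corpus's frequent, characteristic semantic regions.
    X = np.stack([vocab[w] for w in corpus_words])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

def outlierness(doc_words, regions, trim=0.2):
    # Distance from each word to its nearest semantic region.
    d = regions.transform(np.stack([vocab[w] for w in doc_words])).min(axis=1)
    # Robustness to noisy content: drop the most deviating words, then average.
    d.sort()
    keep = max(1, int(len(d) * (1 - trim)))
    return d[:keep].mean()

regions = fit_regions(["health", "record", "patient", "doctor"])
print(outlierness(["doctor", "patient"], regions))  # low: matches the corpus
print(outlierness(["rocket", "orbit"], regions))    # high: semantic outlier

Trimming the largest per-word distances keeps an in-corpus document with a few noisy words from being flagged, while a genuinely deviating document still scores high because most of its words are far from every region.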