Yu Lin
2022
Improving Contextual Representation with Gloss Regularized Pre-training
Yu Lin | Zhecheng An | Peihao Wu | Zejun Ma
Findings of the Association for Computational Linguistics: NAACL 2022
Though achieving impressive results on many NLP tasks, BERT-like masked language models (MLM) face a discrepancy between pre-training and inference. In light of this gap, we investigate the contextual representations used in pre-training and inference from the perspective of word probability distributions. We find that BERT risks neglecting contextual word similarity during pre-training. To address this issue, we propose an auxiliary gloss regularizer module for BERT pre-training (GR-BERT) that enhances word semantic similarity. By predicting masked words and aligning contextual embeddings to the corresponding glosses simultaneously, word similarity can be modeled explicitly. We design two architectures for GR-BERT and evaluate the model on downstream tasks. Experimental results show that the gloss regularizer benefits BERT in both word-level and sentence-level semantic representation. GR-BERT achieves a new state of the art on the lexical substitution task and substantially improves BERT sentence representations on both unsupervised and supervised STS tasks.
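The abstract describes an auxiliary objective that aligns the contextual embedding of each masked word with an embedding of its gloss. The following is a minimal, hypothetical sketch of how such a combined loss could look in PyTorch; it is not the authors' implementation, and the function name, the cosine-based alignment term, and the weight `alpha` are all assumptions.

```python
# Hypothetical sketch of a gloss-regularized MLM loss (not the GR-BERT code).
import torch
import torch.nn.functional as F

def gloss_regularized_loss(mlm_logits, target_ids, masked_hidden, gloss_embeddings, alpha=0.5):
    """mlm_logits:       (N, vocab)  predictions for the N masked positions
       target_ids:       (N,)        gold token ids at those positions
       masked_hidden:    (N, H)      contextual embeddings of the masked tokens
       gloss_embeddings: (N, H)      encoded glosses of the corresponding senses
    """
    # Standard masked-word prediction term.
    mlm_loss = F.cross_entropy(mlm_logits, target_ids)
    # Auxiliary regularizer: pull each contextual embedding toward its gloss
    # embedding, so contextually similar words receive similar representations.
    align_loss = 1.0 - F.cosine_similarity(masked_hidden, gloss_embeddings, dim=-1).mean()
    return mlm_loss + alpha * align_loss

# Toy usage with random tensors, only to show the shapes involved.
N, H, V = 8, 768, 30522
loss = gloss_regularized_loss(torch.randn(N, V), torch.randint(0, V, (N,)),
                              torch.randn(N, H), torch.randn(N, H))
```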
Controllable Fake Document Infilling for Cyber Deception
Yibo Hu | Yu Lin | Erick Skorupa Parolin | Latifur Khan | Kevin Hamlen
Findings of the Association for Computational Linguistics: EMNLP 2022
Recent works in cyber deception study how to deter malicious intrusion by generating multiple fake versions of a critical document, imposing costs on adversaries who must identify the correct information. However, existing approaches are context-agnostic, resulting in sub-optimal and unvaried outputs. We propose a novel context-aware model, Fake Document Infilling (FDI), which converts the problem into a controllable mask-then-infill procedure. FDI masks important concepts of varied lengths in the document, then infills a realistic but fake alternative that takes both the preceding and following context into account. We conduct comprehensive evaluations on technical documents and news stories. Results show that FDI outperforms the baselines in generating highly believable fakes with moderate modification, protecting critical information and deceiving adversaries.
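To make the mask-then-infill idea concrete, here is a rough stand-in using an off-the-shelf T5 checkpoint for fill-in-the-blank generation. This is not the FDI model: the masked span, the sampling settings, and the checkpoint choice are arbitrary, and FDI's controllable masking and believability constraints are not reproduced.

```python
# Stand-in illustration of mask-then-infill with a generic T5 model (not FDI).
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Mask an "important concept" with a sentinel token, keeping both the
# preceding and following context visible to the infilling model.
text = "The prototype reaches a top speed of <extra_id_0> under laboratory conditions."
inputs = tokenizer(text, return_tensors="pt")

# Sample a plausible but fake alternative for the masked span.
outputs = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```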
2020
SetConv: A New Approach for Learning from Imbalanced Data
Yang Gao | Yi-Fan Li | Yu Lin | Charu Aggarwal | Latifur Khan
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
For many real-world classification problems, e.g., sentiment classification, most existing machine learning methods are biased towards the majority class when the imbalance ratio (IR) is high. To address this problem, we propose a set convolution (SetConv) operation and an episodic training strategy to extract a single representative for each class, so that classifiers can subsequently be trained on a balanced class distribution. We prove that the proposed algorithm is permutation-invariant, i.e., its output does not depend on the order of its inputs, and experiments on multiple large-scale benchmark text datasets show the superiority of the proposed framework over other state-of-the-art methods.
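The key property in the abstract is a permutation-invariant aggregation that collapses each class, however many samples it has, into one representative. Below is a minimal, hypothetical sketch of such an aggregator; the paper's actual SetConv kernels and episodic training procedure are not reproduced, and the module and dimension names are assumptions.

```python
# Hypothetical permutation-invariant "one representative per class" aggregator
# in the spirit of SetConv (not the paper's implementation).
import torch
import torch.nn as nn

class ClassRepresentative(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.rho = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x):  # x: (n_samples_in_class, in_dim)
        # Averaging the transformed samples makes the output independent of
        # their order (permutation invariance) and of the class size.
        return self.rho(self.phi(x).mean(dim=0))  # (hidden_dim,)

# One representative per class regardless of class size, so a downstream
# classifier can be trained on a balanced set of representatives.
rep = ClassRepresentative(in_dim=300, hidden_dim=128)
majority = torch.randn(1000, 300)  # many samples
minority = torch.randn(20, 300)    # few samples
reps = torch.stack([rep(majority), rep(minority)])  # (2, 128)
```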
Co-authors
- Latifur Khan 2
- Zhecheng An 1
- Peihao Wu 1
- Zejun Ma 1
- Yibo Hu 1