Longxiang Gao
2021
Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts
Xinzhe Li | Ming Liu | Xingjun Ma | Longxiang Gao
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association
Universal adversarial texts (UATs) are short text units that can strongly affect the predictions of NLP models. Recent studies on universal adversarial attacks assume access to task-specific datasets, which is often unrealistic. We propose two types of Data-Free Adjusted Gradient (DFAG) attacks to show that effective UATs can be generated from only a single arbitrary example, which could even be manually crafted. Based on the proposed DFAG attacks, this paper explores the vulnerability of commonly used NLP models along two factors: network architecture and pre-trained embeddings. Our empirical studies on three text classification datasets reveal that: 1) CNN-based models are extremely vulnerable to UATs, while self-attention models are the most robust; 2) the vulnerability of CNN and LSTM models and the robustness of self-attention models can be attributed to whether they rely on training-data artifacts for their predictions; and 3) pre-trained embeddings can expose models to both direct universal adversarial attacks and UAT transfer attacks.
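To make the gradient-guided trigger search concrete, here is a minimal, hypothetical sketch of HotFlip-style universal trigger optimization against a toy classifier. The toy embedding, linear classifier, vocabulary size, and single random example are all illustrative assumptions; this is not the paper's DFAG attack, only the general first-order token-swap idea such attacks build on.

# Hypothetical sketch, NOT the paper's DFAG attack: the toy model,
# vocabulary size, and single "arbitrary example" below are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM, CLASSES, TRIGGER_LEN = 100, 16, 2, 3

embedding = nn.Embedding(VOCAB, DIM)
classifier = nn.Linear(DIM, CLASSES)       # stand-in for a trained model

example = torch.randint(0, VOCAB, (8,))    # the single arbitrary example
label = torch.tensor([0])                  # label whose loss the trigger maximizes
trigger = torch.zeros(TRIGGER_LEN, dtype=torch.long)

for _ in range(10):                        # iterative token replacement
    emb = embedding(torch.cat([trigger, example])).detach().requires_grad_(True)
    logits = classifier(emb.mean(dim=0)).unsqueeze(0)
    loss = nn.functional.cross_entropy(logits, label)
    loss.backward()
    # First-order Taylor approximation: swapping token e for e' in a trigger
    # slot changes the loss by roughly grad . (e' - e), so scoring every
    # vocabulary embedding against the slot's gradient finds the best swap.
    scores = emb.grad[:TRIGGER_LEN] @ embedding.weight.detach().T
    trigger = scores.argmax(dim=1)         # (TRIGGER_LEN,) best token per slot

print("trigger token ids:", trigger.tolist())

The first-order score keeps the search tractable: every vocabulary token is ranked per trigger slot in a single matrix product instead of re-running the model once per candidate swap.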
2020
Monash-Summ@LongSumm 20 SciSummPip: An Unsupervised Scientific Paper Summarization Pipeline
Jiaxin Ju | Ming Liu | Longxiang Gao | Shirui Pan
Proceedings of the First Workshop on Scholarly Document Processing
The Scholarly Document Processing (SDP) workshop aims to encourage more effort on natural language understanding of scientific text. It hosts three shared tasks, and we participated in the LongSumm shared task. In this paper, we describe our text summarization system, SciSummPip, inspired by SummPip (Zhao et al., 2020), an unsupervised multi-document summarization system for the news domain. Our SciSummPip includes a transformer-based language model, SciBERT (Beltagy et al., 2019), for contextual sentence representation, content selection with PageRank (Page et al., 1999), sentence-graph construction with both deep and linguistic information, sentence-graph clustering, and within-graph summary generation. Our work differs from the previous method in that content selection and a summary length constraint are applied to adapt to the scientific domain. Experimental results on both the training dataset and the blind test dataset show the effectiveness of our method, and we empirically verify the robustness of the modules used in SciSummPip with BERTScore (Zhang et al., 2019a).
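As an illustration of the content-selection stage, the following is a minimal sketch of PageRank over a sentence-similarity graph. The bag-of-words Jaccard similarity and toy sentences are assumptions standing in for the SciBERT-based representations the paper actually uses.

# Hypothetical sketch of PageRank-style content selection: rank sentences
# by centrality in a similarity graph and keep the most central ones.
import numpy as np

def pagerank(sim, damping=0.85, iters=50):
    """Power iteration over a row-normalized sentence-similarity matrix."""
    n = sim.shape[0]
    trans = sim / sim.sum(axis=1, keepdims=True)   # transition probabilities
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * trans.T @ rank
    return rank

def jaccard(a, b):
    """Toy word-overlap similarity; a stand-in for embedding similarity."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

sentences = [
    "We propose an unsupervised summarization pipeline.",
    "Sentence representations come from a pretrained language model.",
    "A graph over sentences is clustered to form summary sections.",
    "The weather was pleasant during the conference.",
]

# Small epsilon keeps every node reachable even with zero word overlap.
sim = np.array([[jaccard(s, t) for t in sentences] for s in sentences]) + 1e-6

scores = pagerank(sim)
top = np.argsort(scores)[::-1][:2]                 # keep the 2 most central
for i in sorted(top):
    print(f"{scores[i]:.3f}  {sentences[i]}")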