Lu Zhang


2022

EPiDA: An Easy Plug-in Data Augmentation Framework for High Performance Text Classification
Minyi Zhao | Lu Zhang | Yi Xu | Jiandong Ding | Jihong Guan | Shuigeng Zhou
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Recent work has empirically shown the effectiveness of data augmentation (DA) in NLP tasks, especially for those suffering from data scarcity. Intuitively, for a given amount of generated data, its diversity and quality are crucial to the performance of the target task. However, to the best of our knowledge, most existing methods consider either the diversity or the quality of augmented data, but not both, and thus cannot fully exploit the potential of DA for NLP. In this paper, we present EPiDA, an easy plug-in data augmentation framework for effective text classification. EPiDA employs two mechanisms to control data generation: relative entropy maximization (REM), which enhances the diversity of the augmented data, and conditional entropy minimization (CEM), which ensures its semantic consistency. EPiDA supports efficient and continuous data generation for effective classifier training. Extensive experiments show that EPiDA outperforms existing SOTA methods in most cases, despite using no agent networks or pre-trained generation networks, and that it works well with various DA algorithms and classification models.
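
A minimal sketch of the selection idea described in this abstract, assuming a `classifier` that maps a text to class logits (the trade-off weighting and the generation loop of the actual framework are omitted): candidates whose predicted distribution diverges from the original's score high on diversity (the REM idea), while candidates with low prediction entropy score high on consistency (the CEM idea).

```python
import torch
import torch.nn.functional as F

def epida_select(classifier, text, candidates, k=2):
    # Rank candidate augmentations of `text`: reward divergence from the
    # original prediction (diversity, the REM idea) and low prediction
    # entropy (semantic consistency, the CEM idea).
    with torch.no_grad():
        p_orig = F.softmax(classifier(text), dim=-1)   # assumed: logits out
        scored = []
        for cand in candidates:
            p_cand = F.softmax(classifier(cand), dim=-1)
            # KL(p_cand || p_orig): how far the candidate's prediction moved
            rem = F.kl_div(p_orig.log(), p_cand, reduction="sum")
            # Entropy of the candidate's prediction: lower = more confident
            cem = -(p_cand * p_cand.log()).sum()
            scored.append(((rem - cem).item(), cand))  # trade-off weight omitted
    scored.sort(key=lambda sc: sc[0], reverse=True)
    return [cand for _, cand in scored[:k]]
```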

Generating Textual Adversaries with Minimal Perturbation
Xingyi Zhao | Lu Zhang | Depeng Xu | Shuhan Yuan
Findings of the Association for Computational Linguistics: EMNLP 2022

Many word-level adversarial attack approaches for textual data have been proposed in recent studies. However, due to the massive search space of candidate-word combinations, existing approaches struggle to preserve the semantics of texts when crafting adversarial counterparts. In this paper, we develop a novel attack strategy that finds adversarial texts with high similarity to the original texts while introducing minimal perturbation. The rationale is that adversarial texts with small perturbations should better preserve the semantic meaning of the original texts. Experiments show that, compared with state-of-the-art attack approaches, our approach achieves higher success rates and lower perturbation rates on four benchmark datasets.
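
A hedged sketch of a greedy one-word-substitution attack in the spirit of the abstract (not the authors' algorithm); `predict`, `synonyms`, and `similarity` are hypothetical helpers standing in for the victim model, the candidate-word source, and the semantic similarity measure.

```python
def minimal_perturb_attack(text, predict, synonyms, similarity):
    # Greedy one-word substitution: among all single-word swaps that flip
    # the prediction, keep the one most similar to the original text.
    words = text.split()
    label = predict(text)
    best_sim, best_adv = -1.0, None
    for i, w in enumerate(words):
        for sub in synonyms(w):
            cand = " ".join(words[:i] + [sub] + words[i + 1:])
            if predict(cand) != label:        # prediction flipped
                sim = similarity(text, cand)
                if sim > best_sim:
                    best_sim, best_adv = sim, cand
    return best_adv  # None: no one-word attack found; escalate to more edits
```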

Robust Hate Speech Detection via Mitigating Spurious Correlations
Kshitiz Tiwari | Shuhan Yuan | Lu Zhang
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

We develop a novel robust hate speech detection model that can defend against both word- and character-level adversarial attacks. We identify that the essential factor making vanilla detection models vulnerable to adversarial attacks is the spurious correlation between certain target words in the text and the prediction label. To mitigate this spurious correlation, we describe the process of hate speech detection with a causal graph, then employ causal strength to quantify the spurious correlation and formulate a regularized entropy loss function. We show that our method generalizes the backdoor adjustment technique in causal inference. Finally, empirical evaluation demonstrates the efficacy of our method.
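
One possible reading of the regularizer, sketched below under strong assumptions (this is not the authors' exact formulation): approximate the causal strength of the target words by how much the model's output distribution shifts when those words are masked, and penalize that shift alongside the usual cross-entropy.

```python
import torch.nn.functional as F

def regularized_loss(model, input_ids, masked_ids, labels, lam=0.1):
    # `masked_ids` is the same batch with the suspected target words masked.
    logits = model(input_ids)          # assumed: model returns class logits
    logits_masked = model(masked_ids)
    ce = F.cross_entropy(logits, labels)
    # Proxy for the causal strength of the target words: divergence between
    # the masked and unmasked output distributions.
    log_p = F.log_softmax(logits, dim=-1)
    q = F.softmax(logits_masked, dim=-1)
    spurious = F.kl_div(log_p, q, reduction="batchmean")
    return ce + lam * spurious         # lam trades accuracy vs. robustness
```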

2021

Weakly-supervised Text Classification Based on Keyword Graph
Lu Zhang | Jiandong Ding | Yi Xu | Yingyao Liu | Shuigeng Zhou
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Weakly-supervised text classification has received much attention in recent years because it can alleviate the heavy burden of annotating massive data. Among existing approaches, keyword-driven methods are the mainstream: user-provided keywords are exploited to generate pseudo-labels for unlabeled texts. However, existing methods treat keywords independently and thus ignore the correlations among them, which could be useful if properly exploited. In this paper, we propose a novel framework called ClassKG that explores keyword-keyword correlations on a keyword graph via GNNs. Our framework is an iterative process. In each iteration, we first construct a keyword graph, so the task of assigning pseudo-labels is transformed into annotating keyword subgraphs. To improve annotation quality, we introduce a self-supervised task to pretrain a subgraph annotator, and then finetune it. With the pseudo-labels generated by the subgraph annotator, we train a text classifier to classify the unlabeled texts. Finally, we re-extract keywords from the classified texts. Extensive experiments on both long-text and short-text datasets show that our method substantially outperforms existing ones.
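
A minimal sketch of the graph-construction step, assuming keywords are matched by simple token overlap (the paper's actual construction and the GNN-based subgraph annotator are more involved): nodes are keywords, and edges record how often two keywords co-occur in a text.

```python
import itertools
import networkx as nx

def build_keyword_graph(texts, keywords):
    # Nodes are class keywords; an edge links two keywords that co-occur
    # in a text, weighted by how many texts they co-occur in.
    kw = set(keywords)
    g = nx.Graph()
    g.add_nodes_from(kw)
    for text in texts:
        present = sorted(kw.intersection(text.lower().split()))
        for a, b in itertools.combinations(present, 2):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    return g  # the keywords found in each text induce the subgraph to annotate
```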

2020

MCMH: Learning Multi-Chain Multi-Hop Rules for Knowledge Graph Reasoning
Lu Zhang | Mo Yu | Tian Gao | Yue Yu
Findings of the Association for Computational Linguistics: EMNLP 2020

Multi-hop reasoning approaches over knowledge graphs infer a missing relationship between entities with a multi-hop rule, which corresponds to a chain of relationships. We extend existing work to a generalized form of multi-hop rules, where each rule is a set of relation chains. To learn such generalized rules efficiently, we propose a two-step approach that first selects a small set of relation chains as a rule and then evaluates the confidence of the target relationship by jointly scoring the selected chains. To this end, we propose a game-theoretical framework that simultaneously optimizes the rule-selection and prediction steps. Empirical results show that our multi-chain multi-hop (MCMH) rules outperform the standard single-chain approaches, justifying both our formulation of generalized rules and the effectiveness of the proposed learning framework.
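
An illustrative sketch, under an assumed adjacency-map graph layout, of how a rule consisting of several relation chains could be jointly scored into a single confidence for the target relationship (in the paper the chain selection and weights are learned; here they are simply given).

```python
import math

def chain_reaches(graph, head, chain, tail):
    # Follow `chain` (a list of relations) from `head`; `graph` maps
    # entity -> relation -> set of neighbor entities (assumed layout).
    frontier = {head}
    for rel in chain:
        frontier = set().union(*(graph.get(e, {}).get(rel, set())
                                 for e in frontier))
        if not frontier:
            return False
    return tail in frontier

def score_rule(graph, head, tail, chains, weights, bias=0.0):
    # A rule is a *set* of chains; pool evidence from all chains that
    # actually connect head to tail into one logistic confidence.
    z = bias + sum(w for chain, w in zip(chains, weights)
                   if chain_reaches(graph, head, chain, tail))
    return 1.0 / (1.0 + math.exp(-z))
```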

2016

How Transferable are Neural Networks in NLP Applications?
Lili Mou | Zhao Meng | Rui Yan | Ge Li | Yan Xu | Lu Zhang | Zhi Jin
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

Sequence to Backward and Forward Sequences: A Content-Introducing Approach to Generative Short-Text Conversation
Lili Mou | Yiping Song | Rui Yan | Ge Li | Lu Zhang | Zhi Jin
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Using neural networks to generate replies in human-computer dialogue systems has attracted increasing attention over the past few years. However, the performance is not satisfactory: the networks tend to generate safe, universally relevant replies that carry little meaning. In this paper, we propose a content-introducing approach to neural-network-based generative dialogue systems. We first use pointwise mutual information (PMI) to predict a noun as a keyword, reflecting the main gist of the reply. We then propose seq2BF, a “sequence to backward and forward sequences” model, which generates a reply containing the given keyword. Experimental results show that our approach significantly outperforms traditional sequence-to-sequence models in terms of human evaluation and the entropy measure, and that the predicted keyword can appear at an appropriate position in the reply.
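
The PMI step is concrete enough to sketch: estimate PMI between query words and reply nouns from a corpus of (query, reply) pairs, then predict the noun with the highest average PMI against the query (a simplified sketch; the corpus layout and the `nouns` candidate list are assumptions, and real systems would precompute the counts).

```python
import math
from collections import Counter

def pmi_keyword(query_words, pairs, nouns):
    # `pairs` is a corpus of (query_words, reply_words) pairs; predict the
    # reply noun with the highest average PMI against the query words.
    q_count, r_count, joint = Counter(), Counter(), Counter()
    n = len(pairs)
    for q, r in pairs:
        qset, rset = set(q), set(r)
        for qw in qset:
            q_count[qw] += 1
        for rw in rset:
            r_count[rw] += 1
        for qw in qset:
            for rw in rset:
                joint[qw, rw] += 1

    def pmi(qw, rw):
        if joint[qw, rw] == 0:
            return float("-inf")
        return math.log(joint[qw, rw] * n / (q_count[qw] * r_count[rw]))

    seen = [w for w in query_words if q_count[w] > 0]
    return max(nouns,
               key=lambda rw: sum(pmi(qw, rw) for qw in seen) / max(len(seen), 1))
```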

Natural Language Inference by Tree-Based Convolution and Heuristic Matching
Lili Mou | Rui Men | Ge Li | Yan Xu | Lu Zhang | Rui Yan | Zhi Jin
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2015

Discriminative Neural Sentence Modeling by Tree-Based Convolution
Lili Mou | Hao Peng | Ge Li | Yan Xu | Lu Zhang | Zhi Jin
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

2009

Extracting Paraphrases of Technical Terms from Noisy Parallel Software Corpora
Xiaoyin Wang | David Lo | Jing Jiang | Lu Zhang | Hong Mei
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers