2023
A Black-Box Attack on Code Models via Representation Nearest Neighbor Search
Jie Zhang | Wei Ma | Qiang Hu | Shangqing Liu | Xiaofei Xie | Yves Le Traon | Yang Liu
Findings of the Association for Computational Linguistics: EMNLP 2023
Existing methods for generating adversarial code examples face several challenges: limited availability of substitute variables, high verification costs for these substitutes, and the creation of adversarial samples with noticeable perturbations. To address these concerns, our proposed approach, RNNS, uses a search seed based on historical attacks to find potential adversarial substitutes. Rather than using the discrete substitutes directly, RNNS maps them to a continuous vector space with a pre-trained variable name encoder and, based on this vector representation, predicts and selects better substitutes for attacks. We evaluated the performance of RNNS across six coding tasks encompassing three programming languages: Java, Python, and C. We employed three pre-trained code models (CodeBERT, GraphCodeBERT, and CodeT5), yielding a total of 18 victim models. The results demonstrate that RNNS outperforms the baselines in terms of attack success rate (ASR) and query times (QT). Furthermore, the perturbation introduced by RNNS is smaller than that of the baselines in terms of the number of replaced variables and the change in variable length. Lastly, our experiments indicate that RNNS is effective at attacking defended models and can be employed for adversarial training.
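A minimal sketch of the representation nearest-neighbor idea described above, assuming a stand-in encoder (random vectors in place of the paper's pre-trained variable name encoder) and a hypothetical candidate vocabulary:

```python
# Candidate variable names are compared in a continuous embedding space
# rather than enumerated discretely; the encoder here is a stand-in.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary of substitute variable names and their embeddings.
candidates = ["idx", "count", "tmp", "buffer", "node", "total"]
embed = {name: rng.normal(size=64) for name in candidates}

def nearest_substitutes(seed_vec: np.ndarray, k: int = 3) -> list[str]:
    """Rank candidate substitutes by cosine similarity to a search seed."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = sorted(candidates, key=lambda n: cos(seed_vec, embed[n]), reverse=True)
    return scored[:k]

# In the actual attack, the seed would summarize embeddings from previous
# attack steps; here it is just a random vector for illustration.
seed = rng.normal(size=64)
print(nearest_substitutes(seed))
```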
The USTC’s Dialect Speech Translation System for IWSLT 2023
Pan Deng | Shihao Chen | Weitai Zhang | Jie Zhang | Lirong Dai
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
This paper presents the USTC system for the IWSLT 2023 Dialectal and Low-resource shared task, which involves translation from Tunisian Arabic to English. We aim to investigate the mutual transfer between Tunisian Arabic and Modern Standard Arabic (MSA) to enhance the performance of speech translation (ST) by following standard pre-training and fine-tuning pipelines. We synthesize a substantial amount of pseudo Tunisian-English paired data using a multi-step pre-training approach. Integrating a Tunisian-MSA translation module into the end-to-end ST model enables the transfer from Tunisian to MSA and facilitates linguistic normalization of the dialect. To increase the robustness of the ST system, we optimize the model's ability to adapt to ASR errors and propose a model ensemble method. Results indicate that applying the dialect transfer method can increase the BLEU score of dialectal ST. The optimal system ensembles both cascaded and end-to-end ST models, achieving BLEU improvements of 2.4 and 2.8 on the test1 and test2 sets, respectively, compared to the best published system.
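As a rough illustration of one standard way to ensemble translation models (the abstract does not specify the exact ensembling method), per-token log-probabilities from several models can be interpolated at each decoding step; the scorers below are toy stand-ins:

```python
# Weighted interpolation of per-token probabilities across models,
# a common ensembling scheme; real systems would call model forward passes.
import math

def ensemble_logprob(scorers, prefix, token, weights=None):
    """Weighted average of P(token | prefix) across models, in log space."""
    weights = weights or [1.0 / len(scorers)] * len(scorers)
    probs = [math.exp(s(prefix, token)) for s in scorers]
    return math.log(sum(w * p for w, p in zip(weights, probs)))

# Toy scorers standing in for a cascaded and an end-to-end ST model.
cascaded = lambda prefix, tok: -1.0
end2end = lambda prefix, tok: -2.0
print(ensemble_logprob([cascaded, end2end], ("hello",), "world"))
```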
System Report for CCL23-Eval Task 7: Chinese Grammatical Error Diagnosis Based on Model Fusion
Yanmei Ma | Laiqi Wang | Zhenghua Chen | Yanran Zhou | Ya Han | Jie Zhang
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
The purpose of the Chinese Grammatical Error Diagnosis task is to identify the positions and types of grammar errors in Chinese texts. In Track 2 of CCL2023-CLTC, Chinese grammar errors are classified into four categories: Redundant Words, Missing Words, Word Selection, and Word Ordering Errors. We conducted data filtering, model research, and model fine-tuning in sequence. Then, we performed weighted fusion of models based on perplexity calculations and introduced various post-processing strategies. As a result, the performance of the model on the test set, measured by COM, reached 49.12.
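A minimal sketch of perplexity-based weighted fusion, assuming (as the report suggests) that models with lower perplexity receive larger fusion weights; the probability distributions and perplexity values below are stand-ins:

```python
# Fuse per-class predictions from several models, weighting each model
# by its inverse perplexity (lower perplexity -> larger weight).
import numpy as np

def fusion_weights(perplexities):
    """Normalize inverse perplexities so the weights sum to 1."""
    inv = np.array([1.0 / p for p in perplexities])
    return inv / inv.sum()

def fuse_predictions(prob_dists, perplexities):
    """Weighted sum of each model's probability distribution."""
    w = fusion_weights(perplexities)
    return np.tensordot(w, np.stack(prob_dists), axes=1)

# Two hypothetical models scoring the four error types
# (Redundant, Missing, Selection, Ordering).
p1 = np.array([0.6, 0.2, 0.1, 0.1])
p2 = np.array([0.3, 0.4, 0.2, 0.1])
print(fuse_predictions([p1, p2], perplexities=[12.0, 20.0]))
```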
2020
Cascaded Semantic and Positional Self-Attention Network for Document Classification
Juyong Jiang | Jie Zhang | Kai Zhang
Findings of the Association for Computational Linguistics: EMNLP 2020
Transformers have shown great success in learning representations for language modelling. However, an open challenge remains: how to systematically aggregate semantic information (word embeddings) with positional (or temporal) information (word order). In this work, we propose a new architecture that aggregates the two sources of information using a cascaded semantic and positional self-attention network (CSPAN) in the context of document classification. CSPAN uses a semantic self-attention layer cascaded with a Bi-LSTM to process the semantic and positional information in a sequential manner, and then adaptively combines them through a residual connection. Compared with commonly used positional encoding schemes, CSPAN can exploit the interaction between semantics and word positions in a more interpretable and adaptive manner, and classification performance is notably improved while preserving a compact model size and a high convergence rate. We evaluate the CSPAN model on several benchmark datasets for document classification with careful ablation studies, and demonstrate encouraging results compared with the state of the art.
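A simplified PyTorch sketch of the cascade described in the abstract: self-attention for semantics, a Bi-LSTM for positional information, and a residual combination. The layer sizes and mean-pooling classifier head are illustrative assumptions:

```python
# Semantic self-attention -> Bi-LSTM -> residual add, then classify.
import torch
import torch.nn as nn

class CSPANSketch(nn.Module):
    def __init__(self, dim=128, heads=4, num_classes=5):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Bi-LSTM hidden size dim//2 so the bidirectional output matches
        # `dim` and the residual addition lines up.
        self.bilstm = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, x):                       # x: (batch, seq, dim) embeddings
        sem, _ = self.attn(x, x, x)             # semantic self-attention
        pos, _ = self.bilstm(sem)               # positional/temporal modeling
        h = sem + pos                           # residual combination
        return self.classifier(h.mean(dim=1))   # document-level prediction

print(CSPANSketch()(torch.randn(2, 10, 128)).shape)  # torch.Size([2, 5])
```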
Diversify Question Generation with Continuous Content Selectors and Question Type Modeling
Zhen Wang | Siwei Rao | Jie Zhang | Zhen Qin | Guangjian Tian | Jun Wang
Findings of the Association for Computational Linguistics: EMNLP 2020
Generating questions based on answers and relevant contexts is a challenging task. Recent work mainly pays attention to the quality of a single generated question. However, question generation is actually a one-to-many problem, as it is possible to raise questions with different focuses on the context and various means of expression. In this paper, we explore the diversity of question generation and propose methods targeting these two aspects. Specifically, we relate contextual focuses to content selectors, which are modeled by a continuous latent variable using the conditional variational auto-encoder (CVAE) framework. In the realization of the CVAE, a multimodal prior distribution is adopted to allow for more diverse content selectors. To account for various means of expression, question types are explicitly modeled and a diversity-promoting algorithm is further proposed. Experimental results on public datasets show that our proposed method significantly improves the diversity of generated questions, especially from the perspective of using different question types. Overall, our method achieves a better trade-off between generation quality and diversity than existing approaches.
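A minimal sketch of sampling content selectors from a multimodal (Gaussian mixture) prior, as the abstract describes; the component parameters and latent dimensionality are illustrative assumptions, not the paper's settings:

```python
# Sample a latent content selector z from a mixture-of-Gaussians prior:
# pick a mode, then sample from that mode. Each mode would bias the
# decoder toward a different contextual focus.
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(means, stds, mode_weights):
    """Pick a mixture component, then sample z from that Gaussian."""
    k = rng.choice(len(means), p=mode_weights)
    return means[k] + stds[k] * rng.normal(size=means[k].shape)

# Three hypothetical prior modes over a 16-dimensional latent space.
means = [np.full(16, m) for m in (-1.0, 0.0, 1.0)]
stds = [np.full(16, 0.5)] * 3
z = sample_latent(means, stds, mode_weights=[0.3, 0.4, 0.3])
print(z[:4])  # the decoder would condition on z to choose content spans
```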
Data Augmentation for Multiclass Utterance Classification – A Systematic Study
Binxia Xu | Siyuan Qiu | Jie Zhang | Yafang Wang | Xiaoyu Shen | Gerard de Melo
Proceedings of the 28th International Conference on Computational Linguistics
Utterance classification is a key component in many conversational systems. However, classifying real-world user utterances is challenging, as people may express their ideas and thoughts in manifold ways, and the amount of training data for some categories may be fairly limited, resulting in imbalanced data distributions. To alleviate these issues, we conduct a comprehensive survey of data augmentation approaches for text classification, including simple random resampling, word-level transformations, and neural text generation, to cope with imbalanced data. Our experiments focus on multi-class datasets with a large number of data samples, a setting that has not been systematically studied in previous work. The results show that the effectiveness of different data augmentation schemes depends on the nature of the dataset under consideration.
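A hedged sketch of two of the surveyed augmentation families, random resampling and a word-level transformation (random swap), on toy utterances:

```python
# Oversample a minority class, then apply a word-level perturbation.
import random

random.seed(0)

def random_swap(tokens):
    """Swap two random token positions (a word-level transformation)."""
    out = tokens[:]
    if len(out) >= 2:
        i, j = random.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def oversample(examples, target_size):
    """Simple random resampling of a minority class up to target_size."""
    return examples + [random.choice(examples) for _ in range(target_size - len(examples))]

minority = [["play", "some", "jazz"], ["turn", "volume", "up"]]
augmented = [random_swap(t) for t in oversample(minority, 4)]
print(augmented)
```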
2018
An Operation Network for Abstractive Sentence Compression
Naitong Yu | Jie Zhang | Minlie Huang | Xiaoyan Zhu
Proceedings of the 27th International Conference on Computational Linguistics
Sentence compression condenses a sentence while preserving its most important contents. Delete-based models have a strong ability to remove undesired words, while generate-based models are able to reorder or rephrase words, which is closer to how humans compress sentences. In this paper, we propose Operation Network, a neural network approach for abstractive sentence compression that combines the advantages of both delete-based and generate-based sentence compression models. The central idea of Operation Network is to model sentence compression as an editing procedure: first, unnecessary words are deleted from the source sentence; then, new words are either generated from a large vocabulary or copied directly from the source sentence. A compressed sentence is obtained by a series of such edit operations (delete, copy, and generate). Experiments show that Operation Network outperforms state-of-the-art baselines.
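A minimal sketch of the delete/copy/generate view of compression; the operation sequence below is hand-written for illustration, whereas the paper predicts it with a neural model:

```python
# Apply a sequence of edit operations to source tokens to produce
# the compressed sentence.
def apply_operations(source, ops):
    """ops: list of ('delete', i), ('copy', i), or ('generate', word)."""
    out = []
    for op, arg in ops:
        if op == "copy":
            out.append(source[arg])   # keep a source word
        elif op == "generate":
            out.append(arg)           # emit a new word from the vocabulary
        # 'delete' skips the source word entirely
    return out

source = "the very big cat sat on the mat".split()
ops = [("copy", 0), ("delete", 1), ("delete", 2), ("copy", 3),
       ("generate", "rested"), ("copy", 5), ("copy", 6), ("copy", 7)]
print(" ".join(apply_operations(source, ops)))  # "the cat rested on the mat"
```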
2010
A Review Selection Approach for Accurate Feature Rating Estimation
Chong Long | Jie Zhang | Xiaoyan Zhu
Coling 2010: Posters
2006
Exploring Syntactic Features for Relation Extraction using a Convolution Tree Kernel
Min Zhang | Jie Zhang | Jian Su
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference
A Composite Kernel to Extract Relations between Entities with Both Flat and Structured Features
Min Zhang | Jie Zhang | Jian Su | GuoDong Zhou
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics
2005
Exploring Various Knowledge in Relation Extraction
GuoDong Zhou | Jian Su | Jie Zhang | Min Zhang
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)
2004
Multi-Criteria-based Active Learning for Named Entity Recognition
Dan Shen | Jie Zhang | Jian Su | Guodong Zhou | Chew-Lim Tan
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)
2003
Effective Adaptation of Hidden Markov Model-based Named Entity Recognizer for Biomedical Domain
Dan Shen | Jie Zhang | Guodong Zhou | Jian Su | Chew-Lim Tan
Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine