2024
Message Passing on Semantic-Anchor-Graphs for Fine-grained Emotion Representation Learning and Classification
Pinyi Zhang | Jingyang Chen | Junchen Shen | Zijie Zhai | Ping Li | Jie Zhang | Kai Zhang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Emotion classification has wide applications in education, robotics, virtual reality, etc. However, identifying subtle differences between fine-grained emotion categories remains challenging. Current methods typically aggregate numerous token embeddings of a sentence into a single vector, which, while being an efficient compressor, may not fully capture complex semantic and temporal distributions. To solve this problem, we propose SEmantic ANchor Graph Neural Networks (SEAN-GNN) for fine-grained emotion classification. It learns a group of representative, multi-faceted semantic anchors in the token embedding space: using these anchors as a global reference, any sentence can be projected onto them to form a “semantic-anchor graph”, with node attributes and edge weights quantifying the semantic and temporal information respectively. The graph structure is well aligned across sentences and, importantly, allows for generating comprehensive emotion representations with respect to K different anchors. Message passing on this graph can further integrate and refine the learned features. Empirically, SEAN-GNN generates meaningful semantic anchors and discriminative graph patterns for different emotions, with promising classification results on 6 popular benchmark datasets compared with state-of-the-art methods.
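A minimal sketch of the anchor-graph construction the abstract describes. The hard nearest-anchor assignment, the inner-product similarity, and the transition-count edges are illustrative assumptions for exposition, not the paper's exact formulation:

```python
def nearest_anchor(tok, anchors):
    # hard assignment by inner-product similarity (a simplification;
    # the paper may use a softer projection)
    return max(range(len(anchors)),
               key=lambda k: sum(t * a for t, a in zip(tok, anchors[k])))

def build_anchor_graph(tokens, anchors):
    """Project a sentence (a list of token embeddings) onto K anchors.

    Node attributes: mean embedding of the tokens mapped to each anchor.
    Edge weights: transition counts between the anchors of consecutive
    tokens, standing in for temporal information.
    """
    K, d = len(anchors), len(anchors[0])
    assign = [nearest_anchor(t, anchors) for t in tokens]
    nodes, counts = [[0.0] * d for _ in range(K)], [0] * K
    for tok, k in zip(tokens, assign):
        counts[k] += 1
        nodes[k] = [n + t for n, t in zip(nodes[k], tok)]
    nodes = [[x / c for x in n] if c else n for n, c in zip(nodes, counts)]
    edges = [[0.0] * K for _ in range(K)]
    for a, b in zip(assign, assign[1:]):
        edges[a][b] += 1.0
    return nodes, edges
```

Because every sentence is expressed over the same K anchors, the resulting graphs are aligned across sentences and can be fed to a standard message-passing GNN.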
SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning
Zhihao Wen | Jie Zhang | Yuan Fang
Findings of the Association for Computational Linguistics: ACL 2024
Fine-tuning all parameters of large language models (LLMs) necessitates substantial computational power and extended time. Recent advances in parameter-efficient fine-tuning (PEFT) techniques, such as Adapter tuning and LoRA, allow for adjustments to only a minor fraction of the parameters of these LLMs. Concurrently, it has been noted that the issue of over-smoothing diminishes the effectiveness of these Transformer-based LLMs, resulting in suboptimal performance in downstream tasks. In this paper, we present SIBO, a SImple BOoster that enhances PEFT by injecting an initial residual. SIBO is straightforward and readily extensible to a range of state-of-the-art PEFT techniques to alleviate over-smoothing and enhance performance. Extensive experiments on 22 benchmark datasets demonstrate that SIBO significantly enhances the performance of various strong baselines, achieving up to 15.7% and 23.5% improvement over existing PEFT methods on the arithmetic and commonsense reasoning tasks, respectively.
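The core idea, injecting an "initial residual" into the PEFT module's input, can be sketched as follows. The mixing weight `lam` and the exact injection point are illustrative assumptions, not values taken from the paper:

```python
def inject_initial_residual(hidden, initial, lam=0.2):
    """Mix the first-layer token embedding ("initial residual") into the
    hidden state fed to a PEFT module (e.g. an adapter or LoRA branch).

    Keeping a fraction of the layer-0 representation at every depth is one
    way to counteract over-smoothing in deep Transformer stacks.
    """
    return [(1 - lam) * h + lam * x0 for h, x0 in zip(hidden, initial)]
```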
Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models
Chen Qian | Jie Zhang | Wei Yao | Dongrui Liu | Zhenfei Yin | Yu Qiao | Yong Liu | Jing Shao
Findings of the Association for Computational Linguistics: ACL 2024
Ensuring the trustworthiness of large language models (LLMs) is crucial. Most studies concentrate on fully pre-trained LLMs to better understand and improve LLMs’ trustworthiness. In this paper, to reveal the untapped potential of pre-training, we pioneer the exploration of LLMs’ trustworthiness during this period, focusing on five key dimensions: reliability, privacy, toxicity, fairness, and robustness. To begin with, we apply linear probing to LLMs. The high probing accuracy suggests that LLMs in early pre-training can already distinguish concepts in each trustworthiness dimension. Therefore, to further uncover the hidden possibilities of pre-training, we extract steering vectors from an LLM’s pre-training checkpoints to enhance the LLM’s trustworthiness. Finally, inspired by the theoretical result that mutual information estimation is bounded by linear probing accuracy, we also probe LLMs with mutual information to investigate the dynamics of trustworthiness during pre-training. We are the first to observe a similar two-phase phenomenon: fitting and compression. This research provides an initial exploration of trustworthiness modeling during LLM pre-training, seeking to unveil new insights and spur further developments in the field.
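Linear probing, as used here, trains a small classifier on frozen hidden representations; high accuracy indicates the representations already encode the concept. A self-contained sketch with a logistic-regression probe trained by plain SGD (the probe architecture and hyperparameters are illustrative, not the paper's):

```python
import math

def train_linear_probe(reps, labels, epochs=200, lr=0.5):
    """Fit a logistic-regression probe on frozen hidden representations."""
    d = len(reps[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, y in zip(reps, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def probe_accuracy(w, b, reps, labels):
    hits = sum(((sum(wi * xi for wi, xi in zip(w, x)) + b) > 0) == (y == 1)
               for x, y in zip(reps, labels))
    return hits / len(labels)
```

If checkpoints early in pre-training already yield high probe accuracy on, say, toxic vs. non-toxic inputs, the model has learned to separate that concept before fine-tuning ever happens.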
Dual Prompt Tuning based Contrastive Learning for Hierarchical Text Classification
Sishi Xiong | Yu Zhao | Jie Zhang | Li Mengxiang | Zhongjiang He | Xuelong Li | Shuangyong Song
Findings of the Association for Computational Linguistics: ACL 2024
Hierarchical text classification aims at categorizing texts into a multi-tiered tree-structured hierarchy of labels. Existing methods focus on capturing hierarchy-aware text features by exploiting explicit parent-child relationships, while interactions between peer labels are rarely taken into account, resulting in severe label confusion within each layer. In this work, we propose a novel Dual Prompt Tuning (DPT) method, which emphasizes identifying discrimination among peer labels by performing contrastive learning on each hierarchical layer. We design an innovative hand-crafted prompt containing slots for both positive and negative label predictions to cooperate with contrastive learning. In addition, we introduce a label hierarchy self-sensing auxiliary task to ensure cross-layer label consistency. Extensive experiments demonstrate that DPT achieves significant improvements and outperforms the current state-of-the-art methods on the BGC and RCV1-V2 benchmark datasets.
RU22Fact: Optimizing Evidence for Multilingual Explainable Fact-Checking on Russia-Ukraine Conflict
Yirong Zeng | Xiao Ding | Yi Zhao | Xiangyu Li | Jie Zhang | Chao Yao | Ting Liu | Bing Qin
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Fact-checking is the task of verifying the factuality of a given claim by examining the available evidence. High-quality evidence plays a vital role in enhancing fact-checking systems and facilitating the generation of explanations that are understandable to humans. However, providing both sufficient and relevant evidence for explainable fact-checking systems poses a challenge. To tackle this challenge, we propose a method based on a Large Language Model to automatically retrieve and summarize evidence from the Web. Furthermore, we construct RU22Fact, a novel multilingual explainable fact-checking dataset of 16K samples on the 2022 Russia-Ukraine conflict, each containing a real-world claim, optimized evidence, and a referenced explanation. To establish a baseline for our dataset, we also develop an end-to-end explainable fact-checking system to verify claims and generate explanations. Experimental results demonstrate the prospect of optimized evidence in increasing fact-checking performance and also indicate the possibility of further progress in the end-to-end claim verification and explanation generation tasks.
2023
The USTC’s Dialect Speech Translation System for IWSLT 2023
Pan Deng | Shihao Chen | Weitai Zhang | Jie Zhang | Lirong Dai
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
This paper presents the USTC system for the IWSLT 2023 Dialectal and Low-resource shared task, which involves translation from Tunisian Arabic to English. We aim to investigate the mutual transfer between Tunisian Arabic and Modern Standard Arabic (MSA) to enhance the performance of speech translation (ST) by following standard pre-training and fine-tuning pipelines. We synthesize a substantial amount of pseudo Tunisian-English paired data using a multi-step pre-training approach. Integrating a Tunisian-MSA translation module into the end-to-end ST model enables the transfer from Tunisian to MSA and facilitates linguistic normalization of the dialect. To increase the robustness of the ST system, we optimize the model’s ability to adapt to ASR errors and propose a model ensemble method. Results indicate that applying the dialect transfer method can increase the BLEU score of dialectal ST. It is shown that the optimal system ensembles both cascaded and end-to-end ST models, achieving BLEU improvements of 2.4 and 2.8 on the test1 and test2 sets, respectively, compared to the best published system.
A Black-Box Attack on Code Models via Representation Nearest Neighbor Search
Jie Zhang | Wei Ma | Qiang Hu | Shangqing Liu | Xiaofei Xie | Yves Le Traon | Yang Liu
Findings of the Association for Computational Linguistics: EMNLP 2023
Existing methods for generating adversarial code examples face several challenges: limited availability of substitute variables, high verification costs for these substitutes, and the creation of adversarial samples with noticeable perturbations. To address these concerns, our proposed approach, RNNS, uses a search seed based on historical attacks to find potential adversarial substitutes. Rather than directly using the discrete substitutes, they are mapped to a continuous vector space using a pre-trained variable name encoder. Based on the vector representation, RNNS predicts and selects better substitutes for attacks. We evaluated the performance of RNNS across six coding tasks encompassing three programming languages: Java, Python, and C. We employed three pre-trained code models (CodeBERT, GraphCodeBERT, and CodeT5) that resulted in a total of 18 victim models. The results demonstrate that RNNS outperforms baselines in terms of attack success rate (ASR) and query times (QT). Furthermore, the perturbation of adversarial examples introduced by RNNS is smaller compared to the baselines in terms of the number of replaced variables and the change in variable length. Lastly, our experiments indicate that RNNS is efficient in attacking defended models and can be employed for adversarial training.
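The nearest-neighbor selection step can be illustrated as below. The candidate names, their embeddings, and the use of Euclidean distance are assumptions for exposition; in RNNS the vectors would come from the pre-trained variable-name encoder:

```python
import math

def nearest_substitutes(seed_vec, candidate_vecs, k=2):
    """Rank candidate variable names by Euclidean distance between their
    embeddings and a search seed derived from past successful attacks.

    candidate_vecs: dict mapping a variable name to its embedding vector.
    Returns the k names closest to the seed in the continuous space.
    """
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(seed_vec, v)))
    ranked = sorted(candidate_vecs, key=lambda name: dist(candidate_vecs[name]))
    return ranked[:k]
```

Searching in the continuous embedding space lets the attack propose semantically plausible renamings without querying the victim model for every discrete candidate.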
System Report for CCL23-Eval Task 7: Chinese Grammatical Error Diagnosis Based on Model Fusion
Yanmei Ma | Laiqi Wang | Zhenghua Chen | Yanran Zhou | Ya Han | Jie Zhang
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
The purpose of the Chinese Grammatical Error Diagnosis task is to identify the positions and types of grammar errors in Chinese texts. In Track 2 of CCL2023-CLTC, Chinese grammar errors are classified into four categories: Redundant Words, Missing Words, Word Selection, and Word Ordering Errors. We conducted data filtering, model research, and model fine-tuning in sequence. Then, we performed weighted fusion of models based on perplexity calculations and introduced various post-processing strategies. As a result, the performance of the model on the test set, measured by COM, reached 49.12.
2020
Data Augmentation for Multiclass Utterance Classification – A Systematic Study
Binxia Xu | Siyuan Qiu | Jie Zhang | Yafang Wang | Xiaoyu Shen | Gerard de Melo
Proceedings of the 28th International Conference on Computational Linguistics
Utterance classification is a key component in many conversational systems. However, classifying real-world user utterances is challenging, as people may express their ideas and thoughts in manifold ways, and the amount of training data for some categories may be fairly limited, resulting in imbalanced data distributions. To alleviate these issues, we conduct a comprehensive survey of data augmentation approaches for text classification, including simple random resampling, word-level transformations, and neural text generation to cope with imbalanced data. Our experiments focus on multi-class datasets with a large number of data samples, a setting that has not been systematically studied in previous work. The results show that the effectiveness of different data augmentation schemes depends on the nature of the dataset under consideration.
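The simplest scheme in the study, random resampling, can be sketched as follows; this is a generic illustration of oversampling for imbalanced classes, not the paper's exact procedure:

```python
import random

def oversample(texts, labels, seed=0):
    """Naive random oversampling: duplicate minority-class utterances until
    every class reaches the majority-class count."""
    rng = random.Random(seed)
    by_cls = {}
    for t, y in zip(texts, labels):
        by_cls.setdefault(y, []).append(t)
    target = max(len(v) for v in by_cls.values())
    out = []
    for y, items in by_cls.items():
        extra = [rng.choice(items) for _ in range(target - len(items))]
        out.extend((t, y) for t in items + extra)
    return out
```

Word-level transformations and neural generation replace the verbatim duplication here with paraphrased or synthesized utterances, which is where the surveyed methods differ.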
Cascaded Semantic and Positional Self-Attention Network for Document Classification
Juyong Jiang | Jie Zhang | Kai Zhang
Findings of the Association for Computational Linguistics: EMNLP 2020
Transformers have shown great success in learning representations for language modelling. However, an open challenge still remains on how to systematically aggregate semantic information (word embeddings) with positional (or temporal) information (word orders). In this work, we propose a new architecture to aggregate the two sources of information using a cascaded semantic and positional self-attention network (CSPAN) in the context of document classification. The CSPAN uses a semantic self-attention layer cascaded with a Bi-LSTM to process the semantic and positional information in a sequential manner, and then adaptively combines them through a residual connection. Compared with commonly used positional encoding schemes, CSPAN can exploit the interaction between semantics and word positions in a more interpretable and adaptive manner, and the classification performance can be notably improved while simultaneously preserving a compact model size and high convergence rate. We evaluate the CSPAN model on several benchmark data sets for document classification with careful ablation studies, and demonstrate encouraging results compared with the state of the art.
Diversify Question Generation with Continuous Content Selectors and Question Type Modeling
Zhen Wang | Siwei Rao | Jie Zhang | Zhen Qin | Guangjian Tian | Jun Wang
Findings of the Association for Computational Linguistics: EMNLP 2020
Generating questions based on answers and relevant contexts is a challenging task. Recent work mainly pays attention to the quality of a single generated question. However, question generation is actually a one-to-many problem, as it is possible to raise questions with different focuses on contexts and various means of expression. In this paper, we explore the diversity of question generation and address it from these two aspects. Specifically, we relate contextual focuses with content selectors, which are modeled by a continuous latent variable with the technique of conditional variational auto-encoder (CVAE). In the realization of CVAE, a multimodal prior distribution is adopted to allow for more diverse content selectors. To take into account various means of expression, question types are explicitly modeled and a diversity-promoting algorithm is further proposed. Experimental results on public datasets show that our proposed method can significantly improve the diversity of generated questions, especially from the perspective of using different question types. Overall, our proposed method achieves a better trade-off between generation quality and diversity compared with existing approaches.
2018
An Operation Network for Abstractive Sentence Compression
Naitong Yu | Jie Zhang | Minlie Huang | Xiaoyan Zhu
Proceedings of the 27th International Conference on Computational Linguistics
Sentence compression condenses a sentence while preserving its most important contents. Delete-based models have the strong ability to delete undesired words, while generate-based models are able to reorder or rephrase the words, which are more coherent to human sentence compression. In this paper, we propose Operation Network, a neural network approach for abstractive sentence compression, which combines the advantages of both delete-based and generate-based sentence compression models. The central idea of Operation Network is to model the sentence compression process as an editing procedure. First, unnecessary words are deleted from the source sentence, then new words are either generated from a large vocabulary or copied directly from the source sentence. A compressed sentence can be obtained by a series of such edit operations (delete, copy and generate). Experiments show that Operation Network outperforms state-of-the-art baselines.
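The editing procedure described above can be sketched with a toy decoder that consumes a source sentence left to right under a sequence of edit operations. The operation encoding is an illustrative assumption; the paper's model predicts these operations with a neural network rather than taking them as input:

```python
def apply_operations(tokens, ops):
    """Apply (op, arg) edit operations to compress a sentence.

    op is one of "delete" (skip the current source token), "copy" (emit the
    current source token), or "generate" (emit a new word arg from the
    vocabulary without consuming a source token).
    """
    out, i = [], 0
    for op, arg in ops:
        if op == "delete":
            i += 1
        elif op == "copy":
            out.append(tokens[i])
            i += 1
        elif op == "generate":
            out.append(arg)
    return out
```

For example, compressing "the big red dog barked" by copying "the", deleting "big" and "red", copying "dog", and generating "howled" yields "the dog howled".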
2010
A Review Selection Approach for Accurate Feature Rating Estimation
Chong Long | Jie Zhang | Xiaoyan Zhu
Coling 2010: Posters
2006
A Composite Kernel to Extract Relations between Entities with Both Flat and Structured Features
Min Zhang | Jie Zhang | Jian Su | GuoDong Zhou
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics
Exploring Syntactic Features for Relation Extraction using a Convolution Tree Kernel
Min Zhang | Jie Zhang | Jian Su
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference
2005
Exploring Various Knowledge in Relation Extraction
GuoDong Zhou | Jian Su | Jie Zhang | Min Zhang
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)
2004
Multi-Criteria-based Active Learning for Named Entity Recognition
Dan Shen | Jie Zhang | Jian Su | Guodong Zhou | Chew-Lim Tan
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)
2003
Effective Adaptation of Hidden Markov Model-based Named Entity Recognizer for Biomedical Domain
Dan Shen | Jie Zhang | Guodong Zhou | Jian Su | Chew-Lim Tan
Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine