Ke Ren
2023
SeqXGPT: Sentence-Level AI-Generated Text Detection
Pengyu Wang | Linyang Li | Ke Ren | Botian Jiang | Dong Zhang | Xipeng Qiu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Widely applied large language models (LLMs) can generate human-like content, raising concerns about the abuse of LLMs. It is therefore important to build strong AI-generated text (AIGT) detectors. Current works only consider document-level AIGT detection, so in this paper we first introduce a sentence-level detection challenge by synthesizing a dataset of documents polished with LLMs; that is, the documents contain both sentences written by humans and sentences modified by LLMs. We then propose Sequence X (Check) GPT, a novel method that uses lists of log probabilities from white-box LLMs as features for sentence-level AIGT detection. These features are composed like waves in speech processing and cannot be studied directly by LLMs, so we build SeqXGPT on convolution and self-attention networks. We evaluate it on both sentence- and document-level detection challenges. Experimental results show that previous methods struggle with sentence-level AIGT detection, while our method not only significantly surpasses baseline methods on both sentence- and document-level detection challenges but also exhibits strong generalization capabilities.
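The abstract's core idea, treating per-token log probabilities as a wave-like signal and processing it with convolution and self-attention, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the log probabilities here are a toy array (in SeqXGPT they come from white-box LLMs), and the sentence boundaries, kernel size, and scoring rule are all illustrative assumptions.

```python
import numpy as np

def logprob_wave(logprobs, kernel=3):
    """Smooth the per-token log-probability 'wave' with a 1-D convolution.
    (Toy stand-in for the convolutional feature extractor.)"""
    k = np.ones(kernel) / kernel
    return np.convolve(logprobs, k, mode="same")

def self_attention(x):
    """Single-head scaled dot-product self-attention over a (T, d) sequence."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

# Toy per-token log probabilities for a 10-token document: the middle span
# has suspiciously high (close-to-zero) log probabilities.
logprobs = np.array([-4.1, -3.8, -4.0, -3.9, -0.4, -0.3, -0.5, -0.2, -3.7, -4.2])

# Stack the raw wave with its convolved version as a (T, 2) feature sequence,
# then contextualize it with self-attention (conv + self-attention stack).
features = np.stack([logprobs, logprob_wave(logprobs)], axis=1)
contextual = self_attention(features)

# Score each sentence by pooling its contextual features: unusually
# high-probability spans are candidates for LLM-modified sentences.
sentences = [(0, 4), (4, 8), (8, 10)]  # hypothetical sentence boundaries
scores = [contextual[s:e].mean() for s, e in sentences]
```

Because the attention output is a convex combination of the input rows, the contextual features stay inside the range of the raw features; a real detector would add learned projections and a classification head on top.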
Watermarking LLMs with Weight Quantization
Linyang Li | Botian Jiang | Pengyu Wang | Ke Ren | Hang Yan | Xipeng Qiu
Findings of the Association for Computational Linguistics: EMNLP 2023
Abuse of large language models poses high risks as they are deployed at an astonishing speed, so it is important to protect model weights from malicious usage that violates the licenses of open-source large language models. This paper proposes a novel watermarking strategy that plants watermarks in the quantization process of large language models, without pre-defined triggers during inference. The watermark works when the model is used in fp32 mode and remains hidden when the model is quantized to int8; in this way, users can only run inference with the model, absent further supervised fine-tuning of the model. We successfully plant the watermark into open-source large language model weights, including GPT-Neo and LLaMA. We hope our proposed method can provide a potential direction for protecting model weights in the era of large language model applications.
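The fp32-visible / int8-hidden property can be made concrete with a toy steganographic sketch: information placed inside the rounding interval of symmetric int8 quantization survives in the fp32 weights but vanishes after quantization. This is emphatically not the paper's method (which plants the watermark during the quantization process itself, without per-weight bit encoding); all function names, the scale, and the bit-in-residual scheme are my own illustrative assumptions.

```python
import numpy as np

def quantize_int8(w, scale):
    """Symmetric int8 quantization: int8 = clip(round(w / scale))."""
    return np.clip(np.round(w / scale), -127, 127).astype(np.int8)

def embed_bits(w, bits, scale):
    """Nudge each fp32 weight within its int8 rounding bucket so the sign of
    the residual encodes one bit; the int8 quantization is left unchanged."""
    q = np.round(w / scale)
    eps = 0.25 * scale  # stays safely inside the +/- 0.5*scale bucket
    return q * scale + np.where(bits == 1, eps, -eps)

def extract_bits(w_marked, scale):
    """Recover the bits from the fp32 residuals around the quantization grid."""
    residual = w_marked - np.round(w_marked / scale) * scale
    return (residual > 0).astype(int)

rng = np.random.default_rng(0)
scale = 0.02
w = rng.normal(0.0, 0.1, size=64)        # toy weight vector
bits = rng.integers(0, 2, size=64)       # toy watermark payload

w_marked = embed_bits(w, bits, scale)
```

Here `extract_bits(w_marked, scale)` recovers the payload from the fp32 weights, while `quantize_int8(w_marked, scale)` is bit-identical to `quantize_int8(w, scale)`, i.e. the mark disappears in int8 mode.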
PerturbScore: Connecting Discrete and Continuous Perturbations in NLP
Linyang Li | Ke Ren | Yunfan Shao | Pengyu Wang | Xipeng Qiu
Findings of the Association for Computational Linguistics: EMNLP 2023
With the rapid development of neural network applications in NLP, the problem of model robustness is gaining more attention. Unlike in computer vision, the discrete nature of text makes it more challenging to explore robustness in NLP. In this paper, we therefore aim to connect discrete perturbations with continuous perturbations, so that such connections can serve as a bridge for understanding discrete perturbations in NLP models. Specifically, we first explore how to connect and measure the correlation between discrete and continuous perturbations. We then design a regression task, PerturbScore, to learn this correlation automatically. Experimental results show that we can build a connection between discrete and continuous perturbations and use the proposed PerturbScore to learn it, surpassing previous methods for measuring discrete perturbations. Further, the proposed PerturbScore generalizes well across datasets and perturbation methods, indicating that it can serve as a powerful tool for studying model robustness in NLP.
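The idea of regressing from a discrete perturbation to its continuous-space effect can be sketched in a few lines of numpy. This toy replaces the paper's NLP models with a mean-pooled random embedding table, measures the continuous output shift caused by swapping k tokens, and fits a linear regressor from k to that shift; the embedding table, pooling "model", and linear fit are all illustrative assumptions, not the actual PerturbScore setup.

```python
import numpy as np

rng = np.random.default_rng(1)
V, d, T = 1000, 32, 20            # toy vocab size, embedding dim, sentence length
E = rng.normal(size=(V, d))       # random embedding table standing in for a model
sent = rng.integers(0, V, size=T)

def discrete_shift(k):
    """Continuous output change of a mean-pooling 'model' after a discrete
    perturbation that swaps k tokens for random replacements."""
    pert = sent.copy()
    idx = rng.choice(T, size=k, replace=False)
    pert[idx] = rng.integers(0, V, size=k)
    return np.linalg.norm(E[pert].mean(0) - E[sent].mean(0))

# Sample (discrete perturbation size, continuous shift) pairs.
ks, shifts = [], []
for k in range(1, 6):
    for _ in range(50):
        ks.append(k)
        shifts.append(discrete_shift(k))

# A linear PerturbScore-style regressor mapping k to the continuous shift.
slope, intercept = np.polyfit(ks, shifts, 1)
```

Even in this toy, the fitted slope is positive: larger discrete perturbations correspond to larger continuous shifts, which is the correlation a learned PerturbScore would capture for real models and perturbation methods.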
Co-authors
- Pengyu Wang 3
- Linyang Li 3
- Xipeng Qiu 3
- Botian Jiang 2
- Dong Zhang 1
- Hang Yan 1
- Yunfan Shao 1