Zhenwei Tang
2024
SPIN: Sparsifying and Integrating Internal Neurons in Large Language Models for Text Classification
Difan Jiao | Yilun Liu | Zhenwei Tang | Daniel Matter | Jürgen Pfeffer | Ashton Anderson
Findings of the Association for Computational Linguistics: ACL 2024
Among the many tasks that Large Language Models (LLMs) have revolutionized is text classification. Current text classification paradigms, however, rely solely on the output of the final layer in the LLM, with the rich information contained in internal neurons largely untapped. In this study, we present SPIN: a model-agnostic framework that sparsifies and integrates internal neurons of intermediate layers of LLMs for text classification. Specifically, SPIN sparsifies internal neurons by linear probing-based salient neuron selection layer by layer, avoiding noise from unrelated neurons and ensuring efficiency. The cross-layer salient neurons are then integrated to serve as multi-layered features for the classification head. Extensive experimental results show our proposed SPIN significantly improves text classification accuracy, efficiency, and interpretability.
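To make the sparsify-and-integrate pipeline concrete, here is a minimal sketch, assuming per-layer hidden states have already been extracted from the LLM. The function names, the use of scikit-learn logistic regression as the linear probe, and the top-k-by-probe-weight salience criterion are illustrative assumptions, not the paper's released code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_salient_neurons(layer_acts, labels, k):
    """Fit a linear probe on one layer's activations and keep the k neurons
    with the largest absolute probe weights (one plausible salience measure).
    In practice the probe would be fit on a training split only."""
    probe = LogisticRegression(max_iter=1000).fit(layer_acts, labels)
    salience = np.abs(probe.coef_).max(axis=0)  # per-neuron importance
    return np.argsort(salience)[-k:]            # indices of the top-k neurons

def spin_features(hidden_states, labels, k=32):
    """hidden_states: list of (n_examples, n_neurons) arrays, one per layer.
    Sparsify each layer, then integrate the salient neurons across layers
    into a single feature matrix for the classification head."""
    selected = [select_salient_neurons(acts, labels, k) for acts in hidden_states]
    return np.concatenate(
        [acts[:, idx] for acts, idx in zip(hidden_states, selected)], axis=1
    )
```

A lightweight classification head (e.g., another logistic regression) can then be trained on the integrated features, which is what keeps the approach model-agnostic.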
2023
DiffuDetox: A Mixed Diffusion Model for Text Detoxification
Griffin Floto | Mohammad Mahdi Abdollah Pour | Parsa Farinneya | Zhenwei Tang | Ali Pesaranghader | Manasa Bharadwaj | Scott Sanner
Findings of the Association for Computational Linguistics: ACL 2023
Text detoxification is a conditional text generation task aiming to remove offensive content from toxic text. It is highly useful for online forums and social media, where offensive content is frequently encountered. Intuitively, there are diverse ways to detoxify sentences while preserving their meanings, and we can select from detoxified sentences before displaying text to users. Conditional diffusion models are particularly suitable for this task given their demonstrated higher generative diversity than existing conditional text generation models based on language models. Nonetheless, text fluency declines when they are trained with insufficient data, which is the case for this task. In this work, we propose DiffuDetox, a mixed conditional and unconditional diffusion model for text detoxification. The conditional model takes toxic text as the condition and reduces its toxicity, yielding a diverse set of detoxified sentences. The unconditional model is trained to recover the input text, which allows the introduction of additional fluent text for training and thus ensures text fluency. Extensive experimental results and in-depth analysis demonstrate the effectiveness of our proposed DiffuDetox.
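The mixed objective can be sketched as a single training step that randomly switches between the two branches. This is a minimal sketch assuming a DDPM-style denoiser over continuous text embeddings with a generic `model(x_t, t, cond)` signature; the noise schedule, the mixing probability `p_uncond`, and the noise-prediction loss are illustrative assumptions rather than the paper's exact implementation.

```python
import random
import torch
import torch.nn.functional as F

# Standard DDPM-style linear noise schedule (an assumption; the paper's
# exact schedule may differ).
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def add_noise(x0, noise, t):
    """Forward diffusion q(x_t | x_0) on (batch, seq, dim) text embeddings."""
    a = alphas_cumprod[t].sqrt().view(-1, 1, 1)
    s = (1.0 - alphas_cumprod[t]).sqrt().view(-1, 1, 1)
    return a * x0 + s * noise

def training_step(model, pair_batch, fluent_batch, p_uncond=0.5):
    """One mixed step: with probability p_uncond train the unconditional
    branch to recover extra fluent text, otherwise train the conditional
    branch on (toxic -> detoxified) pairs."""
    if random.random() < p_uncond:
        x0, cond = fluent_batch, None      # None marks the unconditional mode
    else:
        toxic, detox = pair_batch
        x0, cond = detox, toxic            # denoise toward the detoxified text
    t = torch.randint(0, len(betas), (x0.size(0),))
    noise = torch.randn_like(x0)
    x_t = add_noise(x0, noise, t)
    # The denoiser predicts the injected noise; model(x_t, t, cond) stands in
    # for any conditional text-diffusion backbone.
    return F.mse_loss(model(x_t, t, cond), noise)
```

Because the unconditional branch simply recovers its input, any corpus of fluent text can be mixed into training, which is what preserves fluency when toxic-detoxified pairs are scarce.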