Liu Xiao
2025
Cracking the Code: Enhancing Implicit Hate Speech Detection through Coding Classification
Lu Wei | Liangzhi Li | Tong Xiang | Liu Xiao | Noa Garcia
Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025)
The internet has become a hotspot for hate speech (HS), threatening societal harmony and individual well-being. While automatic detection methods perform well in identifying explicit hate speech (ex-HS), they struggle with more subtle forms, such as implicit hate speech (im-HS). We tackle this problem by introducing a new taxonomy for im-HS detection, defining six encoding strategies named *codetypes*. We present two methods for integrating codetypes into im-HS detection: 1) prompting large language models (LLMs) directly to classify sentences based on generated responses, and 2) using LLMs as encoders with codetypes embedded during the encoding process. Experiments show that the use of codetypes improves im-HS detection in both Chinese and English datasets, validating the effectiveness of our approach across different languages.
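The first integration method described above, prompting an LLM directly and classifying from its generated response, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the codetype names are placeholders (the actual six-way taxonomy is defined in the paper), and `build_prompt`/`parse_label` are hypothetical helpers standing in for whatever chat-completion API is used.

```python
# Hypothetical sketch of codetype-guided prompting for implicit hate
# speech (im-HS) detection. The codetype labels below are placeholders,
# NOT the paper's actual taxonomy.
CODETYPES = [f"codetype_{i}" for i in range(1, 7)]  # six placeholder strategies

def build_prompt(sentence: str) -> str:
    """Embed the codetype list in a binary-classification prompt."""
    listed = "\n".join(f"- {c}" for c in CODETYPES)
    return (
        "The following encoding strategies (codetypes) are often used to "
        f"disguise implicit hate speech:\n{listed}\n\n"
        f"Sentence: {sentence}\n"
        "Does this sentence contain implicit hate speech? Answer 'yes' or 'no'."
    )

def parse_label(response: str) -> int:
    """Map the LLM's generated response to a binary im-HS label."""
    return 1 if response.strip().lower().startswith("yes") else 0
```

In use, `build_prompt(sentence)` would be sent to an LLM and the returned text passed through `parse_label`; the second method in the abstract instead feeds codetypes into the LLM when it acts as an encoder, which this sketch does not cover.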
2024
Chinese Grammatical Error Correction via Large Language Model Guided Optimization Training
Liu Xiao | Li Ying | Yu Zhengtao
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Pre-trained language model-based methods for Chinese Grammatical Error Correction (CGEC) are categorized into Seq2Seq and Seq2Edit types. However, both Seq2Seq and Seq2Edit models depend significantly on high-quality training data. Considering the strong generation and inference ability of large language models (LLMs), we propose a large language model-guided optimization training method that exploits LLMs to extract error knowledge and optimize the traditional CGEC model training process. On the one hand, we use error types and confusion sets as extra knowledge to guide LLMs to generate diverse pseudo data, thus extending the error distribution of our training data. On the other hand, LLMs are utilized to infer the predicted results from our CGEC models and obtain the re-training data, thus iteratively optimizing our pre-trained CGEC models. Experiments on two benchmark datasets show that our LLM-guided optimization method with small-scale training data can achieve results comparable to baseline models trained with large-scale data. Detailed comparison experiments demonstrate that both the early devised pseudo data and the later re-training data are extremely useful for traditional CGEC model optimization training, and can benefit from each other. We will release our code and prompts at https://github.com/SakuraAcedia/llm-cgec-got to facilitate future work.
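The confusion-set-guided pseudo-data step in the abstract can be illustrated with a small sketch. This is an assumption-laden stand-in, not the paper's method: a rule-based character substitution replaces the LLM generation step, and the confusion set below contains only two illustrative character groups.

```python
import random

# Illustrative sketch: generate (erroneous, correct) CGEC training pairs
# by swapping one character for a confusable one. In the paper an LLM,
# guided by error types and confusion sets, generates the pseudo data;
# here a deterministic substitution stands in. Confusion pairs are
# illustrative only.
CONFUSION_SET = {"的": ["地", "得"], "在": ["再"]}

def make_pseudo_pair(correct: str, rng: random.Random) -> tuple[str, str]:
    """Return an (erroneous, correct) pair by corrupting one character."""
    candidates = [i for i, ch in enumerate(correct) if ch in CONFUSION_SET]
    if not candidates:
        return correct, correct  # nothing confusable to corrupt
    i = rng.choice(candidates)
    wrong = rng.choice(CONFUSION_SET[correct[i]])
    return correct[:i] + wrong + correct[i + 1:], correct
```

Pairs produced this way would extend the error distribution of the training data; the abstract's second component, re-training on LLM-inferred predictions from the CGEC model itself, is a separate loop not shown here.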