San-Hee Park
2023
“Why do I feel offended?” - Korean Dataset for Offensive Language Identification
San-Hee Park | Kang-Min Kim | O-Joun Lee | Youjin Kang | Jaewon Lee | Su-Min Lee | SangKeun Lee
Findings of the Association for Computational Linguistics: EACL 2023
Warning: This paper contains some offensive expressions. Offensive content is an unavoidable issue on social media. Most existing offensive language identification methods rely on the compilation of labeled datasets. However, existing methods rarely consider low-resource languages that have relatively little data available for training (e.g., Korean). To address these issues, we construct a novel KOrean Dataset for Offensive Language Identification (KODOLI). KODOLI comprises more fine-grained offensiveness categories (i.e., not offensive, likely offensive, and offensive) than existing datasets. Likely offensive language refers to text with implicit offensiveness or abusive language without offensive intent. In addition, we propose two auxiliary tasks to help identify offensive language: abusive language detection and sentiment analysis. We provide experimental results for baselines on KODOLI and observe that language models struggle to identify "LIKELY" offensive statements. Quantitative results and qualitative analysis demonstrate that jointly learning offensive language, abusive language, and sentiment information improves the performance of offensive language identification.
2021
KOAS: Korean Text Offensiveness Analysis System
San-Hee Park | Kang-Min Kim | Seonhee Cho | Jun-Hyung Park | Hyuntae Park | Hyuna Kim | Seongwon Chung | SangKeun Lee
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Warning: This manuscript contains a certain level of offensive expression. As communication through social media platforms has grown immensely, the increasing prevalence of offensive language online has become a critical problem. Notably in Korea, one of the countries with the highest Internet usage, automatic detection of offensive expressions has recently drawn attention. However, the morphological richness and complex syntax of Korean cause difficulties in neural model training. Furthermore, most previous studies focus mainly on detecting abusive language, disregarding implicit offensiveness and underestimating varying degrees of intensity. To tackle these problems, we present KOAS, a system that fully exploits both contextual and linguistic features and estimates an offensiveness score for a text. We carefully designed KOAS with a multi-task learning framework and constructed a Korean dataset for offensiveness analysis from various domains.