Jihyung Moon


2021

How Does the Hate Speech Corpus Concern Sociolinguistic Discussions? A Case Study on Korean Online News Comments
Won Ik Cho | Jihyung Moon
Proceedings of the Workshop on Natural Language Processing for Digital Humanities

Social consensus has been established on the severity of online hate speech, since it not only causes mental harm to its targets but also distresses those who read it. For Korean, the definition and scope of hate speech have been discussed widely in research, but such considerations have hardly been extended to the construction of hate speech corpora. We therefore create a Korean online hate speech dataset with a concrete annotation guideline to see how real-world toxic expressions relate to sociolinguistic discussions. This inductive observation reveals that hate speech in online news comments mainly consists of social bias and toxicity. Furthermore, we check how the final corpus corresponds with the definition and scope of hate speech, and confirm that the overall procedure and outcome are in line with the sociolinguistic discussions.

VUS at IWSLT 2021: A Finetuned Pipeline for Offline Speech Translation
Yong Rae Jo | Youngki Moon | Minji Jung | Jungyoon Choi | Jihyung Moon | Won Ik Cho
Proceedings of the 18th International Conference on Spoken Language Translation (IWSLT 2021)

In this technical report, we describe the fine-tuned ASR-MT pipeline used for the IWSLT shared task. We remove less useful speech samples by checking their WER with an ASR model, and further train a wav2vec- and Transformer-based ASR module on the filtered data. In addition, we clean up errata that can interfere with the machine translation process and use the result to train a Transformer-based MT module. Finally, in the actual inference phase, we use a sentence boundary detection model trained with constrained data to properly merge fragmentary ASR outputs into full sentences, which are then post-processed using part-of-speech information. The trained MT module yields the final result. The model achieves BLEU 20.37 on the dev set and BLEU 20.9 on the test set.
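As a rough illustration of the WER-based filtering step mentioned above, the sketch below transcribes each sample with an existing ASR model and keeps only utterances whose WER against the reference falls below a cutoff. The threshold value and the `asr_model.transcribe` interface are assumptions for illustration, not the authors' exact setup.

```python
from jiwer import wer  # pip install jiwer

WER_THRESHOLD = 0.5  # assumed cutoff; the paper does not specify this value here

def filter_samples(samples, asr_model):
    """Keep (audio, reference) pairs whose ASR hypothesis is close enough to the reference."""
    kept = []
    for audio, reference in samples:
        hypothesis = asr_model.transcribe(audio)  # hypothetical ASR interface
        if wer(reference, hypothesis) < WER_THRESHOLD:
            kept.append((audio, reference))
    return kept
```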

2020

PATQUEST: Papago Translation Quality Estimation
Yujin Baek | Zae Myung Kim | Jihyung Moon | Hyunjoong Kim | Eunjeong Park
Proceedings of the Fifth Conference on Machine Translation

This paper describes the system submitted by the Papago team for the quality estimation task at WMT 2020. It proposes two key strategies for quality estimation: (1) a task-specific pretraining scheme and (2) task-specific data augmentation. The former focuses on devising pretraining signals that are closely related to the downstream task. We also present data augmentation techniques that simulate the varying levels of errors that the downstream dataset may contain. Thus, our PATQUEST models are exposed to erroneous translations in both task-specific pretraining and fine-tuning, effectively enhancing their generalization capability. Our submitted models achieve significant improvement over the baselines for Task 1 (Sentence-Level Direct Assessment; EN-DE only) and Task 3 (Document-Level Score).
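To make the idea of simulating translation errors concrete, here is a minimal sketch of synthetic corruption via random token deletion and local swaps. The specific operations and probabilities are illustrative assumptions, not the paper's actual augmentation recipe.

```python
import random

def corrupt(tokens, drop_p=0.1, swap_p=0.1):
    """Simulate translation errors on a token list.
    drop_p / swap_p are assumed values; the real augmentation may differ."""
    out = [t for t in tokens if random.random() > drop_p]  # random deletion
    for i in range(len(out) - 1):                          # occasional local swaps
        if random.random() < swap_p:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out

# Example: derive a pseudo-erroneous "translation" from a clean sentence.
print(corrupt("the cat sat on the mat".split()))
```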

BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection
Jihyung Moon | Won Ik Cho | Junbum Lee
Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media

Toxic comments on online platforms are an unavoidable social issue under the cloak of anonymity. Hate speech detection has been actively studied for languages such as English, German, and Italian, for which manually labeled corpora have been released. In this work, we first present 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. The comments are annotated for both social bias and hate speech, since the two aspects are correlated. The inter-annotator agreement, measured by Krippendorff's alpha, is 0.492 and 0.496 for the respective tasks. We provide benchmarks using CharCNN, BiLSTM, and BERT, where BERT achieves the highest score on all tasks. The models generally perform better on bias identification, since hate speech detection is a more subjective issue. Additionally, when BERT is trained with the bias label for hate speech detection, the prediction score increases, implying that bias and hate are intertwined. We make our dataset publicly available and host open competitions based on the corpus and benchmarks.
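For readers unfamiliar with the reported agreement metric, the following is a minimal sketch of how a Krippendorff's alpha score can be computed with the `krippendorff` package; the toy label matrix is invented for illustration and is not the BEEP! annotation data.

```python
import numpy as np
import krippendorff  # pip install krippendorff

# rows = annotators, columns = comments; np.nan marks a missing rating
ratings = np.array([
    [0, 1, 2, 1, 0],
    [0, 1, 2, 0, np.nan],
    [0, 2, 2, 1, 0],
])

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```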

Revisiting Round-trip Translation for Quality Estimation
Jihyung Moon | Hyunchang Cho | Eunjeong L. Park
Proceedings of the 22nd Annual Conference of the European Association for Machine Translation

Quality estimation (QE), the task of automatically evaluating translation quality without a human-translated reference, is one of the important challenges for machine translation (MT). The BLEU score of round-trip translation (RTT) was once considered as a QE method, but it proved a poor predictor of translation quality because BLEU cannot adequately capture the semantic similarity between the input and its RTT. Recently, pre-trained language models have made breakthroughs in many NLP tasks by providing semantically meaningful word and sentence embeddings. In this paper, we apply these semantic embeddings to an RTT-based QE metric. Our method achieves the highest correlations with human judgments compared to the WMT 2019 quality estimation task submissions. Additionally, we observe that with semantic-level metrics, RTT-based QE is robust to the choice of backward translation system and shows consistent performance on both SMT and NMT forward translation systems.
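A hedged sketch of the general idea follows: translate the source forward and back, then score the forward translation by the embedding similarity between the source and its round trip instead of BLEU. The sentence-embedding model name and the `translate()` interface are assumptions for illustration, not the paper's exact configuration.

```python
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed embedding model

def rtt_qe_score(source, forward_mt, backward_mt):
    """Higher source/round-trip similarity suggests a better forward translation."""
    translation = forward_mt.translate(source)        # hypothetical MT API
    round_trip = backward_mt.translate(translation)   # back into the source language
    src_emb, rtt_emb = embedder.encode([source, round_trip], convert_to_tensor=True)
    return util.cos_sim(src_emb, rtt_emb).item()
```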