2024
Ignore Me But Don’t Replace Me: Utilizing Non-Linguistic Elements for Pretraining on the Cybersecurity Domain
Eugene Jang | Jian Cui | Dayeon Yim | Youngjin Jin | Jin-Woo Chung | Seungwon Shin | Yongjae Lee
Findings of the Association for Computational Linguistics: NAACL 2024
Cybersecurity information is often technically complex and relayed through unstructured text, making automation of cyber threat intelligence highly challenging. For such text domains that involve high levels of expertise, pretraining on in-domain corpora has been a popular method for language models to obtain domain expertise. However, cybersecurity texts often contain non-linguistic elements (such as URLs and hash values) that may be unsuitable for established pretraining methodologies. Previous work in other domains has removed or filtered such text as noise, but the effectiveness of these methods has not been investigated, especially in the cybersecurity domain. We experiment with different pretraining methodologies to account for non-linguistic elements (NLEs) and evaluate their effectiveness through downstream tasks and probing tasks. Our proposed strategy, a combination of selective MLM and joint training of NLE token classification, outperforms the commonly taken approach of replacing NLEs. We use our domain-customized methodology to train CyBERTuned, a cybersecurity domain language model that outperforms other cybersecurity PLMs on most tasks.
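The following is a minimal, illustrative PyTorch sketch (not the authors' released code) of the two ideas named in the abstract: an MLM objective that never masks non-linguistic-element (NLE) tokens such as URLs or hashes, and a jointly trained NLE token-classification head. The base model name, the nle_mask input, and the equal loss weighting are assumptions made for illustration.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # assumed base model
encoder = AutoModel.from_pretrained("bert-base-uncased")
mlm_head = nn.Linear(encoder.config.hidden_size, tokenizer.vocab_size)
nle_head = nn.Linear(encoder.config.hidden_size, 2)              # 0 = ordinary token, 1 = NLE token

def selective_mlm_inputs(input_ids, nle_mask, mask_prob=0.15):
    # Mask ordinary tokens for MLM; leave NLE tokens untouched ("ignore, don't replace").
    labels = input_ids.clone()
    candidates = (~nle_mask.bool()) & (input_ids != tokenizer.pad_token_id)
    masked = torch.bernoulli(torch.full(input_ids.shape, mask_prob) * candidates.float()).bool()
    labels[~masked] = -100                                       # MLM loss only on masked positions
    corrupted = input_ids.clone()
    corrupted[masked] = tokenizer.mask_token_id
    return corrupted, labels

def joint_loss(input_ids, attention_mask, nle_mask):
    corrupted, mlm_labels = selective_mlm_inputs(input_ids, nle_mask)
    hidden = encoder(corrupted, attention_mask=attention_mask).last_hidden_state
    mlm_loss = nn.functional.cross_entropy(
        mlm_head(hidden).view(-1, tokenizer.vocab_size),
        mlm_labels.view(-1), ignore_index=-100)
    nle_loss = nn.functional.cross_entropy(
        nle_head(hidden).view(-1, 2), nle_mask.view(-1).long())
    return mlm_loss + nle_loss                                   # assumed equal weighting

Here nle_mask is assumed to be a 0/1 tensor marking which subword positions belong to an NLE (e.g., produced by regex-matching URLs and hash values before tokenization); how NLEs are detected is an implementation choice, not specified by the abstract.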
2023
DarkBERT: A Language Model for the Dark Side of the Internet
Youngjin Jin | Eugene Jang | Jian Cui | Jin-Woo Chung | Yongjae Lee | Seungwon Shin
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent research has suggested that there are clear differences in the language used in the Dark Web compared to that of the Surface Web. As studies on the Dark Web commonly require textual analysis of the domain, language models specific to the Dark Web may provide valuable insights to researchers. In this work, we introduce DarkBERT, a language model pretrained on Dark Web data. We describe the steps taken to filter and compile the text data used to train DarkBERT to combat the extreme lexical and structural diversity of the Dark Web that may be detrimental to building a proper representation of the domain. We evaluate DarkBERT and its vanilla counterpart along with other widely used language models to validate the benefits that a Dark Web domain-specific model offers in various use cases. Our evaluations show that DarkBERT outperforms current language models and may serve as a valuable resource for future research on the Dark Web.
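As a rough illustration only (not the paper's actual preprocessing pipeline), the sketch below shows the kind of corpus filtering a Dark Web pretraining effort might apply before training: whitespace normalization, a minimum-length filter, and hash-based deduplication. The thresholds and function names are assumptions.

import hashlib
import re

def filter_corpus(pages, min_tokens=64):
    # Generic cleaning for noisy web text: normalize whitespace, drop very short pages,
    # and remove exact duplicates by content hash.
    seen, kept = set(), []
    for text in pages:
        text = re.sub(r"\s+", " ", text).strip()
        if len(text.split()) < min_tokens:
            continue                              # too short to be useful for pretraining
        digest = hashlib.sha256(text.lower().encode("utf-8")).hexdigest()
        if digest in seen:
            continue                              # duplicate page
        seen.add(digest)
        kept.append(text)
    return kept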
2022
Shedding New Light on the Language of the Dark Web
Youngjin Jin | Eugene Jang | Yongjae Lee | Seungwon Shin | Jin-Woo Chung
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
The hidden nature and limited accessibility of the Dark Web, combined with the lack of public datasets in this domain, make it difficult to study its inherent characteristics such as linguistic properties. Previous work on text classification in the Dark Web domain has suggested that the use of deep neural models may be ineffective, potentially due to the linguistic differences between the Dark and Surface Webs. However, not much work has been done to uncover the linguistic characteristics of the Dark Web. This paper introduces CoDA, a publicly available Dark Web dataset consisting of 10,000 web documents tailored towards text-based Dark Web analysis. By leveraging CoDA, we conduct a thorough linguistic analysis of the Dark Web and examine the textual differences between the Dark Web and the Surface Web. We also assess the performance of various methods of Dark Web page classification. Finally, we compare CoDA with an existing public Dark Web dataset and evaluate their suitability for various use cases.
2018
Feature Attention Network: Interpretable Depression Detection from Social Media
Hoyun Song | Jinseon You | Jin-Woo Chung | Jong C. Park
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation
2017
Extraction of Gene-Environment Interaction from the Biomedical Literature
Jinseon You | Jin-Woo Chung | Wonsuk Yang | Jong C. Park
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Genetic information in the literature has been extensively studied to discover the etiology of diseases. As gene-disease relations are sensitive to external factors, identifying such factors is important for studying a disease. Environmental influences, usually called Gene-Environment interactions (GxE), have been considered important factors and have been extensively researched in biology. Nevertheless, there is still a lack of systems for automatic GxE extraction from the biomedical literature due to new challenges: (1) there are no preprocessing tools and corpora for GxE, (2) expressions of GxE are often quite implicit, and (3) document-level comprehension is usually required. We propose to overcome these challenges with neural network models and show that a modified sequence-to-sequence model with a static RNN decoder achieves good performance in GxE recognition.
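As a rough illustration of the modeling direction mentioned above, the sketch below shows a generic encoder-decoder tagger in PyTorch. It does not reproduce the paper's specific "static RNN decoder" modification; the vocabulary size, label set, and dimensions are assumptions for illustration.

import torch
import torch.nn as nn

class Seq2SeqTagger(nn.Module):
    def __init__(self, vocab_size=30000, num_labels=3, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.decoder = nn.GRU(2 * hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_labels)

    def forward(self, token_ids):
        enc_out, _ = self.encoder(self.embed(token_ids))   # contextual encoding of the document
        dec_out, _ = self.decoder(enc_out)                  # decode over the encoded sequence
        return self.out(dec_out)                            # per-token label scores

logits = Seq2SeqTagger()(torch.randint(0, 30000, (2, 50)))  # shape (2, 50, 3)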
2015
CoMAGD: Annotation of Gene-Depression Relations
Rize Jin | Jinseon You | Jin-Woo Chung | Hee-Jin Lee | Maria Wolters | Jong Park
Proceedings of BioNLP 15
Corpus annotation with a linguistic analysis of the associations between event mentions and spatial expressions
Jin-Woo Chung | Jinseon You | Jong C. Park
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation