Ohjoon Kwon


2025

Taxonomy and Analysis of Sensitive User Queries in Generative AI Search System
Hwiyeol Jo | Taiwoo Park | Hyunwoo Lee | Nayoung Choi | Changbong Kim | Ohjoon Kwon | Donghyeon Jeon | Eui-Hyeon Lee | Kyoungho Shin | Sun Suk Lim | Kyungmi Kim | Jihye Lee | Sun Kim
Findings of the Association for Computational Linguistics: NAACL 2025

Although there has been growing interest among industries in integrating generative LLMs into their services, limited experience and scarce resources act as barriers to launching and operating large-scale LLM-based services. In this paper, we share our experience in developing and operating generative AI models within a national-scale search engine, with a specific focus on the sensitivity of user queries. We propose a taxonomy for sensitive search queries, outline our approaches, and present a comprehensive analysis of sensitive queries from actual users. We believe that our experience in launching generative AI search systems can contribute to lowering the barrier to building generative LLM-based services.

Conflict and Overlap Classification in Construction Standards Using a Large Language Model
Seong-Jin Park | Youn-Gyu Jin | Hyun-Young Moon | Choi Bong-Hyuck | Lee Seung Hwan | Ohjoon Kwon | Kang-Min Kim
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)

Construction standards across different countries provide technical guidelines to ensure the quality and safety of buildings and facilities, with periodic revisions to accommodate advances in construction technology. However, these standards often contain overlapping or conflicting content owing to their broad scope and interdependence, complicating the revision process and creating public inconvenience. Although current expert-driven manual approaches aim to mitigate these issues, they are time-consuming, costly, and error-prone. To address these challenges, we propose conflict and overlap classification in construction standards using a large language model (COSLLM), a framework that leverages a construction domain-adapted large language model for the semantic comparison of sentences in construction standards. COSLLM utilizes a two-step reasoning process that adaptively employs chain-of-thought reasoning for the in-depth analysis of sentences suspected of overlaps or conflicts, ensuring computational and temporal efficiency while maintaining high classification accuracy. The framework achieved an accuracy of 97.9% and a macro F1-score of 0.907 in classifying real-world sentence pairs derived from Korean construction standards as overlapping, conflicting, or neutral. Furthermore, we develop and deploy a real-time web-based system powered by COSLLM to facilitate the efficient establishment and revision of construction standards.

2024

SLM as Guardian: Pioneering AI Safety with Small Language Model
Ohjoon Kwon | Donghyeon Jeon | Nayoung Choi | Gyu-Hwung Cho | Hwiyeol Jo | Changbong Kim | Hyunwoo Lee | Inho Kang | Sun Kim | Taiwoo Park
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track

Most prior safety research on large language models (LLMs) has focused on enhancing their alignment to better suit the safety requirements of their use cases. However, internalizing such safeguard features into larger models has brought challenges of higher training cost and unintended degradation of helpfulness. In this paper, we leverage a smaller LLM for both harmful query detection and safeguard response generation. We introduce our safety requirements and a taxonomy of harmfulness categories, and then propose a multi-task learning mechanism that fuses the two tasks into a single model. We demonstrate the effectiveness of our approach, achieving harmful query detection and safeguard response performance on par with or surpassing that of publicly available LLMs.

2021

Handling Out-Of-Vocabulary Problem in Hangeul Word Embeddings
Ohjoon Kwon | Dohyun Kim | Soo-Ryeon Lee | Junyoung Choi | SangKeun Lee
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Word embedding is considered an essential factor in improving the performance of various Natural Language Processing (NLP) models. However, word embeddings are generally studied on well-refined corpora and are therefore hardly applicable to real-world datasets. Notably, in Hangeul, the Korean writing system, various kinds of Out-Of-Vocabulary (OOV) words arise from typos due to its unique structure. In this paper, we propose a Hangeul word embedding model that is robust against typos while maintaining high performance. The proposed model utilizes a Convolutional Neural Network (CNN) architecture with a channel attention mechanism that learns to infer the original word embeddings. The model is trained on a dataset consisting of a mix of typos and correct words. To demonstrate the effectiveness of the proposed model, we conduct three kinds of intrinsic and extrinsic tasks. While existing embedding models fail to maintain stable performance as the noise level increases, the proposed model remains stable.