Wonjun Jang
2025
A Practical Approach for Building Production-Grade Conversational Agents with Workflow Graphs
Chiwan Park | Wonjun Jang | Daeryong Kim | Aelim Ahn | Kichang Yang | Woosung Hwang | Jihyeon Roh | Hyerin Park | Hyosun Wang | Min Seok Kim | Jihoon Kang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)
The advancement of Large Language Models (LLMs) has led to significant improvements in various service domains, including search, recommendation, and chatbot applications. However, applying state-of-the-art (SOTA) research to industrial settings presents challenges, as it requires maintaining flexible conversational abilities while also strictly complying with service-specific constraints. These are two conflicting requirements, given the probabilistic nature of LLMs. In this paper, we propose our approach to addressing this challenge and detail the strategies we employed to overcome the inherent limitations of LLMs in real-world applications. We conduct a practical case study of a conversational agent designed for the e-commerce domain, detailing our implementation workflow and optimizations. Our findings provide insights into bridging the gap between academic research and real-world application, introducing a framework for developing scalable, controllable, and reliable AI-driven agents.
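The workflow-graph idea summarized in the abstract can be pictured as a state machine in which each node constrains the LLM to a service-specific scope and the edges encode the allowed transitions between conversation states. The sketch below is a minimal illustration under assumed names (`Node`, `WorkflowGraph`, `run_turn`, and the stubbed handlers are all hypothetical), not the paper's actual implementation; the LLM call is replaced by a plain function stub.

```python
# Minimal workflow-graph agent sketch. Every name here is illustrative;
# in a real system each node's handler would call an LLM with a prompt
# scoped to that node, while the graph enforces service constraints.
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

@dataclass
class Node:
    """One conversation state with a constrained reply scope."""
    name: str
    handle: Callable[[str], str]   # produces the reply for this state
    route: Callable[[str], str]    # picks the next node from user input

@dataclass
class WorkflowGraph:
    nodes: Dict[str, Node] = field(default_factory=dict)

    def add(self, node: Node) -> None:
        self.nodes[node.name] = node

    def run_turn(self, state: str, user_msg: str) -> Tuple[str, str]:
        """Answer within the current node's scope, then transition."""
        node = self.nodes[state]
        return node.handle(user_msg), node.route(user_msg)

# Hypothetical e-commerce flow: free-form chat inside each node, but a
# fixed policy node is always reached through an explicit graph edge.
def faq_handle(msg): return f"[faq] (LLM stub) answering: {msg}"
def faq_route(msg): return "refund" if "refund" in msg else "faq"
def refund_handle(msg): return "[refund] applying the fixed refund policy"
def refund_route(msg): return "faq"

graph = WorkflowGraph()
graph.add(Node("faq", faq_handle, faq_route))
graph.add(Node("refund", refund_handle, refund_route))

state = "faq"
for msg in ["hi", "I want a refund"]:
    reply, state = graph.run_turn(state, msg)
    print(reply, "->", state)
```

The design point the sketch tries to capture is the one the abstract raises: the graph, not the model, decides which behaviors are reachable, so the probabilistic LLM can stay flexible inside a node while service constraints are enforced deterministically at the edges.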
2022
APEACH: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets
Kichang Yang | Wonjun Jang | Won Ik Cho
Findings of the Association for Computational Linguistics: EMNLP 2022
In hate speech detection, developing training and evaluation datasets across various domains is a critical issue. However, most approaches crawl social media texts and hire crowd workers to annotate the data. Following this convention often restricts the scope of pejorative expressions to a single domain and lacks generalization. Moreover, domain overlap between the training corpus and the evaluation set can overestimate prediction performance, especially when pretraining language models on a low-resource language. To alleviate these problems in Korean, we propose APEACH, which asks unspecified users to generate hate speech examples, followed by minimal post-labeling. We find that APEACH can collect useful datasets that are less sensitive to lexical overlap between the pretraining corpus and the evaluation set, thereby properly measuring model performance.
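The "minimal post-labeling" step the abstract mentions can be pictured as a simple vote-aggregation pass over crowd-generated sentences. The sketch below is an assumption-laden illustration only: the label set, the field names, and the strict-majority tie-breaking rule are hypothetical, not details taken from the paper.

```python
# Illustrative post-labeling sketch: keep each crowd-generated sentence
# whose top label strictly wins the annotator vote; discard ties.
from collections import Counter
from typing import Dict, List

def post_label(annotations: Dict[str, List[str]]) -> Dict[str, str]:
    labeled = {}
    for sentence, votes in annotations.items():
        counts = Counter(votes).most_common()
        top_label, top_count = counts[0]
        runner_up = counts[1][1] if len(counts) > 1 else 0
        if top_count > runner_up:  # no strict majority -> dropped
            labeled[sentence] = top_label
    return labeled

crowd = {
    "example sentence A": ["hate", "hate", "none"],
    "example sentence B": ["hate", "none"],  # tie -> dropped
}
print(post_label(crowd))  # {'example sentence A': 'hate'}
```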