Byeongchang Kim


2021

How Robust are Fact Checking Systems on Colloquial Claims?
Byeongchang Kim | Hyunwoo Kim | Seokhee Hong | Gunhee Kim
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Knowledge is now starting to power neural dialogue agents. At the same time, the risk of misinformation and disinformation from dialogue agents also rises. Verifying the veracity of information from formal sources is widely studied in computational fact checking. In this work, we ask: how robust are fact checking systems on claims in colloquial style? We aim to open up new discussions at the intersection of fact verification and dialogue safety. To investigate how fact checking systems behave on colloquial claims, we transfer the style of claims from FEVER (Thorne et al., 2018) into colloquialism. We find that existing fact checking systems that perform well on claims in formal style degrade significantly on colloquial claims with the same semantics. In particular, we show that document retrieval is the weakest spot in the system, vulnerable even to filler words such as “yeah” and “you know”. The document recall of the WikiAPI retriever (Hanselowski et al., 2018), which is 90.0% on FEVER, drops to 72.2% on the colloquial claims. We compare the characteristics of colloquial claims to those of claims in formal style and demonstrate the challenging issues they pose.
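The document recall figures above follow the standard FEVER-style retrieval metric: a claim counts as covered if at least one of its gold evidence pages appears among the retrieved pages. A minimal sketch of that computation, assuming a simple claim-id-to-pages mapping (the toy data below is hypothetical, not from the dataset):

```python
from typing import Dict, List, Set

def document_recall(gold_evidence: Dict[str, Set[str]],
                    retrieved: Dict[str, List[str]]) -> float:
    """Fraction of claims for which at least one gold evidence page
    was retrieved (the metric reported as 90.0% vs. 72.2% above)."""
    covered = 0
    for claim_id, gold_pages in gold_evidence.items():
        pages = set(retrieved.get(claim_id, []))
        if gold_pages & pages:  # any gold page among retrieved pages
            covered += 1
    return covered / max(len(gold_evidence), 1)

# Hypothetical toy example: a colloquial rewording of claim "c2"
# makes the retriever miss its gold page, lowering recall.
gold = {"c1": {"Barack_Obama"}, "c2": {"Seoul"}}
retrieved_formal = {"c1": ["Barack_Obama"], "c2": ["Seoul", "Korea"]}
retrieved_colloquial = {"c1": ["Barack_Obama"], "c2": ["Korea"]}
print(document_recall(gold, retrieved_formal))      # 1.0
print(document_recall(gold, retrieved_colloquial))  # 0.5
```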

Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes
Hyunwoo Kim | Byeongchang Kim | Gunhee Kim
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Empathy is a complex cognitive ability based on reasoning about others’ affective states. To better understand others and express stronger empathy in dialogues, we argue that two issues must be tackled at the same time: (i) identifying which words in the other’s utterance are the cause of their emotion and (ii) reflecting those specific words in the response generation. However, previous approaches for recognizing emotion cause words in text require sub-utterance-level annotations, which can be demanding. Taking inspiration from social cognition, we leverage a generative estimator to infer emotion cause words from utterances with no word-level labels. We also introduce a novel method based on pragmatics that makes dialogue models focus on targeted words in the input during generation. Our method is applicable to any dialogue model on the fly, with no additional training. We show that our approach improves multiple best-performing dialogue agents at generating more focused empathetic responses, in terms of both automatic and human evaluation.
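One way to read the label-free cause-word idea above is to score each word by how much the estimated probability of the partner's emotion drops when that word is removed. The sketch below follows that reading under assumed interfaces; `emotion_log_prob` is a placeholder for a generative emotion estimator, not the paper's actual component:

```python
from typing import Callable, List, Tuple

def cause_word_scores(words: List[str],
                      emotion: str,
                      emotion_log_prob: Callable[[List[str], str], float]
                      ) -> List[Tuple[str, float]]:
    """Score each word by the drop in log p(emotion | utterance) when
    the word is removed; higher score = more likely an emotion cause.
    `emotion_log_prob` is an assumed generative emotion estimator."""
    full = emotion_log_prob(words, emotion)
    scores = []
    for i, w in enumerate(words):
        reduced = words[:i] + words[i + 1:]
        scores.append((w, full - emotion_log_prob(reduced, emotion)))
    return sorted(scores, key=lambda x: -x[1])
```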

2020

Will I Sound Like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness
Hyunwoo Kim | Byeongchang Kim | Gunhee Kim
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

We explore the task of improving persona consistency of dialogue agents. Recent models tackling consistency often train with additional Natural Language Inference (NLI) labels or attach trained extra modules to the generative agent for maintaining consistency. However, such additional labels and training can be demanding. Also, we find even the best-performing persona-based agents are insensitive to contradictory words. Inspired by social cognition and pragmatics, we endow existing dialogue agents with public self-consciousness on the fly through an imaginary listener. Our approach, based on the Rational Speech Acts framework (Frank and Goodman, 2012), can enforce dialogue agents to refrain from uttering contradiction. We further extend the framework by learning the distractor selection, which has been usually done manually or randomly. Results on Dialogue NLI (Welleck et al., 2019) and PersonaChat (Zhang et al., 2018) dataset show that our approach reduces contradiction and improves consistency of existing dialogue models. Moreover, we show that it can be generalized to improve context-consistency beyond persona in dialogues.
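As a rough illustration of the Rational Speech Acts reweighting mentioned above: an imaginary listener estimates how strongly each candidate next token points to the agent's own persona rather than distractor personas, and the speaker's distribution is reweighted by that listener. This is a generic incremental-RSA sketch under assumed interfaces (`speaker_probs`, the persona list, and the uniform per-step prior are simplifying assumptions), not the paper's exact formulation:

```python
from typing import Callable, Dict, List

def self_conscious_probs(
        speaker_probs: Callable[[str, List[str]], Dict[str, float]],
        personas: List[str],   # personas[0] = own persona, the rest = distractors
        prefix: List[str],
        beta: float = 2.0) -> Dict[str, float]:
    """Reweight next-token probabilities with an imaginary listener:
    S1(token) ∝ S0(token | own persona) * L0(own persona | token)^beta,
    where L0 normalizes S0 over the persona set with a uniform prior
    at each step (a simplification of incremental RSA)."""
    own = personas[0]
    base = speaker_probs(own, prefix)          # S0 under the agent's own persona
    reweighted = {}
    for tok, p in base.items():
        per_persona = [speaker_probs(c, prefix).get(tok, 1e-12) for c in personas]
        listener = per_persona[0] / max(sum(per_persona), 1e-12)
        reweighted[tok] = p * (listener ** beta)
    z = max(sum(reweighted.values()), 1e-12)
    return {t: v / z for t, v in reweighted.items()}
```

Tokens that a listener would attribute to a distractor persona (i.e., likely contradictions) are down-weighted, which is the intuition behind refraining from inconsistent utterances.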

2019

AudioCaps: Generating Captions for Audios in The Wild
Chris Dongjoo Kim | Byeongchang Kim | Hyunmin Lee | Gunhee Kim
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

We explore the problem of audio captioning: generating natural language descriptions for any kind of audio in the wild, which has been surprisingly unexplored in previous research. We contribute a large-scale dataset of 46K audio clips paired with human-written captions, collected via crowdsourcing on the AudioSet dataset. Our thorough empirical studies not only show that our collected captions are indeed faithful to the audio inputs but also discover which forms of audio representation and captioning models are effective for audio captioning. From extensive experiments, we also propose two novel components that help improve audio captioning performance: a top-down multi-scale encoder and aligned semantic attention.

Abstractive Summarization of Reddit Posts with Multi-level Memory Networks
Byeongchang Kim | Hyunwoo Kim | Gunhee Kim
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

We address the problem of abstractive summarization in two directions: proposing a novel dataset and a new model. First, we collect the Reddit TIFU dataset, consisting of 120K posts from the online discussion forum Reddit. We use these informal, crowd-generated posts as the text source, in contrast with existing datasets that mostly use formal documents, such as news articles, as the source. Thus, our dataset is less prone to the biases that key sentences are usually located at the beginning of the text and that favorable summary candidates already appear in the text in similar forms. Second, we propose a novel abstractive summarization model named multi-level memory networks (MMN), equipped with multi-level memory to store information about the text at different levels of abstraction. Through quantitative evaluation and user studies via Amazon Mechanical Turk, we show that the Reddit TIFU dataset is highly abstractive and that the MMN outperforms state-of-the-art summarization models.
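A loose sketch of the "multi-level memory" idea described above, not the MMN architecture itself: memory slots are built from source representations at several granularities (e.g., word-level and sentence-level encodings) and the decoder attends over all of them jointly. The tensor shapes and dot-product attention here are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def multi_level_attend(query: torch.Tensor,     # (batch, d) decoder state
                       word_mem: torch.Tensor,  # (batch, n_words, d)
                       sent_mem: torch.Tensor   # (batch, n_sents, d)
                       ) -> torch.Tensor:
    """Attend over memory built at two levels of abstraction
    (word-level and sentence-level) and return a context vector."""
    memory = torch.cat([word_mem, sent_mem], dim=1)               # (batch, n_words+n_sents, d)
    scores = torch.bmm(memory, query.unsqueeze(-1)).squeeze(-1)   # dot-product attention
    weights = F.softmax(scores, dim=-1)
    return torch.bmm(weights.unsqueeze(1), memory).squeeze(1)     # (batch, d)
```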

2004

Using Higher-level Linguistic Knowledge for Speech Recognition Error Correction in a Spoken Q/A Dialog
Minwoo Jeong | Byeongchang Kim | Gary Geunbae Lee
Proceedings of the HLT-NAACL 2004 Workshop on Spoken Language Understanding for Conversational Systems and Higher Level Linguistic Information for Speech Processing

2001

Automatic Corpus-based Tone Prediction using K-ToBI Representation
Jin-Seok Lee | Byeongchang Kim | Gary Geunbae Lee
Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing

2000

Decision-Tree based Error Correction for Statistical Phrase Break Prediction in Korean
Byeongchang Kim | Geunbae Lee
COLING 2000 Volume 2: The 18th International Conference on Computational Linguistics

POSCAT: A Morpheme-based Speech Corpus Annotation Tool
Byeongchang Kim | Jin-seok Lee | Jeongwon Cha | Geunbae Lee
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)

1998

Unlimited Vocabulary Grapheme to Phoneme Conversion for Korean TTS
Byeongchang Kim | WonIl Lee | Geunbae Lee | Jong-Hyeok Lee
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1

Unlimited Vocabulary Grapheme to Phoneme Conversion for Korean TTS
Byeongchang Kim | WonIl Lee | Geunbae Lee | Jong-Hyeok Lee
COLING 1998 Volume 1: The 17th International Conference on Computational Linguistics