Hyokun Yun


2022

MICO: Selective Search with Mutual Information Co-training
Zhanyu Wang | Xiao Zhang | Hyokun Yun | Choon Hui Teo | Trishul Chilimbi
Proceedings of the 29th International Conference on Computational Linguistics

In contrast to traditional exhaustive search, selective search first clusters documents into several groups, so that a query is evaluated exhaustively against only one group or a few groups rather than the entire collection. Selective search is designed to reduce latency and computation in modern large-scale search systems. In this study, we propose MICO, a Mutual Information CO-training framework for selective search that requires minimal supervision beyond search logs. After training, MICO not only clusters the documents but also routes unseen queries to the relevant clusters for efficient retrieval. In our empirical experiments, MICO significantly improves multiple selective-search metrics and outperforms a number of existing competitive baselines.
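A minimal sketch of the selective-search pipeline the abstract describes: partition the corpus offline, then route each incoming query to its nearest cluster and search exhaustively only there. MICO itself learns the clustering and routing jointly via mutual-information co-training on search logs; below, plain TF-IDF features with k-means stand in for the learned encoders, and the corpus and names are illustrative assumptions, not the paper's implementation.

    # Offline: embed and cluster the corpus (TF-IDF + k-means as stand-ins
    # for MICO's learned document/query encoders).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = ["cheap flights to tokyo", "hotel deals in paris",      # toy corpus
            "python pandas tutorial", "numpy array broadcasting"]

    vectorizer = TfidfVectorizer()
    doc_vecs = vectorizer.fit_transform(docs)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(doc_vecs)

    def selective_search(query, top_k=2):
        q = vectorizer.transform([query])
        cluster = km.predict(q)[0]                   # route query to one cluster
        members = np.where(km.labels_ == cluster)[0]
        # Exhaustive scoring happens only within the selected cluster.
        sims = cosine_similarity(q, doc_vecs[members]).ravel()
        return [docs[members[i]] for i in sims.argsort()[::-1][:top_k]]

    print(selective_search("flights to paris"))      # searches one cluster only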

2019

Robustness to Capitalization Errors in Named Entity Recognition
Sravan Bodapati | Hyokun Yun | Yaser Al-Onaizan
Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)

Robustness to capitalization errors is a highly desirable characteristic of named entity recognizers, yet we find standard models for the task are surprisingly brittle to such noise. Existing methods for improving robustness to this noise completely discard the given orthographic information, which significantly degrades their performance on well-formed text. We propose a simple alternative approach based on data augmentation, which allows the model to learn to utilize or ignore orthographic information depending on its usefulness in the context. It achieves competitive robustness to capitalization errors while making a negligible compromise to performance on well-formed text and significantly improving generalization on noisy user-generated text. Our experiments clearly and consistently validate this claim across different types of machine learning models, languages, and dataset sizes.
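As a hedged illustration of the data-augmentation idea, one could train on noisy-case copies of each sentence alongside the well-formed original, so the tagger learns when capitalization is informative and when to ignore it. The specific augmentations below (a lowercased copy and a randomly upper-cased copy) are assumptions made for this sketch, not necessarily the exact recipe in the paper.

    import random

    def augment_casing(tokens, tags, seed=0):
        """Yield the original token sequence plus noisy-case variants,
        keeping the NER tags unchanged."""
        rng = random.Random(seed)
        yield tokens, tags                               # well-formed original
        yield [t.lower() for t in tokens], tags          # capitalization removed
        yield [t.upper() if rng.random() < 0.5 else t    # random case noise
               for t in tokens], tags

    sent = ["Barack", "Obama", "visited", "Paris"]
    tags = ["B-PER", "I-PER", "O", "B-LOC"]
    for toks, _ in augment_casing(sent, tags):
        print(toks)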

2017

Deep Active Learning for Named Entity Recognition
Yanyao Shen | Hyokun Yun | Zachary Lipton | Yakov Kronrod | Animashree Anandkumar
Proceedings of the 2nd Workshop on Representation Learning for NLP

Deep neural networks have advanced the state of the art in named entity recognition. However, under typical training procedures, advantages over classical methods emerge only with large datasets. As a result, deep learning is employed only when large public datasets or a large budget for manually labeling data is available. In this work, we show otherwise: by combining deep learning with active learning, we can outperform classical methods even with a significantly smaller amount of training data.
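A sketch of the kind of loop the abstract describes: uncertainty-sampling active learning wrapped around a trainable tagger, retraining after each round of annotation. The hooks train_fn, confidence_fn, and oracle_fn are hypothetical placeholders for the deep NER model, its sentence-level confidence score, and the human annotator; the paper's own acquisition strategy may differ.

    import numpy as np

    def active_learning_loop(labelled, unlabelled, train_fn, confidence_fn,
                             oracle_fn, rounds=5, batch=100):
        for _ in range(rounds):
            model = train_fn(labelled)                      # retrain on current labels
            scores = np.array([confidence_fn(model, s) for s in unlabelled])
            picks = set(scores.argsort()[:batch].tolist())  # least-confident sentences
            labelled = labelled + [(s, oracle_fn(s))        # send picks for annotation
                                   for i, s in enumerate(unlabelled) if i in picks]
            unlabelled = [s for i, s in enumerate(unlabelled) if i not in picks]
        return train_fn(labelled)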

2016

WordRank: Learning Word Embeddings via Robust Ranking
Shihao Ji | Hyokun Yun | Pinar Yanardag | Shin Matsushima | S. V. N. Vishwanathan
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing