Michelle Yuan
2022
Adapting Coreference Resolution Models through Active Learning
Michelle Yuan | Patrick Xia | Chandler May | Benjamin Van Durme | Jordan Boyd-Graber
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Neural coreference resolution models trained on one dataset may not transfer to new, low-resource domains. Active learning mitigates this problem by sampling a small subset of data for annotators to label. While active learning is well-defined for classification tasks, its application to coreference resolution is neither well-defined nor fully understood. This paper explores how to actively label coreference, examining sources of model uncertainty and document reading costs. We compare uncertainty sampling strategies and their advantages through thorough error analysis. In both synthetic and human experiments, labeling spans within the same document is more effective than annotating spans across documents. The findings contribute to a more realistic development of coreference resolution models.
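As an illustration only (not the paper's released code), a minimal sketch of uncertainty sampling over candidate spans within a single document is shown below. The entropy-over-antecedents score, the `antecedent_logits_per_span` input, and the function names are assumptions made for this example.

```python
import torch

def span_entropy(antecedent_logits: torch.Tensor) -> float:
    """Entropy of a span's antecedent distribution: higher means the model
    is less certain which earlier span (or 'no antecedent') it links to."""
    probs = torch.softmax(antecedent_logits, dim=-1)
    return float(-(probs * torch.log(probs + 1e-12)).sum())

def select_spans_in_document(antecedent_logits_per_span: dict, budget: int = 10):
    """Pick the `budget` most uncertain spans from ONE document, reflecting
    the finding that labeling spans within the same document is more
    effective than spreading annotations across documents."""
    scored = [(span_id, span_entropy(logits))
              for span_id, logits in antecedent_logits_per_span.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [span_id for span_id, _ in scored[:budget]]
```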
2020
Interactive Refinement of Cross-Lingual Word Embeddings
Michelle Yuan | Mozhi Zhang | Benjamin Van Durme | Leah Findlater | Jordan Boyd-Graber
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Cross-lingual word embeddings transfer knowledge between languages: models trained on high-resource languages can predict in low-resource languages. We introduce CLIME, an interactive system to quickly refine cross-lingual word embeddings for a given classification problem. First, CLIME ranks words by their salience to the downstream task. Then, users mark similarity between keywords and their nearest neighbors in the embedding space. Finally, CLIME updates the embeddings using the annotations. We evaluate CLIME on identifying health-related text in four low-resource languages: Ilocano, Sinhalese, Tigrinya, and Uyghur. Embeddings refined by CLIME capture more nuanced word semantics and have higher test accuracy than the original embeddings. CLIME often improves accuracy faster than an active learning baseline and can be easily combined with active learning to improve results.
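A hedged sketch of the interactive loop the abstract describes (find a keyword's nearest neighbors, collect similar/dissimilar judgments, nudge the embeddings) is below. The pull/push update rule and all function names are simplifications assumed for illustration, not CLIME's actual implementation.

```python
import numpy as np

def nearest_neighbors(emb: np.ndarray, word_id: int, k: int = 5):
    """Cosine nearest neighbors of one word in the embedding matrix."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed[word_id]
    order = np.argsort(-sims)
    return [i for i in order if i != word_id][:k]

def apply_feedback(emb: np.ndarray, keyword: int, neighbor: int,
                   similar: bool, lr: float = 0.1) -> np.ndarray:
    """Pull a marked-similar neighbor toward the keyword and push a
    marked-dissimilar one away (a stand-in for the embedding update)."""
    direction = emb[keyword] - emb[neighbor]
    emb[neighbor] += lr * direction if similar else -lr * direction
    return emb
```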
Cold-start Active Learning through Self-supervised Language Modeling
Michelle Yuan | Hsuan-Tien Lin | Jordan Boyd-Graber
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Active learning strives to reduce annotation costs by choosing the most critical examples to label. Typically, the active learning strategy is contingent on the classification model. For instance, uncertainty sampling depends on poorly calibrated model confidence scores. In the cold-start setting, active learning is impractical because of model instability and data scarcity. Fortunately, modern NLP provides an additional source of information: pre-trained language models. The pre-training loss can find examples that surprise the model and should be labeled for efficient fine-tuning. Therefore, we treat the language modeling loss as a proxy for classification uncertainty. With BERT, we develop a simple strategy based on the masked language modeling loss that minimizes labeling costs for text classification. Compared to other baselines, our approach reaches higher accuracy within fewer sampling iterations and less computation time.
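A minimal sketch of the idea of ranking an unlabeled pool by masked language modeling loss is shown below, using Hugging Face's `AutoModelForMaskedLM`. Scoring every token against itself without masking is a simplification assumed here for brevity; the paper's actual sampling strategy may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed setup: a pre-trained BERT checkpoint and a pool of unlabeled texts.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def mlm_loss(text: str) -> float:
    """Average MLM cross-entropy when each token is scored against itself
    (no masking), used here as a rough surprisal score for the example."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return float(out.loss)

def cold_start_sample(unlabeled_texts, budget: int = 100):
    """Select the texts the pre-trained language model finds most surprising."""
    return sorted(unlabeled_texts, key=mlm_loss, reverse=True)[:budget]
```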
Co-authors
- Jordan Boyd-Graber 3
- Benjamin Van Durme 2
- Mozhi Zhang 1
- Leah Findlater 1
- Hsuan-Tien Lin 1