Hadrien Glaude
2021
Meta-Learning for Few-Shot Named Entity Recognition
Cyprien de Lichy | Hadrien Glaude | William Campbell
Proceedings of the 1st Workshop on Meta Learning and Its Applications to Natural Language Processing
Meta-learning has recently been proposed to learn models and algorithms that can generalize from a handful of examples. However, applications to structured prediction and textual tasks pose challenges for meta-learning algorithms. In this paper, we apply two meta-learning algorithms, Prototypical Networks and Reptile, to few-shot Named Entity Recognition (NER), including a method for incorporating language model pre-training and Conditional Random Fields (CRF). We propose a task generation scheme for converting classical NER datasets into the few-shot setting, for both training and evaluation. Using three public datasets, we show these meta-learning algorithms outperform a reasonable fine-tuned BERT baseline. In addition, we propose a novel combination of Prototypical Networks and Reptile.
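The Prototypical Networks approach mentioned in the abstract classifies a query example by comparing its embedding to per-class "prototypes", each computed as the mean embedding of that class's support examples. A minimal sketch of that idea (not the paper's implementation; the embeddings here are plain NumPy vectors standing in for encoder outputs):

```python
import numpy as np

def prototypes(support_emb, support_labels):
    """Class prototype = mean of the support embeddings for each class."""
    classes = np.unique(support_labels)
    protos = np.stack([support_emb[support_labels == c].mean(axis=0)
                       for c in classes])
    return classes, protos

def classify(query_emb, classes, protos):
    """Assign each query to the class of its nearest prototype
    (squared Euclidean distance, as in Prototypical Networks)."""
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]
```

In the few-shot NER setting, the same mechanism is applied per token embedding rather than per sentence.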
2020
Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems
William M. Campbell | Alex Waibel | Dilek Hakkani-Tur | Timothy J. Hazen | Kevin Kilgour | Eunah Cho | Varun Kumar | Hadrien Glaude
Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems
2019
A Closer Look At Feature Space Data Augmentation For Few-Shot Intent Classification
Varun Kumar | Hadrien Glaude | Cyprien de Lichy | William Campbell
Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)
New conversation topics and functionalities are constantly being added to conversational AI agents like Amazon Alexa and Apple Siri. As data collection and annotation are not scalable and are often costly, only a handful of examples for the new functionalities are available, which results in poor generalization performance. We formulate this as a Few-Shot Integration (FSI) problem, in which a few examples are used to introduce a new intent. In this paper, we study six feature space data augmentation methods to improve classification performance in the FSI setting, in combination with both supervised and unsupervised representation learning methods such as BERT. Through realistic experiments on two public conversational datasets, SNIPS and the Facebook Dialog corpus, we show that data augmentation in feature space provides an effective way to improve intent classification performance in the few-shot setting beyond traditional transfer learning approaches. In particular, we show that (a) upsampling in latent space is a competitive baseline for feature space augmentation, and (b) adding the difference between two examples to a new example is a simple yet effective data augmentation method.
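The augmentation method the abstract highlights, adding the difference between two examples to another example, can be sketched as follows. This is an illustrative reading of that one sentence, not the paper's code; `class_feats` is assumed to hold the encoder feature vectors of a single intent class:

```python
import numpy as np

def delta_augment(class_feats, n_new, rng=None):
    """Feature-space augmentation: synthesize new vectors as
    x_new = x_i + (x_j - x_k), with all three examples drawn
    from the same class, so the added delta stays within-class."""
    rng = rng if rng is not None else np.random.default_rng(0)
    idx = rng.integers(len(class_feats), size=(n_new, 3))
    i, j, k = idx.T
    return class_feats[i] + class_feats[j] - class_feats[k]
```

The synthetic vectors are then mixed into the few available real examples when training the intent classifier.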