Steven Rennie
2021
CNNBiF: CNN-based Bigram Features for Named Entity Recognition
Chul Sung | Vaibhava Goel | Etienne Marcheret | Steven Rennie | David Nahamoo
Findings of the Association for Computational Linguistics: EMNLP 2021
Transformer models fine-tuned with a sequence labeling objective have become the dominant choice for named entity recognition tasks. However, a self-attention mechanism with unconstrained length can fail to fully capture local dependencies, particularly when training data is limited. In this paper, we propose a novel joint training objective which better captures the semantics of words corresponding to the same entity. By augmenting the training objective with a group-consistency loss component, we enhance our ability to capture local dependencies while still enjoying the advantages of the unconstrained self-attention mechanism. On the CoNLL2003 dataset, our method achieves a test F1 of 93.98 with a single transformer model. More importantly, our fine-tuned CoNLL2003 model displays significant gains in generalization to out-of-domain datasets: on the OntoNotes subset we achieve an F1 of 72.67, which is 0.49 points absolute better than the baseline, and on the WNUT16 set an F1 of 68.22, a gain of 0.48 points. Furthermore, on the WNUT17 dataset we achieve an F1 of 55.85, yielding a 2.92 point absolute improvement.
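As a rough illustration of the idea (not the paper's exact formulation), the sketch below adds a hypothetical group-consistency penalty to the standard token-level cross-entropy: adjacent tokens inside the same gold entity are encouraged to produce similar label distributions. The function names, the symmetric-KL form of the penalty, and the weight `lambda_gc` are all assumptions made for illustration.

```python
# Hedged sketch of a joint NER objective: token-level cross-entropy plus
# a "group-consistency" penalty. The exact loss in the paper is not
# reproduced here; this only illustrates encouraging adjacent tokens
# inside the same gold entity to agree on their label distributions.

import torch
import torch.nn.functional as F

def group_consistency_loss(logits, entity_ids):
    """Penalize disagreement between adjacent tokens of the same entity.

    logits:     (seq_len, num_labels) unnormalized label scores
    entity_ids: (seq_len,) id of the gold entity each token belongs to,
                with -1 for tokens outside any entity
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    # Mask of adjacent positions that lie inside the same entity.
    same_entity = (entity_ids[1:] == entity_ids[:-1]) & (entity_ids[1:] >= 0)
    if not same_entity.any():
        return logits.new_zeros(())
    # Symmetric KL between adjacent in-entity label distributions.
    kl_fwd = F.kl_div(log_probs[1:], probs[:-1], reduction="none").sum(-1)
    kl_bwd = F.kl_div(log_probs[:-1], probs[1:], reduction="none").sum(-1)
    return ((kl_fwd + kl_bwd) * same_entity.float()).sum() / same_entity.sum()

def joint_loss(logits, labels, entity_ids, lambda_gc=0.1):
    """Cross-entropy plus the (hypothetical) group-consistency term."""
    ce = F.cross_entropy(logits, labels)
    return ce + lambda_gc * group_consistency_loss(logits, entity_ids)
```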
2020
Unsupervised Adaptation of Question Answering Systems via Generative Self-training
Steven Rennie | Etienne Marcheret | Neil Mallinar | David Nahamoo | Vaibhava Goel
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
BERT-era question answering (QA) systems have recently achieved impressive performance on several QA tasks. These systems are based on representations that have been pre-trained on self-supervised tasks such as word masking and sentence entailment, using massive amounts of data. Nevertheless, additional pre-training closer to the end-task, such as training on synthetic QA pairs, has been shown to improve performance. While recent work has considered augmenting labelled data and leveraging large unlabelled datasets to generate synthetic QA data, directly adapting to target data has received little attention. In this paper, we investigate the iterative generation of synthetic QA pairs as a way to realize unsupervised self-adaptation. Motivated by the success of the roundtrip consistency method for filtering generated QA pairs, we present iterative generalizations of the approach, which maximize an approximation of a lower bound on the probability of the adaptation data. By adapting on synthetic QA pairs generated on the target data, our method is able to improve QA systems significantly, using an order of magnitude less synthetic data and training computation than existing augmentation approaches.
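The roundtrip consistency method referenced above filters synthetic pairs by checking that a QA model, when re-answering a generated question, recovers the answer the generator proposed; the paper iterates and generalizes this idea. Below is a minimal sketch of the basic filter-and-adapt loop, assuming hypothetical `generate_qa_pairs`, `qa_model.predict`, and `fine_tune` interfaces that stand in for whatever generator and reader models are used.

```python
# Hedged sketch of roundtrip-consistency filtering for synthetic QA pairs,
# the building block the paper generalizes iteratively. All interfaces
# here are illustrative, not the authors' actual API.

def roundtrip_filter(passages, generate_qa_pairs, qa_model):
    """Keep only synthetic QA pairs the QA model can answer correctly."""
    consistent = []
    for passage in passages:
        for question, answer in generate_qa_pairs(passage):
            predicted = qa_model.predict(passage, question)
            # Roundtrip check: the re-predicted answer must match the
            # answer the generator proposed.
            if predicted.strip().lower() == answer.strip().lower():
                consistent.append((passage, question, answer))
    return consistent

def self_adapt(passages, generate_qa_pairs, qa_model, fine_tune, rounds=3):
    """Iterate: filter synthetic pairs on target data, fine-tune, repeat."""
    for _ in range(rounds):
        data = roundtrip_filter(passages, generate_qa_pairs, qa_model)
        qa_model = fine_tune(qa_model, data)
    return qa_model
```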
2010
A Power Mean Based Algorithm for Combining Multiple Alignment Tables
Sameer Maskey | Steven Rennie | Bowen Zhou
Coling 2010: Posters
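No abstract is shown for this entry, but the title names a standard construction: the power mean, which interpolates between the minimum (p → −∞, an intersection-like combination) and the maximum (p → +∞, a union-like combination) of its inputs. A minimal sketch of an element-wise power-mean combination of alignment score tables follows; it illustrates the power mean itself, not the paper's full algorithm.

```python
# Hedged sketch: combining word-alignment tables with an element-wise
# power mean. Each table is assumed to be an I x J matrix of alignment
# scores in [0, 1]. M_p tends to min as p -> -inf, to max as p -> +inf,
# and to the geometric mean as p -> 0.

import numpy as np

def power_mean(tables, p, eps=1e-12):
    """Element-wise power mean of a list of equally-shaped score matrices."""
    stacked = np.stack(tables)  # (num_tables, I, J)
    if p == 0:
        # Limit case p -> 0: the geometric mean.
        return np.exp(np.mean(np.log(stacked + eps), axis=0))
    return np.mean((stacked + eps) ** p, axis=0) ** (1.0 / p)

# Example: two aligners' soft alignment tables for a 2 x 3 sentence pair.
t1 = np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.2]])
t2 = np.array([[0.7, 0.3, 0.0], [0.1, 0.6, 0.3]])
combined = power_mean([t1, t2], p=-2.0)  # negative p: closer to intersection
```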