Lingyu Gao
2020
A Cross-Task Analysis of Text Span Representations
Shubham Toshniwal | Haoyue Shi | Bowen Shi | Lingyu Gao | Karen Livescu | Kevin Gimpel
Proceedings of the 5th Workshop on Representation Learning for NLP
Many natural language processing (NLP) tasks involve reasoning with textual spans, including question answering, entity recognition, and coreference resolution. While extensive research has focused on functional architectures for representing words and sentences, there is less work on representing arbitrary spans of text within sentences. In this paper, we conduct a comprehensive empirical evaluation of six span representation methods using eight pretrained language representation models across six tasks, including two tasks that we introduce. We find that, although some simple span representations are fairly reliable across tasks, in general the optimal span representation varies by task, and can also vary within different facets of individual tasks. We also find that the choice of span representation has a bigger impact with a fixed pretrained encoder than with a fine-tuned encoder.
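The paper evaluates six span representation methods built on pretrained encoders. As a rough illustration only (these are generic, commonly used choices, not necessarily the paper's exact six), here is a minimal PyTorch sketch of three ways to pool contextual token embeddings into a single span vector:

```python
# Illustrative sketch of common span representation methods over
# contextual token embeddings. The method set and definitions here are
# generic assumptions, not taken from the paper.
import torch

def span_representations(hidden, start, end):
    """hidden: (seq_len, dim) contextual embeddings from an encoder;
    the span covers hidden[start:end] (end exclusive)."""
    span = hidden[start:end]  # (span_len, dim)
    return {
        # Concatenate the two boundary token embeddings.
        "endpoint": torch.cat([hidden[start], hidden[end - 1]]),
        # Average all token embeddings inside the span.
        "mean_pool": span.mean(dim=0),
        # Element-wise max over the span.
        "max_pool": span.max(dim=0).values,
    }

# Toy usage with random stand-ins for encoder outputs.
hidden = torch.randn(10, 768)  # e.g., one 10-token sentence, BERT-sized
for name, vec in span_representations(hidden, start=2, end=5).items():
    print(name, vec.shape)
```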
Distractor Analysis and Selection for Multiple-Choice Cloze Questions for Second-Language Learners
Lingyu Gao | Kevin Gimpel | Arnar Jensson
Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications
We consider the problem of automatically suggesting distractors for multiple-choice cloze questions designed for second-language learners. We describe the creation of a dataset including collecting manual annotations for distractor selection. We assess the relationship between the choices of the annotators and features based on distractors and the correct answers, both with and without the surrounding passage context in the cloze questions. Simple features of the distractor and correct answer correlate with the annotations, though we find substantial benefit to additionally using large-scale pretrained models to measure the fit of the distractor in the context. Based on these analyses, we propose and train models to automatically select distractors, and measure the importance of model components quantitatively.
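The abstract notes that pretrained models help measure a distractor's fit in the passage context. A minimal sketch of one way to do this, assuming a masked language model such as BERT; the model name, the `___` blank convention, and the single-token scoring are illustrative assumptions, not the paper's exact method:

```python
# Hypothetical sketch: score how well a candidate distractor fits a
# cloze blank using a pretrained masked LM. Assumes the candidate is a
# single token in the model's vocabulary, for simplicity.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def fit_score(passage_with_blank, candidate):
    """Log-probability of `candidate` at the blank, marked '___'."""
    text = passage_with_blank.replace("___", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    # Locate the mask token in the input sequence.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos.item()]
    log_probs = torch.log_softmax(logits, dim=-1)
    return log_probs[tokenizer.convert_tokens_to_ids(candidate)].item()

# A good distractor fits the context plausibly without being correct.
passage = "She ___ the letter before mailing it."
for word in ["sealed", "signed", "ate"]:
    print(word, fit_score(passage, word))
```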
Co-authors
- Kevin Gimpel 2
- Shubham Toshniwal 1
- Haoyue Shi 1
- Bowen Shi 1
- Karen Livescu 1