Omri Koshorek
2019
On the Limits of Learning to Actively Learn Semantic Representations
Omri Koshorek | Gabriel Stanovsky | Yichu Zhou | Vivek Srikumar | Jonathan Berant
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
One of the goals of natural language understanding is to develop models that map sentences into meaning representations. However, training such models requires expensive annotation of complex structures, which hinders their adoption. Learning to actively learn (LTAL) is a recent paradigm for reducing the amount of labeled data by learning a policy that selects which samples should be labeled. In this work, we examine LTAL for learning semantic representations, such as QA-SRL. We show that even an oracle policy that is allowed to pick examples that maximize performance on the test set (and thus constitutes an upper bound on the potential of LTAL) does not substantially improve performance compared to a random policy. We investigate factors that could explain this finding and show that a distinguishing characteristic of successful applications of LTAL is the interaction between optimization and the oracle policy selection process. In successful applications of LTAL, the examples selected by the oracle policy do not substantially depend on the optimization procedure, while in our setup the stochastic nature of optimization strongly affects the examples selected by the oracle. We conclude that the current applicability of LTAL for improving data efficiency in learning semantic meaning representations is limited.
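The oracle policy described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the pool, the batch size, and the scoring function `score_fn` (standing in for "train a model on this batch and evaluate it on the test set") are all hypothetical placeholders, and the greedy selection shown here is only one plausible way to realize an oracle.

```python
import random

def random_policy(unlabeled, k):
    # Baseline: pick k examples uniformly at random.
    return random.sample(unlabeled, k)

def oracle_policy(unlabeled, k, score_fn):
    # Greedy oracle: repeatedly add the single example whose inclusion
    # maximizes score_fn(batch), a stand-in for test-set performance of
    # a model trained on that batch. An upper bound, not a real policy,
    # since it peeks at the test set.
    chosen = []
    pool = list(unlabeled)
    for _ in range(k):
        best = max(pool, key=lambda x: score_fn(chosen + [x]))
        chosen.append(best)
        pool.remove(best)
    return chosen

# Toy usage: "performance" is just the count of even numbers in the batch,
# so the oracle should select only even numbers.
pool = list(range(10))
picked = oracle_policy(pool, 3, lambda batch: sum(x % 2 == 0 for x in batch))
print(picked)
```

The paper's finding is that, for semantic representations such as QA-SRL, even this privileged selection barely beats `random_policy`.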
2018
Text Segmentation as a Supervised Learning Task
Omri Koshorek | Adir Cohen | Noam Mor | Michael Rotman | Jonathan Berant
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
Text segmentation, the task of dividing a document into contiguous segments based on its semantic structure, is a longstanding challenge in language understanding. Previous work on text segmentation focused on unsupervised methods such as clustering or graph search, due to the paucity of labeled data. In this work, we formulate text segmentation as a supervised learning problem, and present a large new dataset for text segmentation that is automatically extracted and labeled from Wikipedia. Moreover, we develop a segmentation model based on this dataset and show that it generalizes well to unseen natural text.
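Casting segmentation as supervised learning amounts to labeling each sentence with whether a segment boundary follows it. The sketch below is an illustrative encoding only, not the paper's model or dataset pipeline; the function names and the toy document are made up for this example.

```python
def segments_to_labels(doc_sentences, segment_lengths):
    # Convert gold segments into per-sentence binary labels:
    # 1 if the sentence is the last one in its segment, else 0.
    labels = []
    for length in segment_lengths:
        labels.extend([0] * (length - 1) + [1])
    assert len(labels) == len(doc_sentences)
    return labels

def labels_to_segments(sentences, labels):
    # Inverse mapping: cut the document after every sentence labeled 1.
    segments, current = [], []
    for sent, cut in zip(sentences, labels):
        current.append(sent)
        if cut:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

# Toy document of 5 sentences split into segments of lengths 2 and 3.
sents = ["s1", "s2", "s3", "s4", "s5"]
labels = segments_to_labels(sents, [2, 3])
print(labels)                          # [0, 1, 0, 0, 1]
print(labels_to_segments(sents, labels))
```

With labels in this form, any per-sentence classifier (the paper uses a neural model trained on Wikipedia section boundaries) can be trained with standard supervised objectives.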
Co-authors
- Jonathan Berant 2
- Adir Cohen 1
- Noam Mor 1
- Michael Rotman 1
- Gabriel Stanovsky 1