Rohan Paul
2021
Multi-facet Universal Schema
Rohan Paul | Haw-Shiuan Chang | Andrew McCallum
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Universal schema (USchema) assumes that two sentence patterns that share the same entity pairs are similar to each other. This assumption is widely adopted for solving various types of relation extraction (RE) tasks. Nevertheless, each sentence pattern can contain multiple facets, and not every facet is similar to all the facets of another sentence pattern co-occurring with the same entity pair. To address this violation of the USchema assumption, we propose multi-facet universal schema, which uses a neural model to represent each sentence pattern as multiple facet embeddings and encourages one of these facet embeddings to be close to that of another sentence pattern if they co-occur with the same entity pair. In our experiments, we demonstrate that multi-facet embeddings significantly outperform their single-facet embedding counterpart, compositional universal schema (CUSchema) (Verga et al., 2016), in distantly supervised relation extraction tasks. Moreover, the multiple embeddings can be used to detect the entailment relation between two sentence patterns when no manual label is available.
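As a rough illustration of the abstract's core idea, the minimal PyTorch sketch below shows max-over-facets matching: each sentence pattern is mapped to several facet embeddings, and only the facet closest to a co-occurring pattern's embedding is pulled toward it. The names (`MultiFacetEncoder`, `facet_match_score`), the single linear head, and the dot-product scoring are illustrative assumptions based on the abstract, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultiFacetEncoder(nn.Module):
    """Toy encoder mapping a sentence-pattern embedding to K facet embeddings.
    (Hypothetical stand-in for the paper's neural pattern encoder.)"""
    def __init__(self, dim: int, num_facets: int):
        super().__init__()
        self.num_facets = num_facets
        self.dim = dim
        # One shared linear layer producing all facets at once; the real
        # model is a more expressive neural decoder.
        self.facet_heads = nn.Linear(dim, dim * num_facets)

    def forward(self, pattern_emb: torch.Tensor) -> torch.Tensor:
        # (batch, dim) -> (batch, num_facets, dim)
        return self.facet_heads(pattern_emb).view(-1, self.num_facets, self.dim)

def facet_match_score(facets: torch.Tensor, other_emb: torch.Tensor) -> torch.Tensor:
    """Score a pattern against another pattern's embedding using only its
    best-matching facet, so unrelated facets are not forced together."""
    # (batch, num_facets, dim) x (batch, dim) -> (batch, num_facets)
    sims = torch.einsum('bkd,bd->bk', facets, other_emb)
    return sims.max(dim=-1).values  # only the closest facet gets trained

# Usage: encourage the best facet of pattern A toward pattern B's embedding
# when A and B co-occur with the same entity pair.
encoder = MultiFacetEncoder(dim=64, num_facets=3)
a, b = torch.randn(8, 64), torch.randn(8, 64)
loss = -facet_match_score(encoder(a), b).mean()
```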
2019
Leveraging Past References for Robust Language Grounding
Subhro Roy | Michael Noseworthy | Rohan Paul | Daehyung Park | Nicholas Roy
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Grounding referring expressions to objects in an environment has traditionally been considered a one-off, ahistorical task. However, in realistic applications of grounding, multiple users will repeatedly refer to the same set of objects. As a result, past referring expressions for objects can provide strong signals for grounding subsequent referring expressions. We therefore reframe the grounding problem from the perspective of coreference detection and propose a neural network that detects when two expressions refer to the same object. The network combines information from vision and past referring expressions to resolve which object is being referred to. Our experiments show that detecting referring expression coreference is an effective way to ground objects described by subtle visual properties, which standard visual grounding models have difficulty capturing. We also show that the ability to detect object coreference allows the grounding model to perform well even when it encounters object categories not seen in the training data.
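To make the grounding-as-coreference idea concrete, here is a minimal sketch under stated assumptions: a scorer takes a new referring expression, a past expression stored for a candidate object, and that object's visual features, and outputs a coreference logit; grounding picks the best-scoring object. The class name `CorefScorer`, the simple concatenate-and-MLP fusion, and the feature dimensions are all hypothetical simplifications of the paper's network.

```python
import torch
import torch.nn as nn

class CorefScorer(nn.Module):
    """Toy scorer for whether two referring expressions describe the same
    object, fusing text embeddings with the object's visual features.
    (Hypothetical simplification of the paper's model.)"""
    def __init__(self, text_dim: int, vis_dim: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * text_dim + vis_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, expr_new, expr_past, obj_vis):
        # Concatenate the new expression, a past expression for the object,
        # and the object's visual features; output a coreference logit.
        return self.mlp(torch.cat([expr_new, expr_past, obj_vis], dim=-1))

# Usage: ground a new expression by scoring it against each candidate
# object's stored past expression and picking the best-matching object.
scorer = CorefScorer(text_dim=64, vis_dim=32)
new_expr = torch.randn(1, 64)
candidates = [(torch.randn(1, 64), torch.randn(1, 32)) for _ in range(4)]
scores = torch.cat([scorer(new_expr, past, vis) for past, vis in candidates])
best_object = scores.argmax().item()
```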