Chuxu Zhang
2022
Grape: Knowledge Graph Enhanced Passage Reader for Open-domain Question Answering
Mingxuan Ju | Wenhao Yu | Tong Zhao | Chuxu Zhang | Yanfang Ye
Findings of the Association for Computational Linguistics: EMNLP 2022
Open-domain question answering (QA) models commonly employ a retriever-reader pipeline that first retrieves a handful of relevant passages from Wikipedia and then peruses those passages to produce an answer. However, even state-of-the-art readers fail to capture the complex relationships between entities appearing in questions and retrieved passages, leading to answers that contradict the facts. In light of this, we propose a novel knowledge graph enhanced passage reader, namely Grape, to improve reader performance for open-domain QA. Specifically, for each pair of question and retrieved passage, we first construct a localized bipartite graph, attributed with entity embeddings extracted from the intermediate layer of the reader model. Then, a graph neural network learns relational knowledge while fusing graph and contextual representations into the hidden states of the reader model. Experiments on three open-domain QA benchmarks show that, with the same retriever and retrieved passages, Grape improves state-of-the-art performance by up to 2.2 exact match score with a negligible overhead increase. Our code is publicly available at https://github.com/jumxglhf/GRAPE.
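To make the fusion step concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract: entity embeddings are taken from an intermediate layer's hidden states, one round of passage-to-question message passing runs over the bipartite entity graph, and the result is fused back into the hidden states with a residual update. The class name `BipartiteGNNFusion`, the mean-pooling aggregator, and the one-directional update are illustrative assumptions, not the paper's actual relational GNN.

```python
import torch
import torch.nn as nn


class BipartiteGNNFusion(nn.Module):
    """One round of passage-to-question message passing over a localized
    bipartite entity graph, fused back into the reader hidden states."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.msg = nn.Linear(hidden_dim, hidden_dim)       # message transform
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)  # graph + context fusion

    def forward(self, hidden, q_ent_idx, p_ent_idx, edges):
        # hidden:    (seq_len, d) intermediate-layer hidden states of the reader
        # q_ent_idx: (num_q,) token positions of question entities
        # p_ent_idx: (num_p,) token positions of passage entities
        # edges:     list of (q_node, p_node) index pairs linking the two sides
        q_nodes = hidden[q_ent_idx]  # question-side entity embeddings
        p_nodes = hidden[p_ent_idx]  # passage-side entity embeddings

        # Mean-aggregate passage->question messages per question entity.
        agg = torch.zeros_like(q_nodes)
        cnt = torch.zeros(q_nodes.size(0), 1, device=hidden.device)
        for q, p in edges:
            agg[q] = agg[q] + self.msg(p_nodes[p])
            cnt[q] += 1
        agg = agg / cnt.clamp(min=1)

        # Fuse graph and contextual representations; residual update in place.
        fused = self.fuse(torch.cat([q_nodes, agg], dim=-1))
        out = hidden.clone()
        out[q_ent_idx] = out[q_ent_idx] + fused
        return out


# Toy usage with random hidden states and a hand-made entity graph.
layer = BipartiteGNNFusion(hidden_dim=768)
h = torch.randn(128, 768)
q_idx = torch.tensor([3, 7])        # two question entities
p_idx = torch.tensor([40, 55, 60])  # three passage entities
edges = [(0, 0), (0, 2), (1, 1)]    # bipartite links between the sides
h_new = layer(h, q_idx, p_idx, edges)  # same shape as h
```

In the full model the graph is built per question-passage pair and the GNN also carries relation information from the knowledge graph; this sketch only shows where such a fusion layer could plug into the reader's hidden states.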
2020
Few-Shot Multi-Hop Relation Reasoning over Knowledge Bases
Chuxu Zhang | Lu Yu | Mandana Saebi | Meng Jiang | Nitesh Chawla
Findings of the Association for Computational Linguistics: EMNLP 2020
Multi-hop relation reasoning over a knowledge base aims to generate effective and interpretable relation predictions through reasoning paths. Current methods usually require sufficient training data (fact triples) for each query relation, which impairs their performance on few-shot relations (those with limited triples), common in knowledge bases. To this end, we propose FIRE, a novel few-shot multi-hop relation learning model. FIRE applies reinforcement learning to model the sequential steps of multi-hop reasoning, and additionally performs heterogeneous structure encoding and knowledge-aware search-space pruning. A meta-learning technique is employed to optimize model parameters so they can quickly adapt to few-shot relations. Empirical studies on two datasets demonstrate that FIRE outperforms state-of-the-art methods.
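As an illustration of the reinforcement-learning component, here is a hedged toy sketch: a REINFORCE-style policy walks a tiny hand-built knowledge graph, scoring each outgoing relation against the current and target entity embeddings and receiving a terminal reward for reaching the target. The toy graph, embedding size, and training loop are invented placeholders; FIRE's heterogeneous structure encoding, knowledge-aware pruning, and meta-learning outer loop are deliberately omitted.

```python
import torch
import torch.nn as nn

EMB = 32
graph = {  # entity -> list of (relation, next_entity) edges
    "paris": [("capital_of", "france"), ("located_in", "europe")],
    "france": [("member_of", "eu"), ("located_in", "europe")],
    "europe": [],
    "eu": [],
}
ents = {e: i for i, e in enumerate(graph)}
rels = {"capital_of": 0, "located_in": 1, "member_of": 2}

ent_emb = nn.Embedding(len(ents), EMB)
rel_emb = nn.Embedding(len(rels), EMB)
policy = nn.Sequential(nn.Linear(2 * EMB, EMB), nn.ReLU(), nn.Linear(EMB, EMB))


def rollout(start, target, max_hops=2):
    """Sample a reasoning path; return step log-probs and terminal reward."""
    cur, log_probs = start, []
    for _ in range(max_hops):
        actions = graph[cur]
        if not actions:
            break  # dead end: no outgoing edges
        # State = current entity embedding concatenated with target embedding.
        state = torch.cat([ent_emb.weight[ents[cur]], ent_emb.weight[ents[target]]])
        key = policy(state)  # (EMB,)
        cand = torch.stack([rel_emb.weight[rels[r]] for r, _ in actions])
        dist = torch.distributions.Categorical(logits=cand @ key)
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        cur = actions[a.item()][1]  # follow the chosen edge
        if cur == target:
            break
    return log_probs, (1.0 if cur == target else 0.0)


params = (list(ent_emb.parameters()) + list(rel_emb.parameters())
          + list(policy.parameters()))
opt = torch.optim.Adam(params, lr=1e-2)
for _ in range(100):  # plain REINFORCE: maximize reward-weighted log-likelihood
    lps, reward = rollout("paris", "eu")
    if lps:
        loss = -reward * torch.stack(lps).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In the paper, a meta-learning procedure would wrap this inner policy update in an outer loop over query relations, so that the learned initialization adapts quickly from only a few support triples per relation.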
Co-authors
- Lu Yu 1
- Mandana Saebi 1
- Meng Jiang 1
- Nitesh Chawla 1
- Mingxuan Ju 1