Scott Manley
2022
Improving Human Annotation Effectiveness for Fact Collection by Identifying the Most Relevant Answers
Pranav Kamath | Yiwen Sun | Thomas Semere | Adam Green | Scott Manley | Xiaoguang Qi | Kun Qian | Yunyao Li
Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances)
Identifying and integrating missing facts is a crucial task in knowledge graph completion, ensuring robustness for downstream applications such as question answering. Adding new facts to a knowledge graph in a real-world system often requires human verification, where candidate facts are checked for accuracy by human annotators. This process is labor-intensive, time-consuming, and inefficient, since only a small number of missing facts can be identified. This paper proposes a simple but effective human-in-the-loop framework for fact collection that searches for a diverse set of highly relevant candidate facts for human annotation. Empirical results presented in this work demonstrate that the proposed solution improves both i) the quality of the candidate facts and ii) the ability to discover more facts to grow the knowledge graph without requiring additional human effort.
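The abstract does not describe the selection procedure itself. As a rough, hypothetical illustration only (not the paper's actual method), the sketch below shows one generic way a system might trade off relevance against redundancy when choosing a diverse batch of candidate facts to send to annotators, using a maximal-marginal-relevance style greedy selection. All class names, scores, and example triples here are invented for illustration.

```python
# Hypothetical sketch: greedy selection of a diverse, highly relevant batch of
# candidate facts for human annotation (MMR-style criterion). This is an
# assumption-based illustration, not the framework described in the paper.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class CandidateFact:
    triple: Tuple[str, str, str]   # (subject, relation, object)
    relevance: float               # model-estimated relevance score in [0, 1]
    embedding: List[float]         # vector used to measure similarity between facts


def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def select_batch(candidates: List[CandidateFact], k: int = 10, lambda_: float = 0.7) -> List[CandidateFact]:
    """Greedily pick k facts, balancing relevance against redundancy with already-picked facts."""
    selected: List[CandidateFact] = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(c: CandidateFact) -> float:
            # Redundancy = highest similarity to any fact already selected.
            redundancy = max((cosine(c.embedding, s.embedding) for s in selected), default=0.0)
            return lambda_ * c.relevance - (1 - lambda_) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected


if __name__ == "__main__":
    cands = [
        CandidateFact(("Marie Curie", "awarded", "Nobel Prize in Physics"), 0.95, [1.0, 0.0]),
        CandidateFact(("Marie Curie", "awarded", "Nobel Prize in Chemistry"), 0.90, [0.9, 0.1]),
        CandidateFact(("Marie Curie", "bornIn", "Warsaw"), 0.70, [0.0, 1.0]),
    ]
    for fact in select_batch(cands, k=2):
        print(fact.triple)
```

With lambda_ = 0.7, the toy example above picks the most relevant fact first and then prefers the less similar "bornIn" fact over a near-duplicate "awarded" fact, illustrating how diversity can be favored without discarding relevance.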