Haiyang Yu
2023
Universal Information Extraction with Meta-Pretrained Self-Retrieval
Xin Cong | Bowen Yu | Mengcheng Fang | Tingwen Liu | Haiyang Yu | Zhongkai Hu | Fei Huang | Yongbin Li | Bin Wang
Findings of the Association for Computational Linguistics: ACL 2023
Universal Information Extraction (Universal IE) aims to solve different extraction tasks in a uniform text-to-structure generation manner. Such a generation procedure tends to struggle when complex information structures need to be extracted. Retrieving knowledge from external knowledge bases may help models overcome this problem, but it is impossible to construct a knowledge base suitable for various IE tasks. Inspired by the fact that a large amount of knowledge is stored in pretrained language models (PLMs) and can be retrieved explicitly, in this paper we propose MetaRetriever to retrieve task-specific knowledge from PLMs to enhance universal IE. As different IE tasks need different knowledge, we further propose a Meta-Pretraining Algorithm which allows MetaRetriever to quickly achieve maximum task-specific retrieval performance when fine-tuning on downstream IE tasks. Experimental results show that MetaRetriever achieves new state-of-the-art results on 4 IE tasks across 12 datasets under fully-supervised, low-resource, and few-shot scenarios.
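The abstract does not spell out the meta-pretraining procedure, but the stated goal (a shared initialization that adapts quickly to a new IE task) is the classic meta-learning setup. Below is a minimal sketch of that idea using a Reptile-style first-order meta-update on a toy regression model; the toy tasks, model, and hyperparameters are all illustrative assumptions, not the paper's actual MetaRetriever algorithm.

# Minimal meta-pretraining sketch (assumption: Reptile-style first-order update,
# toy linear tasks in place of a PLM and real IE pretraining tasks).
import copy
import torch
import torch.nn as nn

def sample_task_batch():
    # Hypothetical stand-in for sampling one pretraining task:
    # random linear data whose slope differs per "task".
    slope = torch.randn(1)
    x = torch.randn(32, 1)
    return x, slope * x

model = nn.Linear(1, 1)                      # stand-in for the retriever PLM
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

for meta_step in range(100):
    task_model = copy.deepcopy(model)        # clone weights for inner adaptation
    opt = torch.optim.SGD(task_model.parameters(), lr=inner_lr)
    x, y = sample_task_batch()
    for _ in range(inner_steps):              # fast task-specific fine-tuning
        opt.zero_grad()
        nn.functional.mse_loss(task_model(x), y).backward()
        opt.step()
    # Meta-update: move the shared initialization toward the task-adapted
    # weights, so later fine-tuning on a downstream task converges quickly.
    with torch.no_grad():
        for p, q in zip(model.parameters(), task_model.parameters()):
            p += meta_lr * (q - p)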
Unified Language Representation for Question Answering over Text, Tables, and Images
Bowen Yu | Cheng Fu | Haiyang Yu | Fei Huang | Yongbin Li
Findings of the Association for Computational Linguistics: ACL 2023
When trying to answer complex questions, people often rely on multiple sources of information, such as visual, textual, and tabular data. Previous approaches to this problem have focused on designing input features or model structure in the multi-modal space, which is inflexible for cross-modal reasoning or data-efficient training. In this paper, we call for an alternative paradigm, which transforms the images and tables into unified language representations, so that the task becomes a simpler textual QA problem solvable in three steps: retrieval, ranking, and generation, all within a language space. This idea takes advantage of the power of pre-trained language models and is implemented in a framework called Solar. Our experimental results show that Solar outperforms all existing methods by 10.6-32.3 pts on two datasets, MultimodalQA and MMCoQA, across ten different metrics. Additionally, Solar achieves the best performance on the WebQA leaderboard.
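To make the "unify everything into language" pipeline concrete, here is a minimal sketch, not the Solar implementation: tables are linearized to sentences, an image caption stands in for visual input, and the retrieval and ranking steps use simple lexical overlap. All function names, the toy corpus, and the overlap scoring are assumptions; the real framework would rely on pretrained language models for each step.

# Minimal retrieve-rank-generate sketch over a unified language space
# (assumptions: lexical-overlap retrieval/ranking, caption-based images).
def linearize_table(header, rows):
    # Render a table as sentences so it lives in the same text space as passages.
    return [" ; ".join(f"{h} is {v}" for h, v in zip(header, row)) for row in rows]

def overlap(question, passage):
    q, p = set(question.lower().split()), set(passage.lower().split())
    return len(q & p)

def answer(question, passages, top_k=2):
    retrieved = sorted(passages, key=lambda p: overlap(question, p), reverse=True)[:top_k]
    ranked = retrieved   # a real ranker would rescore candidates with a PLM
    return ranked[0]     # a real generator would condition a seq2seq PLM on `ranked`

corpus = (
    ["The Eiffel Tower is in Paris."]                               # text passage
    + linearize_table(["city", "country"], [["Paris", "France"]])    # table, linearized
    + ["Image caption: a photo of the Eiffel Tower at night."]       # image, as caption
)
print(answer("Which country is the Eiffel Tower in?", corpus))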
Co-authors
- Fei Huang 2
- Yongbin Li 2
- Bowen Yu 2
- Xin Cong 1
- Mengcheng Fang 1