Ruyi Gan
2022
BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model
Hongyi Yuan | Zheng Yuan | Ruyi Gan | Jiaxing Zhang | Yutao Xie | Sheng Yu
Proceedings of the 21st Workshop on Biomedical Language Processing
Pretrained language models have served as important backbones for natural language processing. Recently, in-domain pretraining has been shown to benefit various domain-specific downstream tasks. In the biomedical domain, natural language generation (NLG) tasks are of critical importance, yet they remain understudied. Approaching natural language understanding (NLU) tasks as NLG achieves satisfactory performance in the general domain through constrained language generation or language prompting. We emphasize the lack of in-domain generative language models and of systematic generative downstream benchmarks in the biomedical domain, which hinders the development of the research community. In this work, we introduce the generative language model BioBART, which adapts BART to the biomedical domain. We collate various biomedical language generation tasks including dialogue, summarization, entity linking, and named entity recognition. BioBART, pretrained on PubMed abstracts, achieves improved performance compared to BART and sets strong baselines on several tasks. Furthermore, we conduct ablation studies on the pretraining tasks for BioBART and find that sentence permutation has negative effects on downstream tasks.
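As a rough illustration of how a BART-style biomedical checkpoint such as BioBART would be used for a downstream generation task, the sketch below loads a sequence-to-sequence model with the Hugging Face transformers library. The checkpoint identifier GanjinZero/biobart-base is an assumption for illustration and is not specified in the abstract.

```python
# Minimal sketch: using a BART-style seq2seq checkpoint (e.g., BioBART) for
# biomedical text generation with Hugging Face transformers.
# The model name below is an assumed Hub identifier; substitute whichever
# identifier the released weights actually use.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "GanjinZero/biobart-base"  # assumed checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Aspirin is a nonsteroidal anti-inflammatory drug used to reduce pain and fever."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Generate output with beam search, exactly as with any BART-family model.
output_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same loading pattern applies to fine-tuning on the generative benchmarks listed in the abstract (dialogue, summarization, entity linking, named entity recognition), since BioBART keeps the standard BART encoder-decoder interface.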
Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective
Ping Yang | Junjie Wang | Ruyi Gan | Xinyu Zhu | Lin Zhang | Ziwei Wu | Xinyu Gao | Jiaxing Zhang | Tetsuya Sakai
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
We propose a new paradigm for zero-shot learners that is format agnostic, i.e., it is compatible with any format and applicable to a list of language tasks, such as text classification, commonsense reasoning, coreference resolution, and sentiment analysis. Zero-shot learning aims to train a model on a given task such that it can address new learning tasks without any additional training. Our approach converts zero-shot learning into multiple-choice tasks, avoiding problems in commonly used large-scale generative models such as FLAN. It not only adds generalization ability to models but also significantly reduces the number of parameters. Our method offers the merits of efficient training and deployment. Our approach shows state-of-the-art performance on several benchmarks and produces satisfactory results on tasks such as natural language inference and text classification. Our model achieves this success with only 235M parameters, which is substantially smaller than state-of-the-art models with billions of parameters. The code and pre-trained models are available at https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc.
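As a loose illustration of the format-agnostic, multiple-choice framing described above, the sketch below maps two different NLU tasks onto one choice-based schema. The field names and label verbalizations are illustrative assumptions, not the released UniMC data format; see the linked repository for the authors' actual implementation.

```python
# Minimal sketch (not the UniMC implementation): casting heterogeneous NLU
# tasks into a single multiple-choice format, as the abstract describes.
# The schema and label wordings here are assumptions for illustration only.

def to_multiple_choice(text, options, answer=None):
    """Wrap any classification-style instance as a multiple-choice item."""
    return {"text": text, "choices": options, "label": answer}

# Sentiment analysis becomes a two-way choice.
sentiment = to_multiple_choice(
    "The movie was a complete waste of time.",
    ["This review is positive.", "This review is negative."],
    answer=1,
)

# Natural language inference becomes a three-way choice over the same schema.
nli = to_multiple_choice(
    "Premise: A man is playing a guitar. Hypothesis: A man is performing music.",
    ["Entailment", "Neutral", "Contradiction"],
    answer=0,
)

print(sentiment)
print(nli)
```

Because every task reduces to scoring a small set of options, a single discriminative model can handle all of them, which is consistent with the abstract's claim of strong zero-shot performance at only 235M parameters.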