Abstract
In this study, we investigate using the pre-trained BERT language model for question generation. We introduce three neural architectures built on top of BERT for this task. The first is a straightforward application of BERT, which exposes the drawbacks of using BERT directly for text generation. Accordingly, we propose two further models that restructure our BERT employment into a sequential manner so that each decoding step can take information from previously decoded results. Our models are trained and evaluated on the recent question-answering dataset SQuAD. Experimental results show that our best model achieves state-of-the-art performance, improving the BLEU-4 score of the existing best models from 16.85 to 22.17.
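The core idea of the sequential variants is to decode the question one token at a time, feeding the passage, the answer, and all previously generated question tokens back into BERT and predicting a [MASK] placeholder at the next position. The sketch below illustrates only that decoding loop, not the authors' released implementation; the Hugging Face transformers API, the bert-base-uncased checkpoint, and the greedy decoding strategy are assumptions made for illustration, and a model fine-tuned on SQuAD question generation would be needed for sensible output.

```python
# Minimal sketch (assumed setup, not the paper's exact architecture) of sequential
# question generation with a masked language model: at each step the passage, the
# answer, and the partial question are re-encoded, and a trailing [MASK] token is
# predicted greedily to extend the question by one token.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def generate_question(passage: str, answer: str, max_len: int = 20) -> str:
    decoded = []  # question tokens produced so far
    for _ in range(max_len):
        # Segment A: passage + answer; segment B: partial question + [MASK].
        question_so_far = tokenizer.convert_tokens_to_string(decoded)
        text_b = (question_so_far + " [MASK]").strip()
        inputs = tokenizer(passage + " " + answer, text_b, return_tensors="pt")
        # Locate the [MASK] position (the last one, i.e. the next question slot).
        mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[-1].item()
        with torch.no_grad():
            logits = model(**inputs).logits
        next_id = int(logits[0, mask_pos].argmax())
        next_token = tokenizer.convert_ids_to_tokens(next_id)
        if next_token == "[SEP]":  # treat [SEP] as end-of-question
            break
        decoded.append(next_token)
    return tokenizer.convert_tokens_to_string(decoded)

# Example call (output will be poor without task-specific fine-tuning).
print(generate_question("Hong Kong hosted the workshop in November 2019.", "2019"))
```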
- Anthology ID: D19-5821
- Volume: Proceedings of the 2nd Workshop on Machine Reading for Question Answering
- Month: November
- Year: 2019
- Address: Hong Kong, China
- Venue: WS
- Publisher: Association for Computational Linguistics
- Pages: 154–162
- URL: https://aclanthology.org/D19-5821
- DOI: 10.18653/v1/D19-5821
- Cite (ACL): Ying-Hong Chan and Yao-Chung Fan. 2019. A Recurrent BERT-based Model for Question Generation. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 154–162, Hong Kong, China. Association for Computational Linguistics.
- Cite (Informal): A Recurrent BERT-based Model for Question Generation (Chan & Fan, 2019)
- PDF: https://preview.aclanthology.org/ingestion-script-update/D19-5821.pdf
- Data: SQuAD