Applying Information Extraction to Storybook Question and Answer Generation

Kai-Yen Kao, Chia-Hui Chang


Abstract
For educators, generating high-quality question-answer pairs from story text is a time-consuming and labor-intensive task. The purpose is not to stump students, but to ensure that they understand the story through the generated question-answer pairs. In this paper, we improve the FairytaleQA question generation method by incorporating the question type and its definition into the input for fine-tuning the BART (Lewis et al., 2020) model. Furthermore, we make use of the entity and relation extraction model of Zhong and Chen (2021) as an element of template-based question generation.
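As a rough illustration of the type-conditioned input described in the abstract, the sketch below prepends a question type and its definition to the story context before fine-tuning or generating with BART via HuggingFace Transformers. The field names, separators, and type definitions are illustrative assumptions, not the authors' exact format.

```python
# Minimal sketch of type-conditioned input formatting for BART question
# generation (illustrative; separators and definitions are assumptions).
from transformers import BartTokenizer, BartForConditionalGeneration

# Hypothetical definitions in the spirit of FairytaleQA's narrative-element
# question types (not the paper's exact wording).
QUESTION_TYPE_DEFINITIONS = {
    "causal relationship": "asks why an event happened or what caused it",
    "character": "asks who a character is or what a character did",
    "setting": "asks where or when the story takes place",
}

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def build_input(story_section: str, answer: str, q_type: str) -> str:
    """Prepend the question type and its definition to the source text."""
    definition = QUESTION_TYPE_DEFINITIONS[q_type]
    return (f"type: {q_type} | definition: {definition} | "
            f"answer: {answer} | context: {story_section}")

source = build_input(
    "The little mermaid saved the prince from the shipwreck.",
    "because she loved him",
    "causal relationship",
)
batch = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
generated = model.generate(**batch, max_length=64, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```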
Anthology ID:
2022.rocling-1.36
Volume:
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
Month:
November
Year:
2022
Address:
Taipei, Taiwan
Venue:
ROCLING
Publisher:
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
Pages:
289–298
Language:
Chinese
URL:
https://aclanthology.org/2022.rocling-1.36
Cite (ACL):
Kai-Yen Kao and Chia-Hui Chang. 2022. Applying Information Extraction to Storybook Question and Answer Generation. In Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022), pages 289–298, Taipei, Taiwan. The Association for Computational Linguistics and Chinese Language Processing (ACLCLP).
Cite (Informal):
Applying Information Extraction to Storybook Question and Answer Generation (Kao & Chang, ROCLING 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.rocling-1.36.pdf
Data
FairytaleQA