Few-shot Question Generation for Reading Comprehension

Yin Poon, John Sie Yuen Lee, yuylam@hkmu.edu.hk, wlsuen@hkmu.edu.hk, eong@hkmu.edu.hk, skwchu@hkmu.edu.hk


Abstract
According to the internationally recognized PIRLS (Progress in International Reading Literacy Study) assessment standards, reading comprehension questions should require not only information retrieval, but also higher-order processes such as inferencing, interpreting, and evaluating. However, such questions are often not available in large quantities for training question generation models. This paper investigates whether pre-trained Large Language Models (LLMs) can produce higher-order questions. Human assessment on a Chinese dataset shows that few-shot LLM prompting generates more usable and higher-order questions than two competitive neural baselines.
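To make the few-shot prompting setup concrete, the sketch below shows one possible way to prompt a chat-based LLM for higher-order question generation. It is a minimal illustration rather than the authors' implementation: the model name, system instruction, and the two exemplar passage and question pairs are assumptions, and in practice the exemplars would be drawn from PIRLS-style annotated data.

```python
# Minimal sketch of few-shot prompting for higher-order question generation.
# Assumptions (not from the paper): model name, prompt wording, and exemplars.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot exemplars: each pairs a passage with a higher-order (PIRLS-style)
# question requiring inference or interpretation rather than retrieval.
FEW_SHOT_EXAMPLES = [
    {
        "passage": "小明把雨伞借给了同学，自己淋着雨回家。",
        "question": "从小明的行为可以推断出他是一个怎样的人？为什么？",
    },
    {
        "passage": "比赛结束后，她没有看奖牌，而是先跑向教练。",
        "question": "作者通过这个细节想表达什么？请结合上下文解释。",
    },
]


def build_messages(passage: str) -> list[dict]:
    """Assemble the chat prompt: instruction, exemplars, then the target passage."""
    messages = [{
        "role": "system",
        "content": ("You are a reading comprehension item writer. Write one "
                    "higher-order question (inference, interpretation, or "
                    "evaluation) in Chinese for the given passage."),
    }]
    for ex in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": f"Passage:\n{ex['passage']}"})
        messages.append({"role": "assistant", "content": ex["question"]})
    messages.append({"role": "user", "content": f"Passage:\n{passage}"})
    return messages


def generate_question(passage: str, model: str = "gpt-4") -> str:
    """Return one generated question for the given passage."""
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(passage),
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()
```

In a zero-shot variant, FEW_SHOT_EXAMPLES would simply be left empty; the exemplars are what signal the desired higher-order question style to the model.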
Anthology ID: 2024.sighan-1.3
Volume: Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Kam-Fai Wong, Min Zhang, Ruifeng Xu, Jing Li, Zhongyu Wei, Lin Gui, Bin Liang, Runcong Zhao
Venues: SIGHAN | WS
Publisher: Association for Computational Linguistics
Pages: 21–27
URL: https://aclanthology.org/2024.sighan-1.3
Cite (ACL): Yin Poon, John Sie Yuen Lee, yuylam@hkmu.edu.hk, wlsuen@hkmu.edu.hk, eong@hkmu.edu.hk, and skwchu@hkmu.edu.hk. 2024. Few-shot Question Generation for Reading Comprehension. In Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10), pages 21–27, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Few-shot Question Generation for Reading Comprehension (Poon et al., SIGHAN-WS 2024)
PDF: https://preview.aclanthology.org/nschneid-patch-5/2024.sighan-1.3.pdf