Large Language Models are Students at Various Levels: Zero-shot Question Difficulty Estimation

Jae-Woo Park, Seong-Jin Park, Hyun-Sik Won, Kang-Min Kim


Abstract
Recent advancements in educational platforms have emphasized the importance of personalized education. Accurately estimating question difficulty based on the ability of the student group is essential for personalized question recommendations. Several studies have focused on predicting question difficulty using student question-solving records or textual information about the questions. However, these approaches require a large amount of student question-solving records and fail to account for the subjective difficulties perceived by different student groups. To address these limitations, we propose the LLaSA framework, which utilizes large language models to represent students at various levels. Our proposed methods, LLaSA and zero-shot LLaSA, can estimate question difficulty both with and without students’ question-solving records. In evaluations on the DBE-KT22 and ASSISTments 2005–2006 benchmarks, zero-shot LLaSA demonstrated performance comparable to that of strong baseline models even without any training. When evaluated using the classification method, LLaSA outperformed the baseline models, achieving state-of-the-art performance. In addition, zero-shot LLaSA showed a high correlation with the regressed IRT curve when compared to question difficulty derived from students’ question-solving records, highlighting its potential for real-world applications.
Anthology ID:
2024.findings-emnlp.477
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8157–8177
URL:
https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.477/
DOI:
10.18653/v1/2024.findings-emnlp.477
Cite (ACL):
Jae-Woo Park, Seong-Jin Park, Hyun-Sik Won, and Kang-Min Kim. 2024. Large Language Models are Students at Various Levels: Zero-shot Question Difficulty Estimation. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 8157–8177, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Large Language Models are Students at Various Levels: Zero-shot Question Difficulty Estimation (Park et al., Findings 2024)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.477.pdf