Towards Explainable Computerized Adaptive Testing with Large Language Model
Cheng Cheng | GuanHao Zhao | Zhenya Huang | Yan Zhuang | Zhaoyuan Pan | Qi Liu | Xin Li | Enhong Chen
Findings of the Association for Computational Linguistics: EMNLP 2024
As intelligent education evolves, it offers students a range of personalized learning services tailored to their individual abilities. Computerized adaptive testing (CAT) is designed to measure a student’s ability accurately with as few questions as possible, providing an efficient and personalized testing method. However, existing methods focus mainly on minimizing the number of questions required to assess ability and often lack clear, reliable explanations for the question selection process. Without an understanding of the rationale behind question selection, educators and students can hardly trust or accept CAT systems. To address this issue, we introduce LLM-Agent-Based CAT (LACAT), a novel agent powered by large language models that enhances CAT with human-like interpretability and explanation capabilities. LACAT consists of three key modules: the Summarizer, which generates interpretable student profiles; the Reasoner, which selects personalized questions and provides human-readable explanations; and the Critic, which learns from past choices to optimize future question selection. Extensive experiments on three real-world educational datasets demonstrate that LACAT performs comparably to or better than traditional CAT methods in accuracy while significantly improving the transparency and acceptability of the testing process. Human evaluations further confirm that LACAT generates high-quality, understandable explanations, thereby enhancing student trust and satisfaction.
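Although the abstract gives no implementation details, the Summarizer–Reasoner–Critic design can be read as a simple agent loop. The following is a minimal, hypothetical sketch of that loop; the `llm` function and all prompts are illustrative assumptions, not the paper's actual code or prompt design.

```python
# Hypothetical sketch of a Summarizer -> Reasoner -> Critic agent loop.
# None of these names or prompts come from the paper; they only illustrate
# how the three modules described in the abstract could fit together.

def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call; plug in a real model here."""
    raise NotImplementedError

def summarizer(response_history: list[dict]) -> str:
    # Summarizer: condense the student's answer record into an interpretable profile.
    return llm(
        "Summarize this student's knowledge state as a short profile:\n"
        f"{response_history}"
    )

def reasoner(profile: str, candidates: list[str]) -> tuple[str, str]:
    # Reasoner: pick the next question and produce a human-readable explanation.
    reply = llm(
        f"Student profile:\n{profile}\n"
        f"Candidate questions:\n{candidates}\n"
        "Choose the most informative next question and explain why.\n"
        "Format:\nQUESTION: <id>\nREASON: <one sentence>"
    )
    question, _, reason = reply.partition("REASON:")
    return question.removeprefix("QUESTION:").strip(), reason.strip()

def critic(profile: str, question: str, was_informative: bool) -> str:
    # Critic: reflect on the outcome of a past choice to guide future selection.
    return llm(
        f"Given profile {profile!r}, question {question!r} was "
        f"{'informative' if was_informative else 'uninformative'}. "
        "Write a short lesson to apply to the next selection."
    )
```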