EduAdapt: A Question Answer Benchmark Dataset for Evaluating Grade-Level Adaptability in LLMs

Numaan Naeem, Abdellah El Mekki, Muhammad Abdul-Mageed


Abstract
Large language models (LLMs) are transforming education by answering questions, explaining complex concepts, and generating content across a wide range of subjects. Despite strong performance on academic benchmarks, they often fail to tailor responses to students’ grade levels. Such adaptation is critical in K-12 education, where age-appropriate vocabulary and explanations are essential for effective learning. Existing models frequently produce outputs that are too advanced or too vague for younger learners, and there are no standardized benchmarks for evaluating their ability to adjust across cognitive and developmental stages. To address this gap, we introduce EduAdapt, a benchmark of nearly 48k grade-labeled QA pairs across nine science subjects, spanning Grades 1-12 and grouped into four grade levels. We evaluate a diverse set of open-source LLMs on EduAdapt and find that while larger models generally perform better, they still struggle to generate suitable responses for early-grade students (Grades 1-5). Our work presents the first dataset and evaluation framework for assessing grade-level adaptability in LLMs, aiming to foster more developmentally aligned educational AI systems through better training and prompting strategies. EduAdapt code and datasets are publicly available at https://github.com/NaumanNaeem/EduAdapt.
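
The dataset's organization, grade-labeled QA pairs grouped into four grade bands, can be illustrated with a minimal Python sketch. This is a hypothetical example, not the repository's actual loading code: the record fields (question, answer, subject, grade) and the exact band boundaries are assumptions for illustration, since the abstract states only that Grades 1-12 are grouped into four levels and that Grades 1-5 are "early-grade". Consult the EduAdapt repository for the real schema.

# Hypothetical sketch of grouping grade-labeled QA records into four bands.
# Field names and band boundaries are assumptions, not the EduAdapt schema.
from collections import defaultdict

# Assumed four-band split; the paper only states that Grades 1-12
# form four levels, with Grades 1-5 described as early-grade.
BANDS = {
    "early (1-3)": range(1, 4),
    "upper-elementary (4-5)": range(4, 6),
    "middle (6-8)": range(6, 9),
    "high (9-12)": range(9, 13),
}

def band_of(grade: int) -> str:
    """Map a numeric grade (1-12) to its assumed grade band."""
    for name, grades in BANDS.items():
        if grade in grades:
            return name
    raise ValueError(f"grade out of range: {grade}")

# Toy records mimicking grade-labeled QA pairs (hypothetical fields).
records = [
    {"question": "Why do leaves look green?", "answer": "...",
     "subject": "biology", "grade": 2},
    {"question": "What is photosynthesis?", "answer": "...",
     "subject": "biology", "grade": 7},
]

by_band = defaultdict(list)
for rec in records:
    by_band[band_of(rec["grade"])].append(rec)

for band, recs in by_band.items():
    print(band, len(recs))

Grouping by band rather than by individual grade mirrors the paper's evaluation setup, where model responses are judged against four developmental levels instead of twelve separate grades.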
Anthology ID:
2025.emnlp-main.1736
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
34224–34251
URL:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.emnlp-main.1736/
DOI:
10.18653/v1/2025.emnlp-main.1736
Cite (ACL):
Numaan Naeem, Abdellah El Mekki, and Muhammad Abdul-Mageed. 2025. EduAdapt: A Question Answer Benchmark Dataset for Evaluating Grade-Level Adaptability in LLMs. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 34224–34251, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
EduAdapt: A Question Answer Benchmark Dataset for Evaluating Grade-Level Adaptability in LLMs (Naeem et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.emnlp-main.1736.pdf
Checklist:
2025.emnlp-main.1736.checklist.pdf