Finding Educationally Supportive Contexts for Vocabulary Learning with Attention-Based Models

Sungjin Nam, Kevyn Collins-Thompson, David Jurgens, Xin Tong


Abstract
When learning new vocabulary, both humans and machines acquire critical information about the meaning of an unfamiliar word through contextual information in a sentence or passage. However, not all contexts are equally helpful for learning an unfamiliar ‘target’ word. Some contexts provide a rich set of semantic clues to the target word’s meaning, while others are less supportive. We explore the task of finding educationally supportive contexts with respect to a given target word for vocabulary learning scenarios, particularly for improving student literacy skills. Because of their inherent context-based nature, attention-based deep learning methods provide an ideal starting point. We evaluate attention-based approaches for predicting the amount of educational support from contexts, ranging from a simple custom model using pre-trained embeddings with an additional attention layer, to a commercial Large Language Model (LLM). Using an existing major benchmark dataset for educational context support prediction, we found that a sophisticated but generic LLM had poor performance, while a simpler model using a custom attention-based approach achieved the best-known performance to date on this dataset.
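The abstract mentions a simple custom model built from pre-trained embeddings plus an attention layer. As a rough illustration only (not the authors' implementation; all names, dimensions, and the dot-product attention choice here are hypothetical), such a model might pool context-word embeddings by their attention to the target word:

```python
# Hypothetical sketch: attention-weighted pooling of pre-trained word
# embeddings to build features for scoring how supportive a context is
# for an unfamiliar target word. Dimensions and attention form are
# illustrative assumptions, not the paper's actual architecture.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                # embedding dimension (illustrative)
context = rng.normal(size=(5, d))    # embeddings of 5 context words
target = rng.normal(size=d)          # embedding of the target word

# Attention weights: softmax over dot-product similarity to the target.
scores = context @ target
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Context representation: attention-weighted average of word embeddings.
pooled = weights @ context

# A downstream regressor could map [pooled; target] to a support rating.
features = np.concatenate([pooled, target])
```

The attention weights make the pooled representation emphasize context words most related to the target, which matches the intuition that supportive contexts carry strong semantic clues about the target word's meaning.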
Anthology ID:
2024.lrec-main.640
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
7286–7295
URL:
https://aclanthology.org/2024.lrec-main.640
Cite (ACL):
Sungjin Nam, Kevyn Collins-Thompson, David Jurgens, and Xin Tong. 2024. Finding Educationally Supportive Contexts for Vocabulary Learning with Attention-Based Models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7286–7295, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Finding Educationally Supportive Contexts for Vocabulary Learning with Attention-Based Models (Nam et al., LREC-COLING 2024)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2024.lrec-main.640.pdf