Minsam Kim
2022
GRAM: Fast Fine-tuning of Pre-trained Language Models for Content-based Collaborative Filtering
Yoonseok Yang | Kyu Seok Kim | Minsam Kim | Juneyoung Park
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Content-based collaborative filtering (CCF) predicts user-item interactions based on both users’ interaction history and items’ content information. Recently, pre-trained language models (PLM) have been used to extract high-quality item encodings for CCF. However, it is resource-intensive to train a PLM-based CCF model in an end-to-end (E2E) manner, since optimization involves back-propagating through every content encoding within a given user interaction sequence. To tackle this issue, we propose GRAM (GRadient Accumulation for Multi-modality in CCF), which exploits the fact that a given item often appears multiple times within a batch of interaction histories. Specifically, Single-step GRAM aggregates each item encoding’s gradients for back-propagation, with theoretical equivalence to the standard E2E training. As an extension of Single-step GRAM, we propose Multi-step GRAM, which increases the gradient update latency, achieving a further speedup with drastically less GPU memory. GRAM significantly improves training efficiency (up to 146x) on five datasets from two task domains of Knowledge Tracing and News Recommendation. Our code is available at https://github.com/yoonseok312/GRAM.
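The gradient-aggregation trick described in the abstract can be sketched in a few lines of PyTorch. This is an illustrative sketch, not the authors' implementation: `plm_encoder`, `cf_model`, `unique_item_texts`, `batch_item_ids`, `labels`, and `loss_fn` are hypothetical placeholders, and only the gradient-routing pattern reflects the Single-step GRAM idea.

```python
# Minimal sketch of the Single-step GRAM idea (assumed interfaces, not the paper's code):
# encode each unique item once, detach, and route the accumulated gradients of all
# its occurrences back through the PLM encoder in a single pass.
import torch

def gram_single_step(plm_encoder, cf_model, unique_item_texts,
                     batch_item_ids, labels, loss_fn):
    # 1) Encode each unique item exactly once with the PLM.
    unique_enc = plm_encoder(unique_item_texts)          # (num_unique, d)

    # 2) Detach so the CF forward/backward does not traverse the PLM graph.
    enc_detached = unique_enc.detach().requires_grad_(True)

    # 3) Gather per-occurrence encodings and run the CF model on the sequences.
    seq_enc = enc_detached[batch_item_ids]               # (batch, seq_len, d)
    loss = loss_fn(cf_model(seq_enc), labels)
    loss.backward()                                      # gradients accumulate per unique item

    # 4) One backward pass through the PLM with the aggregated gradients,
    #    matching what standard end-to-end back-propagation would produce.
    unique_enc.backward(enc_detached.grad)
    return loss.item()
```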
Evaluating the Knowledge Dependency of Questions
Hyeongdon Moon | Yoonseok Yang | Hangyeol Yu | Seunghyun Lee | Myeongho Jeong | Juneyoung Park | Jamin Shin | Minsam Kim | Seungtaek Choi
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
The automatic generation of Multiple Choice Questions (MCQ) has the potential to significantly reduce the time educators spend on student assessment. However, existing evaluation metrics for MCQ generation, such as BLEU, ROUGE, and METEOR, focus on the n-gram based similarity of the generated MCQ to the gold sample in the dataset and disregard their educational value. They fail to evaluate the MCQ’s ability to assess the student’s knowledge of the corresponding target fact. To tackle this issue, we propose a novel automatic evaluation metric, coined Knowledge Dependent Answerability (KDA), which measures the MCQ’s answerability given knowledge of the target fact. Specifically, we first show how to measure KDA based on student responses from a human survey. Then, we propose two automatic evaluation metrics, KDA_disc and KDA_cont, that approximate KDA by leveraging pre-trained language models to imitate students’ problem-solving behavior. Through our human studies, we show that KDA_disc and KDA_cont have strong correlations with both (1) KDA and (2) usability in an actual classroom setting, labeled by experts. Furthermore, when combined with n-gram based similarity metrics, KDA_disc and KDA_cont are shown to have a strong predictive power for various expert-labeled MCQ quality measures.
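As a rough illustration of the KDA_disc idea (not the paper's exact formulation), the sketch below scores a set of MCQs by whether a PLM "student", given the target fact, picks the correct option. The helper `plm_answer_probs` and the question-dictionary fields (`stem`, `options`, `target_fact`, `answer_idx`) are assumed placeholders.

```python
# Illustrative KDA_disc-style score under assumed interfaces:
# fraction of MCQs the PLM answers correctly when given the target fact.
from typing import Callable, List, Sequence

def kda_disc(questions: Sequence[dict],
             plm_answer_probs: Callable[[str, List[str], str], List[float]]) -> float:
    correct = 0
    for q in questions:
        # Hypothetical helper: returns one probability per option from a PLM
        # conditioned on the question stem and the target fact.
        probs = plm_answer_probs(q["stem"], q["options"], q["target_fact"])
        if max(range(len(probs)), key=probs.__getitem__) == q["answer_idx"]:
            correct += 1
    return correct / len(questions)
```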
Co-authors
- Yoonseok Yang 2
- Juneyoung Park 2
- Kyu Seok Kim 1
- Hyeongdon Moon 1
- Hangyeol Yu 1