Chenghao Jia
2021
Heterogeneous Graph Neural Networks for Concept Prerequisite Relation Learning in Educational Data
Chenghao Jia | Yongliang Shen | Yechun Tang | Lu Sun | Weiming Lu
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Prerequisite relations among concepts are crucial for educational applications such as curriculum planning and intelligent tutoring. In this paper, we propose a novel concept prerequisite relation learning approach, named CPRL, which combines concept representations learned from a heterogeneous graph with concept pairwise features. Furthermore, we extend CPRL to weakly supervised settings to make our method more practical, including learning prerequisite relations from learning object dependencies and generating training data with data programming. Our experiments on four datasets show that the proposed approach achieves state-of-the-art results compared with existing methods.
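As a rough illustration of the idea described above, the sketch below scores a candidate prerequisite pair by concatenating two concept embeddings (standing in for representations learned from a heterogeneous graph) with pairwise features and passing them through a small MLP. The class name, dimensions, and features are assumptions for illustration, not the CPRL implementation.

```python
import torch
import torch.nn as nn

class PrereqScorer(nn.Module):
    """Toy scorer: concatenates two concept embeddings with pairwise
    features and predicts whether concept a is a prerequisite of b.
    (Illustrative only; not the CPRL architecture.)"""
    def __init__(self, emb_dim=128, feat_dim=8, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim + feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, emb_a, emb_b, pair_feats):
        # emb_a, emb_b: concept representations (e.g. from a heterogeneous-graph encoder)
        # pair_feats: hand-crafted pairwise features for the concept pair
        x = torch.cat([emb_a, emb_b, pair_feats], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)

# Toy usage with random tensors standing in for learned representations.
scorer = PrereqScorer()
a, b, feats = torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 8)
print(scorer(a, b, feats))  # four prerequisite probabilities in [0, 1]
```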
2020
SynET: Synonym Expansion using Transitivity
Jiale Yu | Yongliang Shen | Xinyin Ma | Chenghao Jia | Chen Chen | Weiming Lu
Findings of the Association for Computational Linguistics: EMNLP 2020
In this paper, we study a new task of synonym expansion using transitivity and propose a novel approach named SynET, which considers the contexts of both given synonym pairs. It introduces an auxiliary task to reduce the impact of noisy sentences and a Multi-Perspective Entity Matching Network to match entities from multiple perspectives. Extensive experiments on a real-world dataset show the effectiveness of our approach.
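The candidate-generation step implied by "expansion using transitivity" can be sketched as follows: if (a, b) and (b, c) are known synonym pairs, then (a, c) becomes a candidate that a matching model would verify in context. The function below covers only this candidate generation; the verification network and data format are assumptions, not the SynET implementation.

```python
from collections import defaultdict
from itertools import combinations

def transitive_candidates(synonym_pairs):
    """If (a, b) and (b, c) are synonyms, propose (a, c) as a candidate.
    A downstream matching model would verify each candidate in context."""
    neighbors = defaultdict(set)
    for a, b in synonym_pairs:
        neighbors[a].add(b)
        neighbors[b].add(a)
    known = {frozenset(p) for p in synonym_pairs}
    candidates = set()
    for pivot in neighbors:
        for a, c in combinations(sorted(neighbors[pivot]), 2):
            if frozenset((a, c)) not in known:
                candidates.add((a, c))
    return candidates

print(transitive_candidates([("car", "automobile"), ("automobile", "motorcar")]))
# {('car', 'motorcar')}
```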
Adversarial Self-Supervised Data-Free Distillation for Text Classification
Xinyin Ma | Yongliang Shen | Gongfan Fang | Chen Chen | Chenghao Jia | Weiming Lu
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Large pre-trained transformer-based language models have achieved impressive results on a wide range of NLP tasks. In the past few years, Knowledge Distillation (KD) has become a popular paradigm for compressing a computationally expensive model into a resource-efficient lightweight one. However, most KD algorithms, especially in NLP, rely on access to the original training dataset, which may be unavailable due to privacy issues. To tackle this problem, we propose a novel two-stage data-free distillation method, named Adversarial self-Supervised Data-Free Distillation (AS-DFD), designed for compressing large-scale transformer-based models (e.g., BERT). To avoid text generation in discrete space, we introduce a Plug & Play Embedding Guessing method to craft pseudo embeddings from the teacher’s hidden knowledge. Meanwhile, with a self-supervised module that quantifies the student’s ability, we adapt the difficulty of the pseudo embeddings in an adversarial training manner. To the best of our knowledge, ours is the first data-free distillation framework designed for NLP tasks. We verify the effectiveness of our method on several text classification datasets.
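A generic version of the adversarial, data-free loop described above can be sketched as follows: a generator crafts pseudo embeddings, the student is trained to match the teacher's predictions on them, and the generator is pushed toward embeddings on which the two disagree. The tiny MLPs, dimensions, and losses below are placeholders for illustration, not the AS-DFD architecture or its self-supervised difficulty module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny MLPs stand in for the transformer teacher/student; the loop
# structure and losses are illustrative, not the AS-DFD method.
emb_dim, n_classes = 32, 4
teacher = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, n_classes)).eval()
student = nn.Sequential(nn.Linear(emb_dim, 32), nn.ReLU(), nn.Linear(32, n_classes))
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, emb_dim))
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

def kd_loss(pseudo):
    # KL divergence between student and (frozen) teacher predictions on pseudo embeddings.
    with torch.no_grad():
        t_probs = F.softmax(teacher(pseudo), dim=-1)
    return F.kl_div(F.log_softmax(student(pseudo), dim=-1), t_probs, reduction="batchmean")

for step in range(100):
    z = torch.randn(16, 16)
    # Generator step: craft pseudo embeddings where student and teacher disagree.
    opt_g.zero_grad()
    (-kd_loss(generator(z))).backward()
    opt_g.step()
    # Student step: match the teacher on the (detached) pseudo embeddings.
    opt_s.zero_grad()
    kd_loss(generator(z).detach()).backward()
    opt_s.step()
```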
Co-authors
- Yongliang Shen 3
- Weiming Lu 3
- Xinyin Ma 2
- Chen Chen 2
- Yechun Tang 1