Yangkai Du
2021
Constructing Contrastive Samples via Summarization for Text Classification with Limited Annotations
Yangkai Du | Tengfei Ma | Lingfei Wu | Fangli Xu | Xuhong Zhang | Bo Long | Shouling Ji
Findings of the Association for Computational Linguistics: EMNLP 2021
Contrastive learning has emerged as a powerful representation learning method and facilitates various downstream tasks, especially when supervised data is limited. How to construct efficient contrastive samples through data augmentation is key to its success. Unlike in vision tasks, data augmentation methods for contrastive learning have not been sufficiently investigated in language tasks. In this paper, we propose a novel approach to constructing contrastive samples for language tasks using text summarization. We use these samples for supervised contrastive learning to obtain better text representations, which greatly benefit text classification tasks with limited annotations. To further improve the method, we mix up samples from different classes and add an extra regularization term, named Mixsum, on top of the cross-entropy loss. Experiments on real-world text classification datasets (Amazon-5, Yelp-5, AG News, and IMDb) demonstrate the effectiveness of the proposed contrastive learning framework with summarization-based data augmentation and Mixsum regularization.
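To make the core idea concrete, here is a minimal sketch of supervised contrastive learning where each document's machine-generated summary serves as an augmented view sharing its class label. It assumes the standard supervised contrastive loss of Khosla et al. (2020); the function name, tensor shapes, and temperature default are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of N documents and their
    N summaries stacked into z (2N, d); each summary inherits the class
    label of its source document, so doc-summary pairs are positives."""
    z = F.normalize(z, dim=1)                  # cosine-similarity geometry
    sim = z @ z.t() / temperature              # (2N, 2N) scaled similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, -1e9)     # drop self-pairs from softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # mean log-probability of same-class pairs, averaged over anchors
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    return -(pos_log_prob / pos_mask.sum(1).clamp(min=1)).mean()

# Illustrative usage: 4 documents + their 4 summaries, 3 classes.
docs = torch.randn(4, 128)                     # encoder outputs for documents
sums = torch.randn(4, 128)                     # encoder outputs for summaries
z = torch.cat([docs, sums], dim=0)
labels = torch.tensor([0, 1, 2, 0]).repeat(2)  # summaries share source labels
print(supcon_loss(z, labels))
```

In practice the summaries would come from a pretrained summarizer and the embeddings from the text encoder being fine-tuned; both choices are left open here.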
Structured Self-Supervised Pretraining for Commonsense Knowledge Graph Completion
Jiayuan Huang | Yangkai Du | Shuting Tao | Kun Xu | Pengtao Xie
Transactions of the Association for Computational Linguistics, Volume 9
To develop commonsense-grounded NLP applications, a comprehensive and accurate commonsense knowledge graph (CKG) is needed. Manually constructing CKGs is time-consuming, and many research efforts have been devoted to their automatic construction. Previous approaches focus on generating concepts that have direct and obvious relationships with existing concepts and lack the capability to generate unobvious concepts. In this work, we aim to bridge this gap. We propose a general graph-to-paths pretraining framework that leverages high-order structures in CKGs to capture high-order relationships between concepts. We instantiate this general framework into four special cases: long path, path-to-path, router, and graph-node-path. Experiments on two datasets demonstrate the effectiveness of our methods. The code will be released in a public GitHub repository.
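As one way to picture the "long path" instantiation, the sketch below harvests multi-hop concept-relation paths from a CKG given as a triple list via random walks; such paths expose the high-order structure the pretraining objective targets. The function name, defaults, and walk policy are assumptions for illustration, since the abstract does not specify them.

```python
import random

def sample_long_paths(triples, num_paths=1000, hops=3, seed=0):
    """Randomly walk a CKG, given as (head, relation, tail) triples, to
    collect `hops`-hop paths of the form [c0, r0, c1, r1, c2, ...].
    These paths could serve as self-supervised targets in a
    graph-to-paths pretraining setup (illustrative sketch only)."""
    rng = random.Random(seed)
    adj = {}
    for h, r, t in triples:
        adj.setdefault(h, []).append((r, t))
    heads = list(adj)
    paths = []
    for _ in range(20 * num_paths):        # cap attempts so we always terminate
        if len(paths) == num_paths:
            break
        node = rng.choice(heads)
        path = [node]
        for _ in range(hops):
            if node not in adj:            # dead end: concept has no outgoing edge
                break
            r, node = rng.choice(adj[node])
            path += [r, node]
        if len(path) == 2 * hops + 1:      # keep only full-length paths
            paths.append(path)
    return paths
```

A path-to-path or router variant would pair or branch such walks instead of sampling them independently; those extensions are omitted here.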