Pengyu Li
2020
COVID-19 Literature Topic-Based Search via Hierarchical NMF
Rachel Grotheer | Longxiu Huang | Yihuan Huang | Alona Kryshchenko | Oleksandr Kryshchenko | Pengyu Li | Xia Li | Elizaveta Rebrova | Kyung Ha | Deanna Needell
Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020
A dataset of COVID-19-related scientific literature is compiled, combining articles from several online libraries and selecting those with open access and full text available. Hierarchical nonnegative matrix factorization (NMF) is then used to organize the literature into a tree structure that allows researchers to search for relevant work by detected topic. We discover eight major latent topics and 52 granular subtopics in the body of literature, covering vaccines, the genetic structure and modeling of the disease, patient studies, and related diseases and virology. So that the tool may help current researchers, an interactive website is provided that organizes the available literature using this hierarchical structure.
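A minimal sketch of the kind of pipeline the abstract describes, applying scikit-learn's NMF recursively to a TF-IDF matrix to grow a topic tree. The corpus variable `abstracts`, the topic counts, and the stopping rule are illustrative assumptions, not the authors' implementation.

    # Hierarchical NMF sketch: factorize, assign each document to its
    # strongest topic, then recurse within each topic's documents.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF
    import numpy as np

    def hierarchical_nmf(tfidf, n_topics, depth, min_docs=10):
        """Recursively factorize a document-term matrix into a topic tree."""
        if depth == 0 or tfidf.shape[0] < min_docs:
            return None
        model = NMF(n_components=n_topics, init="nndsvd", random_state=0)
        W = model.fit_transform(tfidf)    # document-topic weights
        labels = W.argmax(axis=1)         # assign each doc to its top topic
        tree = {}
        for k in range(n_topics):
            members = np.where(labels == k)[0]
            tree[k] = {
                # term indices; map to words via vectorizer.get_feature_names_out()
                "top_terms": model.components_[k].argsort()[::-1][:10],
                "docs": members,
                "children": hierarchical_nmf(tfidf[members], n_topics,
                                             depth - 1, min_docs),
            }
        return tree

    # Illustrative call: 8 top-level topics with one level of subtopics,
    # mirroring the granularity reported in the abstract.
    vectorizer = TfidfVectorizer(max_features=5000, stop_words="english")
    X = vectorizer.fit_transform(abstracts)   # `abstracts`: list of article texts
    topic_tree = hierarchical_nmf(X, n_topics=8, depth=2)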
2019
NeuralClassifier: An Open-source Neural Hierarchical Multi-label Text Classification Toolkit
Liqun Liu | Funan Mu | Pengyu Li | Xin Mu | Jing Tang | Xingsheng Ai | Ran Fu | Lifeng Wang | Xing Zhou
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
In this paper, we introduce NeuralClassifier, a toolkit for neural hierarchical multi-label text classification. NeuralClassifier is designed for quick implementation of neural models for hierarchical multi-label classification tasks, which are more challenging and more common in real-world scenarios. A salient feature is that NeuralClassifier currently provides a variety of text encoders, including FastText, TextCNN, TextRNN, RCNN, VDCNN, DPCNN, DRNN, AttentiveConvNet, and a Transformer encoder. It also supports other text classification scenarios, including binary-class and multi-class classification. Built on PyTorch, the toolkit computes its core operations in batches, making it efficient under GPU acceleration. Experiments show that models built in our toolkit achieve performance comparable to results reported in the literature.
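The toolkit itself is driven by its own configuration files and scripts (see the project repository for its actual interface). As a generic illustration of the encoder-plus-per-label-sigmoid setup that multi-label classification uses, here is a minimal TextCNN sketch in plain PyTorch; all names and sizes are assumptions, not NeuralClassifier's API.

    # Minimal TextCNN for multi-label classification: convolutions over
    # embeddings, max-pooling per filter width, one logit per label.
    import torch
    import torch.nn as nn

    class TextCNN(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, num_labels=50,
                     kernel_sizes=(2, 3, 4), num_filters=100):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.convs = nn.ModuleList(
                nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)
            self.fc = nn.Linear(num_filters * len(kernel_sizes), num_labels)

        def forward(self, token_ids):                  # (batch, seq_len)
            x = self.embed(token_ids).transpose(1, 2)  # (batch, embed, seq)
            pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
            return self.fc(torch.cat(pooled, dim=1))   # raw logits per label

    # Multi-label training uses an independent sigmoid per label rather
    # than a softmax over labels:
    model = TextCNN(vocab_size=30000)
    loss_fn = nn.BCEWithLogitsLoss()                   # one binary decision per label
    logits = model(torch.randint(0, 30000, (8, 64)))   # dummy batch of 8 documents
    loss = loss_fn(logits, torch.zeros(8, 50))         # multi-hot label targets

The per-label sigmoid is what distinguishes the multi-label setting from ordinary multi-class classification: each label is an independent binary decision, so a document can belong to several nodes of the label hierarchy at once.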