Qiaozhu Mei


2021

Explainable Prediction of Text Complexity: The Missing Preliminaries for Text Simplification
Cristina Garbacea | Mengtian Guo | Samuel Carton | Qiaozhu Mei
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Text simplification reduces the language complexity of professional content for accessibility purposes. End-to-end neural network models have been widely adopted to directly generate the simplified version of input text, usually functioning as a black box. We show that text simplification can be decomposed into a compact pipeline of tasks to ensure the transparency and explainability of the process. The first two steps in this pipeline are often neglected: 1) predicting whether a given piece of text needs to be simplified, and 2) if so, identifying the complex parts of the text. The two tasks can be solved separately, using either lexical or deep learning methods, or jointly. By simply applying explainable complexity prediction as a preliminary step, the out-of-sample performance of state-of-the-art, black-box simplification models can be improved by a large margin.
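
A minimal sketch of how such a pipeline could be wired together, with crude lexical rules standing in for the paper's trained lexical or deep-learning predictors; every function name and threshold below is hypothetical.

    # Sketch of the decomposed pipeline: simplify only text that is first predicted to be
    # complex. needs_simplification() and identify_complex_spans() use crude lexical rules
    # as stand-ins for trained classifiers; black_box_simplify is any end-to-end model.

    def needs_simplification(sentence: str) -> bool:
        # Step 1: decide whether the text needs simplification at all.
        # Average word length is a placeholder for a trained complexity predictor.
        words = sentence.split()
        return bool(words) and sum(len(w) for w in words) / len(words) > 5.5

    def identify_complex_spans(sentence: str) -> list:
        # Step 2: flag the complex parts of the text; here, unusually long words.
        return [w for w in sentence.split() if len(w) > 8]

    def simplify_corpus(sentences, black_box_simplify):
        # Only sentences predicted to be complex are handed to the black-box simplifier;
        # text that is already simple passes through unchanged.
        return [black_box_simplify(s) if needs_simplification(s) else s for s in sentences]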

2020

UMSIForeseer at SemEval-2020 Task 11: Propaganda Detection by Fine-Tuning BERT with Resampling and Ensemble Learning
Yunzhe Jiang | Cristina Garbacea | Qiaozhu Mei
Proceedings of the Fourteenth Workshop on Semantic Evaluation

We describe our participation in the SemEval-2020 “Detection of Propaganda Techniques in News Articles” - Techniques Classification (TC) task, designed to categorize textual fragments into one of the 14 given propaganda techniques. Our solution leverages pre-trained BERT models. We present our model implementations, evaluation results, and an analysis of these results. We also investigate the potential of combining language models with resampling and ensemble learning methods to deal with data imbalance and improve performance.
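
A minimal sketch of the resampling and soft-voting ensemble ideas, assuming the fine-tuned BERT models are available as callables that return per-technique probabilities; the helper names are hypothetical and do not reproduce the submitted system.

    import random
    from collections import defaultdict

    import numpy as np

    # Sketch of the two imbalance-handling ideas mentioned above: resampling the training
    # data and ensembling several classifiers. The fine-tuned BERT models themselves are
    # abstracted away as callables that return a vector of per-technique probabilities.

    def oversample(examples, labels):
        # Randomly oversample minority techniques until every class is as frequent
        # as the largest one.
        by_label = defaultdict(list)
        for x, y in zip(examples, labels):
            by_label[y].append(x)
        target = max(len(xs) for xs in by_label.values())
        resampled = []
        for y, xs in by_label.items():
            resampled += [(x, y) for x in xs]
            resampled += [(random.choice(xs), y) for _ in range(target - len(xs))]
        random.shuffle(resampled)
        return resampled

    def ensemble_predict(models, fragment):
        # Soft voting: average the probability vectors of the ensemble members and
        # return the index of the winning propaganda technique.
        probs = np.mean([model(fragment) for model in models], axis=0)
        return int(np.argmax(probs))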

2019

The Strength of the Weakest Supervision: Topic Classification Using Class Labels
Jiatong Li | Kai Zheng | Hua Xu | Qiaozhu Mei | Yue Wang
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop

When developing topic classifiers for real-world applications, we begin by defining a set of meaningful topic labels. Ideally, an intelligent classifier can understand these labels right away and start classifying documents. Indeed, a human can confidently tell if an article is about science, politics, sports, or none of the above, after knowing just the class labels. We study the problem of training an initial topic classifier using only class labels. We investigate existing techniques for solving this problem and propose a simple but effective approach. Experiments on a variety of topic classification data sets show that learning from class labels can save significant initial labeling effort, essentially providing a “free” warm start to the topic classifier.
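
A minimal sketch of classification from class labels alone, using a simple TF-IDF word-overlap baseline rather than the approach proposed in the paper; the labels and toy documents are illustrative only.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Sketch of the weakest-supervision setting: the class labels are the only "training
    # data", and each document is assigned to the label it is most similar to in a shared
    # TF-IDF space. This word-overlap baseline only illustrates the setting; the toy
    # documents deliberately contain the label words.

    labels = ["science", "politics", "sports"]
    documents = [
        "The prime minister's politics dominated the evening news.",
        "A new science study reports progress in quantum computing.",
    ]

    vectorizer = TfidfVectorizer()
    # Fit on labels and documents together so both live in the same vocabulary.
    matrix = vectorizer.fit_transform(labels + documents)
    label_vecs, doc_vecs = matrix[: len(labels)], matrix[len(labels):]

    similarity = cosine_similarity(doc_vecs, label_vecs)
    predictions = [labels[i] for i in similarity.argmax(axis=1)]
    print(predictions)  # ['politics', 'science']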

Judge the Judges: A Large-Scale Evaluation Study of Neural Language Models for Online Review Generation
Cristina Garbacea | Samuel Carton | Shiyan Yan | Qiaozhu Mei
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

We conduct a large-scale, systematic study to evaluate the existing evaluation methods for natural language generation in the context of generating online product reviews. We compare human evaluators with a variety of automated evaluation procedures, including discriminative evaluators that measure how well machine-generated text can be distinguished from human-written text, as well as word-overlap metrics that assess how similar the generated text is to human-written references. We determine to what extent these different evaluators agree on the ranking of a dozen state-of-the-art generators for online product reviews. We find that human evaluators do not correlate well with discriminative evaluators, raising the bigger question of whether adversarial accuracy is the correct objective for natural language generation. In general, distinguishing machine-generated text is challenging even for human evaluators, and human decisions correlate better with lexical overlap. We find lexical diversity an intriguing metric that is indicative of the assessments of different evaluators. A post-experiment survey of participants provides insights into how to evaluate and improve the quality of natural language generation systems.
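
A minimal sketch of the two evaluator families being compared, using a bag-of-words logistic-regression discriminator and a unigram-overlap score as simplified stand-ins for the evaluators studied in the paper.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Sketch of the two families of automated evaluators compared in the study:
    # (1) a discriminative evaluator scoring how separable machine text is from human
    # text, and (2) a word-overlap score against human references. Both are simplified
    # stand-ins for the stronger evaluators (neural discriminators, BLEU, etc.).

    def adversarial_accuracy(human_texts, machine_texts):
        # Higher accuracy means the generator is easier to tell apart from humans.
        texts = list(human_texts) + list(machine_texts)
        targets = [0] * len(human_texts) + [1] * len(machine_texts)
        features = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)
        classifier = LogisticRegression(max_iter=1000)
        return cross_val_score(classifier, features, targets, cv=3).mean()

    def unigram_overlap(generated, reference):
        # Crude lexical-overlap score: fraction of generated tokens found in the reference.
        gen, ref = generated.lower().split(), set(reference.lower().split())
        return sum(token in ref for token in gen) / max(len(gen), 1)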

2018

Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts
Samuel Carton | Qiaozhu Mei | Paul Resnick
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We introduce an adversarial method for producing high-recall explanations of neural text classifier decisions. Building on an existing architecture for extractive explanations via hard attention, we add an adversarial layer which scans the residual of the attention for remaining predictive signal. Motivated by the important domain of detecting personal attacks in social media comments, we additionally demonstrate the importance of manually setting a semantically appropriate “default” behavior for the model by explicitly manipulating its bias term. We develop a validation set of human-annotated personal attacks to evaluate the impact of these changes.
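
A minimal sketch of the adversarial objective, assuming a soft selection mask in place of hard attention and leaving the predictor and adversary networks abstract; it illustrates the training signal rather than the paper's implementation.

    import torch
    import torch.nn as nn

    # Conceptual sketch of the adversarial objective described above: an extractor selects
    # a token mask, a predictor classifies from the selected tokens, and an adversary scans
    # the residual (the unselected tokens) for leftover predictive signal. Penalizing the
    # adversary's success pushes the extracted rationale toward high recall. The soft mask
    # and the module interfaces are simplified stand-ins for the hard-attention architecture.

    class Extractor(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.score = nn.Linear(dim, 1)

        def forward(self, token_embeddings):
            # Per-token selection probabilities; the paper uses hard (binary) attention.
            return torch.sigmoid(self.score(token_embeddings)).squeeze(-1)

    def adversarial_loss(extractor, predictor, adversary, embeddings, label, lam=1.0):
        # predictor and adversary are classifiers mapping (batch, tokens, dim) to logits.
        mask = extractor(embeddings)                              # (batch, tokens)
        selected = embeddings * mask.unsqueeze(-1)
        residual = embeddings * (1.0 - mask).unsqueeze(-1)
        loss_pred = nn.functional.cross_entropy(predictor(selected), label)
        loss_adv = nn.functional.cross_entropy(adversary(residual), label)
        # The extractor and predictor minimize this objective; the adversary is trained
        # separately to minimize loss_adv on the residual.
        return loss_pred - lam * loss_adv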

2011

Rumor has it: Identifying Misinformation in Microblogs
Vahed Qazvinian | Emily Rosengren | Dragomir R. Radev | Qiaozhu Mei
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

Simultaneous Similarity Learning and Feature-Weight Learning for Document Clustering
Pradeep Muthukrishnan | Dragomir Radev | Qiaozhu Mei
Proceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing

2010

Cross-Lingual Latent Topic Extraction
Duo Zhang | Qiaozhu Mei | ChengXiang Zhai
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

Context Comparison of Bursty Events in Web Search and Online Media
Yunliang Jiang | Cindy Xide Lin | Qiaozhu Mei
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

2008

Generating Impact-Based Summaries for Scientific Literature
Qiaozhu Mei | ChengXiang Zhai
Proceedings of ACL-08: HLT

2006

Language Model Information Retrieval with Document Expansion
Tao Tao | Xuanhui Wang | Qiaozhu Mei | ChengXiang Zhai
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

2004

From Text to Exhibitions: A New Approach for E-Learning on Language and Literature based on Text Mining
Qiaozhu Mei | Junfeng Hu
Proceedings of the Workshop on eLearning for Computational Linguistics and Computational Linguistics for eLearning