David Kauchak


Complex Word Identification in Vietnamese: Towards Vietnamese Text Simplification
Phuong Nguyen | David Kauchak
Proceedings of the Workshop on Multilingual Information Access (MIA)

Text Simplification has been an extensively researched problem in English, but has not been investigated in Vietnamese. We focus on the Vietnamese-specific Complex Word Identification task, often the first step in Lexical Simplification (Shardlow, 2013). We examine three different Vietnamese datasets constructed for other Natural Language Processing tasks and show that, like in other languages, frequency is a strong signal in determining whether a word is complex, with a mean accuracy of 86.87%. Across the datasets, we find that the 10% most frequent words in many corpora can be labelled as simple, and the rest as complex, though this is more variable for smaller corpora. We also examine how human annotators perform at this task. Given the subjective nature of the task, there is a fair amount of variability in which words are seen as difficult, though majority results are more consistent.
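The frequency baseline described above can be sketched as a simple threshold over corpus rank: label the most frequent word types as simple and the rest as complex. This is a minimal illustration of that idea, not the paper's exact implementation; the function and variable names are invented for this sketch, and only the 10% cutoff comes from the abstract.

```python
from collections import Counter

def frequency_labels(corpus_tokens, simple_fraction=0.1):
    """Label the top `simple_fraction` most frequent word types as
    'simple' and all remaining types as 'complex' (frequency baseline)."""
    counts = Counter(corpus_tokens)
    ranked = [word for word, _ in counts.most_common()]
    cutoff = max(1, int(len(ranked) * simple_fraction))
    simple = set(ranked[:cutoff])
    return {word: ("simple" if word in simple else "complex")
            for word in ranked}

# Toy example: "the" is the most frequent type, so it is labelled simple.
tokens = ["the", "the", "the", "cat", "cat", "perspicacious"]
labels = frequency_labels(tokens, simple_fraction=0.4)
```

In practice the frequency counts would come from a large external corpus rather than the text being labelled, which is what makes the cutoff sensitive to corpus size, as the abstract notes.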


Flesch-Kincaid is Not a Text Simplification Evaluation Metric
Teerapaun Tanprasert | David Kauchak
Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)

Sentence-level text simplification is currently evaluated using both automated metrics and human evaluation. For automatic evaluation, a combination of metrics is usually employed to evaluate different aspects of the simplification. Flesch-Kincaid Grade Level (FKGL) is one metric that has been regularly used to measure the readability of system output. In this paper, we argue that FKGL should not be used to evaluate text simplification systems. We provide experimental analyses on recent system output showing that the FKGL score can easily be manipulated to improve the score dramatically with only minor impact on other automated metrics (BLEU and SARI). Instead of using FKGL, we suggest that the component statistics, along with others, be used for post-hoc analysis to understand system behavior.
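FKGL is a fixed linear combination of two component statistics: average sentence length (words per sentence) and average word length (syllables per word). The standard published formula makes the manipulability argument concrete: splitting sentences mechanically lowers the score without simplifying anything. A minimal sketch (the formula is standard; the example counts are invented):

```python
def fkgl(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid Grade Level: a linear combination of average
    sentence length and average syllables per word."""
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)

# Same 20 words and 30 syllables, but split into four sentences instead
# of one: the grade level drops sharply with no change in vocabulary.
original = fkgl(total_words=20, total_sentences=1, total_syllables=30)
split = fkgl(total_words=20, total_sentences=4, total_syllables=30)
```

Here `original` is about 9.9 and `split` about 4.1, illustrating how a system can "improve" FKGL by several grade levels through sentence splitting alone, which is why the paper recommends reporting the component statistics for post-hoc analysis instead.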

Improving Human Text Simplification with Sentence Fusion
Max Schwarzer | Teerapaun Tanprasert | David Kauchak
Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15)

The quality of fully automated text simplification systems is not good enough for use in real-world settings; instead, human simplifications are used. In this paper, we examine how to improve the cost and quality of human simplifications by leveraging crowdsourcing. We introduce a graph-based sentence fusion approach to augment human simplifications and a reranking approach to both select high quality simplifications and to allow for targeting simplifications with varying levels of simplicity. Using the Newsela dataset (Xu et al., 2015) we show consistent improvements over experts at varying simplification levels and find that the additional sentence fusion simplifications allow for simpler output than the human simplifications alone.


AutoMeTS: The Autocomplete for Medical Text Simplification
Hoang Van | David Kauchak | Gondy Leroy
Proceedings of the 28th International Conference on Computational Linguistics

The goal of text simplification (TS) is to transform difficult text into a version that is easier to understand and more broadly accessible to a wide variety of readers. In some domains, such as healthcare, fully automated approaches cannot be used since information must be accurately preserved. Instead, semi-automated approaches can be used that assist a human writer in simplifying text faster and at a higher quality. In this paper, we examine the application of autocomplete to text simplification in the medical domain. We introduce a new parallel medical data set consisting of aligned English Wikipedia with Simple English Wikipedia sentences and examine the application of pretrained neural language models (PNLMs) on this dataset. We compare four PNLMs (BERT, RoBERTa, XLNet, and GPT-2), and show how the additional context of the sentence to be simplified can be incorporated to achieve better results (6.17% absolute improvement over the best individual model). We also introduce an ensemble model that combines the four PNLMs and outperforms the best individual model by 2.1%, resulting in an overall word prediction accuracy of 64.52%.
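One way to combine next-word predictions from several language models, as the ensemble above does, is to score each candidate word across the models' ranked lists. This sketch uses reciprocal-rank voting as an illustration; the paper's actual ensembling method may differ, and the function names and toy predictions are invented here.

```python
from collections import Counter

def ensemble_predict(ranked_predictions):
    """Combine ranked next-word candidate lists from several models by
    summed reciprocal-rank voting and return the top-scoring word."""
    scores = Counter()
    for ranking in ranked_predictions:
        for rank, word in enumerate(ranking, start=1):
            scores[word] += 1.0 / rank
    return scores.most_common(1)[0][0]

# Toy candidate lists from three hypothetical models: "the" is ranked
# first by two models, so it wins the vote.
predictions = [["the", "a"], ["the", "an"], ["a", "the"]]
best = ensemble_predict(predictions)  # "the"
```

Voting-style ensembles like this are a common way to let models compensate for each other's errors, consistent with the abstract's finding that the ensemble outperforms the best individual model.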


Pomona at SemEval-2016 Task 11: Predicting Word Complexity Based on Corpus Frequency
David Kauchak
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)


Learning a Lexical Simplifier Using Wikipedia
Colby Horn | Cathryn Manduca | David Kauchak
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)


Sentence Simplification as Tree Transduction
Dan Feblowitz | David Kauchak
Proceedings of the Second Workshop on Predicting and Improving Text Readability for Target Reader Populations

Improving Text Simplification Language Modeling Using Unsimplified Text Data
David Kauchak
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)


Learning to Simplify Sentences Using Wikipedia
Will Coster | David Kauchak
Proceedings of the Workshop on Monolingual Text-To-Text Generation

Simple English Wikipedia: A New Text Simplification Task
William Coster | David Kauchak
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies


Paraphrasing for Automatic Evaluation
David Kauchak | Regina Barzilay
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference


Feature-Based Segmentation of Narrative Documents
David Kauchak | Francine Chen
Proceedings of the ACL Workshop on Feature Engineering for Machine Learning in Natural Language Processing