Supervised and Unsupervised Neural Approaches to Text Readability

Matej Martinc, Senja Pollak, Marko Robnik-Šikonja


Abstract
Abstract We present a set of novel neural supervised and unsupervised approaches for determining the readability of documents. In the unsupervised setting, we leverage neural language models, whereas in the supervised setting, three different neural classification architectures are tested. We show that the proposed neural unsupervised approach is robust, transferable across languages, and allows adaptation to a specific readability task and data set. By systematic comparison of several neural architectures on a number of benchmark and new labeled readability data sets in two languages, this study also offers a comprehensive analysis of different neural approaches to readability classification. We expose their strengths and weaknesses, compare their performance to current state-of-the-art classification approaches to readability, which in most cases still rely on extensive feature engineering, and propose possibilities for improvements.
Anthology ID:
2021.cl-1.6
Volume:
Computational Linguistics, Volume 47, Issue 1 - March 2021
Month:
March
Year:
2021
Address:
Cambridge, MA
Venue:
CL
Publisher:
MIT Press
Pages:
141–179
URL:
https://aclanthology.org/2021.cl-1.6
DOI:
10.1162/coli_a_00398
Cite (ACL):
Matej Martinc, Senja Pollak, and Marko Robnik-Šikonja. 2021. Supervised and Unsupervised Neural Approaches to Text Readability. Computational Linguistics, 47(1):141–179.
Cite (Informal):
Supervised and Unsupervised Neural Approaches to Text Readability (Martinc et al., CL 2021)
PDF:
https://preview.aclanthology.org/update-css-js/2021.cl-1.6.pdf
Code:
additional community code
Data:
Newsela, OneStopEnglish