Wenbiao Li
2024
Going Beyond Passages: Readability Assessment for Book-level Long Texts
Wenbiao Li | Rui Sun | Tianyi Zhang | Yunfang Wu
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Readability assessment for book-level long text is widely needed in real educational applications. However, most current research focuses on passage-level readability assessment, and little work has been done to process ultra-long texts. To better process the long sequences of book texts and to enhance pretrained models with difficulty knowledge, we propose a novel model, DSDR, with difficulty-aware segment pre-training and difficulty multi-view representation. Specifically, we split all books into multiple fixed-length segments and employ unsupervised clustering to obtain difficulty-aware segments, which are used to re-train the pretrained model to learn difficulty knowledge. Accordingly, a long text is represented by averaging multiple vectors of segments with varying difficulty levels. We construct a new dataset of Graded Children's Books to evaluate model performance. Our proposed model achieves promising results, outperforming both the traditional SVM classifier and several popular pretrained models. In addition, our work establishes a new prototype for book-level readability assessment, which provides an important benchmark for related research in future work.
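The representation pipeline the abstract describes (fixed-length segmentation, unsupervised clustering into difficulty levels, then averaging segment vectors per level) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random stand-in embeddings, the character-based segment length, and the function names are all assumptions; the actual model uses a pretrained encoder re-trained on difficulty-aware segments.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_segments(text, seg_len=512):
    # Split a book into fixed-length segments (character-based here;
    # the paper operates on token sequences).
    return [text[i:i + seg_len] for i in range(0, len(text), seg_len)]

def difficulty_multi_view(seg_vecs, n_levels=3, seed=0):
    # Cluster segment vectors into difficulty levels without supervision,
    # then represent the book by averaging the vectors within each level.
    labels = KMeans(n_clusters=n_levels, n_init=10,
                    random_state=seed).fit_predict(seg_vecs)
    views = [seg_vecs[labels == k].mean(axis=0) for k in range(n_levels)]
    return np.stack(views)  # shape: (n_levels, embedding_dim)

# Toy run with random stand-in embeddings (assumption: a real encoder
# would produce these vectors from the difficulty-aware segments).
rng = np.random.default_rng(0)
segments = split_segments("x" * 2048, seg_len=512)   # 4 segments
vecs = rng.normal(size=(len(segments), 8))
book_repr = difficulty_multi_view(vecs, n_levels=2)
print(book_repr.shape)  # (2, 8)
```

The multi-view matrix (one averaged vector per difficulty level) would then feed a classifier head that predicts the book's grade level.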
2022
A Unified Neural Network Model for Readability Assessment with Feature Projection and Length-Balanced Loss
Wenbiao Li | Ziyang Wang | Yunfang Wu
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Readability assessment is a basic research task in the field of education. Traditional methods mainly employ machine learning classifiers with hundreds of linguistic features. Although deep learning models have become the prominent approach for almost all NLP tasks, they are less explored for readability assessment. In this paper, we propose a BERT-based model with feature projection and length-balanced loss (BERT-FP-LBL) to determine the difficulty level of a given text. First, we introduce topic features guided by difficulty knowledge to complement the traditional linguistic features. From the linguistic features, we extract genuinely useful orthogonal features to supplement BERT representations by means of projection filtering. Furthermore, we design a length-balanced loss to handle the greatly varying length distribution of the readability data. We conduct experiments on three English benchmark datasets and one Chinese dataset, and the experimental results show that our proposed model achieves significant improvements over baseline models. Interestingly, our proposed model achieves results comparable with those of human experts in the consistency test.
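The "orthogonal features via projection filtering" idea in the abstract can be illustrated with a simple vector projection: remove from the linguistic feature vector the component already captured by the BERT representation, keeping only the complementary part. This is a hedged sketch of the underlying projection operation, not the paper's exact filtering procedure; the vectors and dimensions are toy assumptions.

```python
import numpy as np

def orthogonal_component(feat, bert_vec):
    # Project the linguistic feature vector onto the (unit-normalized)
    # BERT representation and subtract that projection, leaving only the
    # information orthogonal to what BERT already encodes.
    b = bert_vec / np.linalg.norm(bert_vec)
    return feat - np.dot(feat, b) * b

feat = np.array([3.0, 4.0])   # toy linguistic feature vector
bert = np.array([1.0, 0.0])   # toy BERT representation
orth = orthogonal_component(feat, bert)
print(orth)  # [0. 4.]
```

The orthogonal component is then concatenated with the BERT representation, so the classifier sees complementary rather than redundant evidence.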