Alison Chi
2023
BLESS: Benchmarking Large Language Models on Sentence Simplification
Tannon Kew | Alison Chi | Laura Vásquez-Rodríguez | Sweta Agrawal | Dennis Aumiller | Fernando Alva-Manchego | Matthew Shardlow
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
We present BLESS, a comprehensive performance benchmark of the most recent state-of-the-art Large Language Models (LLMs) on the task of text simplification (TS). We examine how well off-the-shelf LLMs can solve this challenging task, assessing a total of 44 models, differing in size, architecture, pre-training methods, and accessibility, on three test sets from different domains (Wikipedia, news, and medical) under a few-shot setting. Our analysis considers a suite of automatic metrics, as well as a large-scale quantitative investigation into the types of common edit operations performed by the different models. Furthermore, we perform a manual qualitative analysis on a subset of model outputs to better gauge the quality of the generated simplifications. Our evaluation indicates that the best LLMs, despite not being trained on TS, perform comparably with state-of-the-art TS baselines. Additionally, we find that certain LLMs demonstrate a greater range and diversity of edit operations. Our performance benchmark will be available as a resource for the development of future TS methods and evaluation metrics.
Learning to Paraphrase Sentences to Different Complexity Levels
Alison Chi | Li-Kuang Chen | Yi-Chen Chang | Shu-Hui Lee | Jason S. Chang
Transactions of the Association for Computational Linguistics, Volume 11
While sentence simplification is an active research topic in NLP, its adjacent tasks of sentence complexification and same-level paraphrasing are not. To train models on all three tasks, we present two new unsupervised datasets. We compare these datasets, one labeled by a weak classifier and the other by a rule-based approach, with a single supervised dataset. Using these three datasets for training, we perform extensive experiments on both multitasking and prompting strategies. Compared to other systems trained on unsupervised parallel data, models trained on our weak classifier labeled dataset achieve state-of-the-art performance on the ASSET simplification benchmark. Our models also outperform previous work on sentence-level targeting. Finally, we establish how a handful of Large Language Models perform on these tasks under a zero-shot setting.