Anthony Baez


2023

LSLlama: Fine-Tuned LLaMA for Lexical Simplification
Anthony Baez | Horacio Saggion
Proceedings of the Second Workshop on Text Simplification, Accessibility and Readability

Generative Large Language Models (LLMs), such as GPT-3, have become increasingly effective and versatile in natural language processing (NLP) tasks. One such task is Lexical Simplification (LS), where state-of-the-art methods involve complex, multi-step pipelines combining deep learning and non-deep learning components. LLaMA, an LLM with full research access, holds unique potential for the adaptation of the entire LS pipeline. This paper details the process of fine-tuning LLaMA to create LSLlama, which performs comparably to the previous LS baseline models LSBert and UniHD.
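Fine-tuning a causal LLM for lexical simplification typically means casting each annotated instance (a sentence, a complex word, and gold simpler substitutes) as a prompt/completion pair. Below is a minimal, hypothetical sketch of such data formatting; the instruction template, helper name, and example sentence are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: formatting lexical simplification (LS) annotations
# as prompt/completion pairs for supervised fine-tuning of a causal LLM.
# The template and examples here are assumptions for illustration only.

def make_ls_example(sentence: str, complex_word: str,
                    substitutes: list[str]) -> dict:
    """Turn one LS annotation into a prompt/completion training pair."""
    prompt = (
        f"Sentence: {sentence}\n"
        f'Give simpler substitutes for the word "{complex_word}":'
    )
    # The model is trained to generate the gold substitutes as plain text.
    completion = ", ".join(substitutes)
    return {"prompt": prompt, "completion": completion}

example = make_ls_example(
    "The committee will convene next week.",
    "convene",
    ["meet", "gather", "assemble"],
)
print(example["completion"])  # → meet, gather, assemble
```

In this framing, a single fine-tuned model replaces the separate generation, ranking, and filtering stages of pipeline systems such as LSBert.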