@inproceedings{baez-saggion-2023-lsllama,
    title = "{LSL}lama: Fine-Tuned {LL}a{MA} for Lexical Simplification",
    author = "Baez, Anthony  and
      Saggion, Horacio",
    editor = "{\v{S}}tajner, Sanja  and
      Saggion, Horacio  and
      Shardlow, Matthew  and
      Alva-Manchego, Fernando",
    booktitle = "Proceedings of the Second Workshop on Text Simplification, Accessibility and Readability",
    month = sep,
    year = "2023",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd., Shoumen, Bulgaria",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.tsar-1.10/",
    pages = "102--108",
    abstract = "Generative Large Language Models (LLMs), such as GPT-3, have become increasingly effective and versatile in natural language processing (NLP) tasks. One such task is Lexical Simplification, where state-of-the-art methods involve complex, multi-step processes which can use both deep learning and non-deep learning processes. LLaMA, an LLM with full research access, holds unique potential for the adaption of the entire LS pipeline. This paper details the process of fine-tuning LLaMA to create LSLlama, which performs comparably to previous LS baseline models LSBert and UniHD."
}