Abstract
Generative Large Language Models (LLMs), such as GPT-3, have become increasingly effective and versatile in natural language processing (NLP) tasks. One such task is Lexical Simplification (LS), where state-of-the-art methods involve complex, multi-step pipelines that can combine both deep learning and non-deep-learning components. LLaMA, an LLM with full research access, holds unique potential for the adaptation of the entire LS pipeline. This paper details the process of fine-tuning LLaMA to create LSLlama, which performs comparably to the previous LS baseline models LSBert and UniHD.
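The abstract frames the whole LS pipeline as a single generation task learned by fine-tuning. As a rough illustration of what such fine-tuning can look like, the sketch below trains a causal LLaMA checkpoint on prompt-completion pairs (sentence + complex word → simpler substitute) with Hugging Face Transformers. The checkpoint name, prompt template, toy data, and hyperparameters are all illustrative assumptions, not the configuration reported in the paper.

```python
# A minimal sketch, assuming LS is cast as causal-LM fine-tuning.
# Checkpoint name, prompt format, data, and hyperparameters are hypothetical.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "huggyllama/llama-7b"  # placeholder: any LLaMA weights you can access

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA ships without a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Toy LS pairs: a sentence with a complex word, plus a simpler substitute.
examples = [
    {
        "sentence": "The committee reached a unanimous verdict.",
        "complex_word": "unanimous",
        "simple_word": "agreed",
    },
]

def to_features(example):
    # Frame simplification as generation: the model learns to complete
    # the prompt with the simpler substitute.
    text = (
        f"Sentence: {example['sentence']}\n"
        f"Complex word: {example['complex_word']}\n"
        f"Simpler word: {example['simple_word']}{tokenizer.eos_token}"
    )
    return tokenizer(text, truncation=True, max_length=128)

dataset = Dataset.from_list(examples).map(
    to_features, remove_columns=["sentence", "complex_word", "simple_word"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lsllama-sketch",
        per_device_train_batch_size=1,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=dataset,
    # mlm=False yields next-token (causal) labels with padding masked out.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, full fine-tuning of a 7B-parameter model needs substantial GPU memory; parameter-efficient methods such as LoRA are a common substitute, though whether the paper used one is not stated here.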
- Anthology ID: 2023.tsar-1.10
- Volume: Proceedings of the Second Workshop on Text Simplification, Accessibility and Readability
- Month: September
- Year: 2023
- Address: Varna, Bulgaria
- Editors: Sanja Štajner, Horacio Saggion, Matthew Shardlow, Fernando Alva-Manchego
- Venues: TSAR | WS
- Publisher: INCOMA Ltd., Shoumen, Bulgaria
- Pages: 102–108
- URL: https://aclanthology.org/2023.tsar-1.10
- Cite (ACL): Anthony Baez and Horacio Saggion. 2023. LSLlama: Fine-Tuned LLaMA for Lexical Simplification. In Proceedings of the Second Workshop on Text Simplification, Accessibility and Readability, pages 102–108, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
- Cite (Informal): LSLlama: Fine-Tuned LLaMA for Lexical Simplification (Baez & Saggion, TSAR-WS 2023)
- PDF: https://preview.aclanthology.org/proper-vol2-ingestion/2023.tsar-1.10.pdf