Abstract
Our methodology centers on fine-tuning a large language model (LLM) with supervised learning to produce fictional text. The model was trained on a dataset built from a collection of public-domain books sourced from Project Gutenberg, which we thoroughly preprocessed. The final fictional text was generated in response to the set of prompts provided with the baseline. Our approach was evaluated with a combination of automatic and human assessments, giving a comprehensive picture of the model's performance.
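As a rough illustration of the pipeline the abstract describes, the sketch below fine-tunes a causal LM on plain-text books and then samples a continuation from a prompt. It is a minimal sketch, not the authors' actual setup: the base model (`gpt2`), the data path (`gutenberg_clean/*.txt`), the prompt, and all hyperparameters are illustrative assumptions, since the abstract does not specify them.

```python
# Hedged sketch of supervised fine-tuning for fiction generation.
# Model, paths, and hyperparameters are placeholders, not the paper's config.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "gpt2"  # assumption; the abstract does not name a base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Assume the cleaned Project Gutenberg books are plain-text files.
dataset = load_dataset("text", data_files={"train": "gutenberg_clean/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# For causal-LM fine-tuning, labels are the inputs themselves (mlm=False).
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="fiction-lm",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=5e-5,
)

Trainer(
    model=model, args=args, train_dataset=tokenized, data_collator=collator
).train()

# Generate fictional text in response to a prompt (hypothetical prompt).
prompt = "The lighthouse keeper had not spoken to anyone in years."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```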
- Anthology ID: 2024.inlg-genchal.14
- Volume: Proceedings of the 17th International Natural Language Generation Conference: Generation Challenges
- Month: September
- Year: 2024
- Address: Tokyo, Japan
- Editors: Simon Mille, Miruna-Adriana Clinciu
- Venue: INLG
- SIG: SIGGEN
- Publisher: Association for Computational Linguistics
- Pages: 123–127
- URL: https://preview.aclanthology.org/add_missing_videos/2024.inlg-genchal.14/
- Cite (ACL): Daria Seredina. 2024. A Report on LSG 2024: LLM Fine-Tuning for Fictional Stories Generation. In Proceedings of the 17th International Natural Language Generation Conference: Generation Challenges, pages 123–127, Tokyo, Japan. Association for Computational Linguistics.
- Cite (Informal): A Report on LSG 2024: LLM Fine-Tuning for Fictional Stories Generation (Seredina, INLG 2024)
- PDF: https://preview.aclanthology.org/add_missing_videos/2024.inlg-genchal.14.pdf