Amuro & Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models

Kaiser Sun, Mark Dredze


Abstract
Large language model development relies on the pre-train-then-align paradigm, in which the model is typically pre-trained on a large text corpus and then undergoes a tuning stage to align it with human preferences or downstream tasks. We investigate the relationship between pre-training and supervised fine-tuning by considering multiple tasks as well as different pre-trained model checkpoints. Our results on 18 datasets and two models suggest that i) although the model benefits significantly from supervised fine-tuning, it may forget previously known domain knowledge and tasks not seen during fine-tuning; ii) the model exhibits high sensitivity to evaluation prompts after supervised fine-tuning, but this sensitivity can be alleviated through further pre-training; iii) continual pre-training improves the model in a latent way that manifests after fine-tuning; iv) the model can already solve some tasks after pre-training, while fine-tuning most benefits datasets on which the model shows no capability during pre-training.
Anthology ID:
2025.repl4nlp-1.11
Volume:
Proceedings of the 10th Workshop on Representation Learning for NLP (RepL4NLP-2025)
Month:
May
Year:
2025
Address:
Albuquerque, NM
Editors:
Vaibhav Adlakha, Alexandra Chronopoulou, Xiang Lorraine Li, Bodhisattwa Prasad Majumder, Freda Shi, Giorgos Vernikos
Venues:
RepL4NLP | WS
Publisher:
Association for Computational Linguistics
Pages:
131–151
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.repl4nlp-1.11/
Cite (ACL):
Kaiser Sun and Mark Dredze. 2025. Amuro & Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models. In Proceedings of the 10th Workshop on Representation Learning for NLP (RepL4NLP-2025), pages 131–151, Albuquerque, NM. Association for Computational Linguistics.
Cite (Informal):
Amuro & Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models (Sun & Dredze, RepL4NLP 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.repl4nlp-1.11.pdf