Latent Traits and Cross-Task Transfer: Deconstructing Dataset Interactions in LLM Fine-tuning

Shambhavi Krishna, Haw-Shiuan Chang, Taesung Lee


Abstract
Large language models are increasingly deployed across diverse applications, often including tasks they have not encountered during training. Enumerating and obtaining high-quality training data for all such tasks is infeasible, so we often need to rely on transfer learning using datasets with different characteristics and anticipate out-of-distribution requests. Motivated by this practical need, we propose an analysis framework that builds a transfer learning matrix and applies dimensionality reduction to dissect these cross-task interactions. We train and analyze 10 models to identify latent abilities (e.g., Reasoning, Sentiment Classification, NLU, Arithmetic) and discover the side effects of transfer learning. Our findings reveal that performance improvements often defy explanations based on surface-level dataset similarity or source data quality. Instead, hidden statistical factors of the source dataset, such as class distribution and generation-length proclivities, alongside specific linguistic features, are more influential. This work offers insights into the complex dynamics of transfer learning, paving the way for more predictable and effective LLM adaptation.
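To make the framework described in the abstract concrete, here is a minimal sketch of the two core steps: building a transfer learning matrix (fine-tune on each source dataset, evaluate on each target task) and applying dimensionality reduction to surface latent abilities. The dataset names, matrix values, and use of PCA are illustrative assumptions, not the paper's actual data or exact method.

```python
# Hypothetical sketch: transfer matrix + dimensionality reduction.
# All names and numbers below are placeholders for illustration.
import numpy as np
from sklearn.decomposition import PCA

source_datasets = ["gsm8k", "sst2", "squad", "mnli"]               # fine-tuning sources (assumed)
target_tasks = ["arithmetic", "sentiment", "nlu", "reasoning"]     # evaluation targets (assumed)

# transfer[i, j] = performance delta on target task j after
# fine-tuning the base model on source dataset i (made-up values).
transfer = np.array([
    [0.12, -0.01, 0.03, 0.08],
    [-0.02, 0.15, 0.04, 0.01],
    [0.01, 0.03, 0.10, 0.05],
    [0.02, 0.02, 0.09, 0.11],
])

# Dimensionality reduction: each principal component groups target tasks
# whose scores move together, which can be read as a shared latent ability.
pca = PCA(n_components=2)
source_coords = pca.fit_transform(transfer)  # sources embedded in latent-ability space
print("explained variance:", pca.explained_variance_ratio_)
for name, coords in zip(source_datasets, source_coords):
    print(f"{name}: {coords.round(3)}")
```

Inspecting which sources cluster together in the reduced space, rather than comparing raw dataset similarity, is what lets this kind of analysis reveal the hidden statistical factors the paper points to.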
Anthology ID:
2025.starsem-1.18
Volume:
Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Lea Frermann, Mark Stevenson
Venue:
*SEM
Publisher:
Association for Computational Linguistics
Pages:
225–241
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.18/
Cite (ACL):
Shambhavi Krishna, Haw-Shiuan Chang, and Taesung Lee. 2025. Latent Traits and Cross-Task Transfer: Deconstructing Dataset Interactions in LLM Fine-tuning. In Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025), pages 225–241, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Latent Traits and Cross-Task Transfer: Deconstructing Dataset Interactions in LLM Fine-tuning (Krishna et al., *SEM 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.18.pdf