Yu Takagi
2025
Massive Supervised Fine-tuning Experiments Reveal How Data, Layer, and Training Factors Shape LLM Alignment Quality
Yuto Harada | Yusuke Yamauchi | Yusuke Oda | Yohei Oseki | Yusuke Miyao | Yu Takagi
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Supervised fine-tuning (SFT) is a critical step in aligning large language models (LLMs) with human instructions and values, yet many aspects of SFT remain poorly understood. We trained a wide range of base models on a variety of datasets including code generation, mathematical reasoning, and general-domain tasks, resulting in 1,000+ SFT models under controlled conditions. We then identified the dataset properties that matter most and examined the layer-wise modifications introduced by SFT. Our findings reveal that some training–task synergies persist across all models while others vary substantially, emphasizing the importance of model-specific strategies. Moreover, we demonstrate that perplexity consistently predicts SFT effectiveness, often surpassing superficial similarity between the training data and the benchmark, and that mid-layer weight changes correlate most strongly with performance gains. We release these 1,000+ SFT models and benchmark results to accelerate further research. All resources are available at https://github.com/llm-jp/massive-sft.
2024
Unveiling Multi-level and Multi-modal Semantic Representations in the Human Brain using Large Language Models
Yuko Nakagi | Takuya Matsuyama | Naoko Koide-Majima | Hiroto Q. Yamaguchi | Rieko Kubo | Shinji Nishimoto | Yu Takagi
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
In recent studies, researchers have used large language models (LLMs) to explore semantic representations in the brain; however, they have typically assessed different levels of semantic content, such as speech, objects, and stories, separately. In this study, we recorded brain activity using functional magnetic resonance imaging (fMRI) while participants viewed 8.3 hours of dramas and movies. We annotated these stimuli at multiple semantic levels, which enabled us to extract latent representations of LLMs for this content. Our findings demonstrate that LLMs predict human brain activity more accurately than traditional language models, particularly for complex background stories. Furthermore, we identify distinct brain regions associated with different semantic representations, including multi-modal vision-semantic representations, which highlights the importance of modeling multi-level and multi-modal semantic representations simultaneously. We will make our fMRI dataset publicly available to facilitate further research on aligning LLMs with human brain function.