AutoMixer: Checkpoint Artifacts as Automatic Data Mixers
Ernie Chang | Yang Li | Patrick Huber | Vish Vogeti | David Kant | Yangyang Shi | Vikas Chandra
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
In language model training, it is desirable to equip models with capabilities across a variety of tasks. However, it is not clear how to directly obtain the right data mixtures for these capabilities, as the relationship between data and tasks is difficult to model. In this work, we observe that checkpoint models exhibit emerging capabilities at different points in the training trajectory. The training process often saves these checkpoints as artifacts, yet they remain under-utilized as a source of in-training data signals. We identify these artifact models by their respective capabilities on the benchmarks and leverage them as data mixers, using their aggregated first-order influence approximation over source data. On eight reasoning benchmarks, we demonstrate that the proposed framework yields significant improvements in the pretraining setting, with accuracy increases of up to 1.93%. Overall, this demonstrates the potential of checkpoint models to enhance data quality and optimize data mixtures.
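To make the core idea concrete, below is a minimal sketch of using saved checkpoints as data mixers via a first-order (gradient dot-product) influence approximation, in the style of TracIn. All names (`flat_grad`, `influence_scores`, `mixture_weights`, `loss_fn`) and the softmax weighting are illustrative assumptions, not the authors' actual implementation.

```python
import torch


def flat_grad(model, loss):
    """Flatten the gradient of a scalar loss w.r.t. all trainable parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])


def influence_scores(checkpoints, loss_fn, source_batches, target_batch):
    """First-order influence of each source domain on a target benchmark,
    aggregated over the saved checkpoint models."""
    scores = torch.zeros(len(source_batches))
    for model in checkpoints:
        # Gradient of the target benchmark loss at this checkpoint.
        target_grad = flat_grad(model, loss_fn(model, target_batch))
        for i, batch in enumerate(source_batches):
            source_grad = flat_grad(model, loss_fn(model, batch))
            # First-order approximation: the alignment between source
            # and target gradients estimates the source's influence.
            scores[i] += torch.dot(source_grad, target_grad).item()
    return scores


def mixture_weights(scores, temperature=1.0):
    """Turn aggregated influence scores into a data-mixture distribution
    (a hypothetical choice; other normalizations are possible)."""
    return torch.softmax(scores / temperature, dim=0)
```

Under these assumptions, the resulting weights would be used to resample or reweight the source corpora for continued pretraining, with checkpoints that excel on a given benchmark contributing the data signal for that capability.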