@inproceedings{tastet-timiryasov-2024-babyllama,
    title = "{B}aby{L}lama-2: Ensemble-Distilled Models Consistently Outperform Teachers With Limited Data",
    author = "Tastet, Jean-Loup  and
      Timiryasov, Inar",
    editor = "Hu, Michael Y.  and
      Mueller, Aaron  and
      Ross, Candace  and
      Williams, Adina  and
      Linzen, Tal  and
      Zhuang, Chengxu  and
      Choshen, Leshem  and
      Cotterell, Ryan  and
      Warstadt, Alex  and
      Wilcox, Ethan Gotlieb",
    booktitle = "The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning",
    month = nov,
    year = "2024",
    address = "Miami, FL, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.conll-babylm.26/",
    pages = "292--301",
    abstract = "We present BabyLlama-2, a 345 million parameter model distillation-pretrained from two teachers on a 10 million word corpus for the BabyLM competition. On the BLiMP and SuperGLUE benchmarks, BabyLlama-2 outperforms baselines trained on both 10 and 100 million word datasets with the same data mix, as well as its teacher models. Through an extensive hyperparameter sweep, we demonstrate that the advantages of distillation cannot be attributed to suboptimal hyperparameter selection of the teachers. Our findings underscore the need for further investigation into distillation techniques, particularly in data-limited settings."
}