@inproceedings{ormazabal-etal-2023-comblm,
    title = "{C}omb{LM}: Adapting Black-Box Language Models through Small Fine-Tuned Models",
    author = "Ormazabal, Aitor  and
      Artetxe, Mikel  and
      Agirre, Eneko",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.emnlp-main.180/",
    doi = "10.18653/v1/2023.emnlp-main.180",
    pages = "2961--2974",
    abstract = "Methods for adapting language models (LMs) to new tasks and domains have traditionally assumed white-box access to the model, and work by modifying its parameters. However, this is incompatible with a recent trend in the field, where the highest quality models are only available as black-boxes through inference APIs. Even when the model weights are available, the computational cost of fine-tuning large LMs can be prohibitive for most practitioners. In this work, we present a lightweight method for adapting large LMs to new domains and tasks, assuming no access to their weights or intermediate activations. Our approach fine-tunes a small white-box LM and combines it with the large black-box LM at the probability level through a small network, learned on a small validation set. We validate our approach by adapting a large LM (OPT-30B) to several domains and a downstream task (machine translation), observing improved performance in all cases, of up to 9{\%}, while using a domain expert 23x smaller."
}