@inproceedings{hu-etal-2023-meta,
    title = "Meta-Learning Online Adaptation of Language Models",
    author = "Hu, Nathan  and
      Mitchell, Eric  and
      Manning, Christopher  and
      Finn, Chelsea",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.emnlp-main.268/",
    doi = "10.18653/v1/2023.emnlp-main.268",
    pages = "4418--4432",
    abstract = "Large language models encode impressively broad world knowledge in their parameters. However, the knowledge in static language models falls out of date, limiting the model{'}s effective ``shelf life.'' While online fine-tuning can reduce this degradation, we find that naively fine-tuning on a stream of documents leads to a low level of information uptake. We hypothesize that online fine-tuning does not sufficiently attend to important information. That is, the gradient signal from important tokens representing factual information is drowned out by the gradient from inherently noisy tokens, suggesting that a dynamic, context-aware learning rate may be beneficial. We therefore propose learning which tokens to upweight. We meta-train a small, autoregressive model to reweight the language modeling loss for each token during online fine-tuning, with the objective of maximizing the out-of-date base question-answering model{'}s ability to answer questions about a document after a single weighted gradient step. We call this approach Context-aware Meta-learned Loss Scaling (CaMeLS). Across three different distributions of documents, our experiments find that CaMeLS provides substantially improved information uptake on streams of thousands of documents compared with standard fine-tuning and baseline heuristics for reweighting token losses."
}
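The abstract describes the core mechanism of CaMeLS: a small meta-learned model assigns a weight to each token, and those weights scale the per-token language-modeling loss before a single gradient step on the out-of-date base model. The sketch below is purely illustrative and is not the authors' implementation; the model classes (`TinyLM`, `TokenWeighter`), the helper `weighted_update`, the vocabulary size, and the inner learning rate are all placeholder assumptions, and the outer meta-training loop (optimizing the weighter so that post-update question answering improves) is omitted.

```python
# Illustrative sketch of token-reweighted online fine-tuning (one inner step).
# All architectures and hyperparameters are toy placeholders, not CaMeLS itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 1000, 64

class TinyLM(nn.Module):
    """Stand-in for the (out-of-date) base language model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # logits: (batch, seq, vocab)

class TokenWeighter(nn.Module):
    """Stand-in for the small meta-learned model that scores each token."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, 1)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return F.softplus(self.head(h)).squeeze(-1)  # nonnegative weight per token

def weighted_update(base_lm, weighter, doc_tokens, lr=1e-3):
    """One online fine-tuning step on a document with per-token loss weights."""
    inputs, targets = doc_tokens[:, :-1], doc_tokens[:, 1:]
    logits = base_lm(inputs)
    per_token_loss = F.cross_entropy(
        logits.reshape(-1, VOCAB), targets.reshape(-1), reduction="none"
    ).view_as(targets)
    weights = weighter(inputs)            # context-aware importance weights
    loss = (weights * per_token_loss).mean()
    grads = torch.autograd.grad(loss, list(base_lm.parameters()))
    with torch.no_grad():                 # single SGD step on the weighted loss
        for p, g in zip(base_lm.parameters(), grads):
            p.sub_(lr * g)

if __name__ == "__main__":
    torch.manual_seed(0)
    lm, weighter = TinyLM(), TokenWeighter()
    doc = torch.randint(0, VOCAB, (1, 32))  # toy "document" from the stream
    weighted_update(lm, weighter, doc)
```

In the paper's setting, the weighter is meta-trained so that after this single weighted step the base model answers questions about the document better; here that outer objective is left out for brevity.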