@inproceedings{rakotonirina-baroni-2024-memoryprompt,
    title = "{M}emory{P}rompt: A Light Wrapper to Improve Context Tracking in Pre-trained Language Models",
    author = "Rakotonirina, Nathanael Carraz  and
      Baroni, Marco",
    editor = "Calzolari, Nicoletta  and
      Kan, Min-Yen  and
      Hoste, Veronique  and
      Lenci, Alessandro  and
      Sakti, Sakriani  and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.lrec-main.976/",
    pages = "11187--11195",
    abstract = "Transformer-based language models (LMs) track contextual information through large, hard-coded input windows. We introduce MemoryPrompt, a leaner approach in which the LM is complemented by a small auxiliary recurrent network that passes information to the LM by prefixing its regular input with a sequence of vectors, akin to soft prompts, without requiring LM finetuning. Tested on a task designed to probe a LM{'}s ability to keep track of multiple fact updates, a MemoryPrompt-augmented LM outperforms much larger LMs that have access to the full input history. We also test MemoryPrompt on a long-distance dialogue dataset, where its performance is comparable to that of a model conditioned on the entire conversation history. In both experiments we also observe that, unlike full-finetuning approaches, MemoryPrompt does not suffer from catastrophic forgetting when adapted to new tasks, thus not disrupting the generalist capabilities of the underlying LM."
}
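The abstract describes the mechanism only at a high level: a small auxiliary recurrent network passes information to a frozen LM by prefixing its regular input with soft-prompt-like vectors. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation. The class and member names (`MemoryPromptWrapper`, `memory_rnn`, `to_prompts`, `n_prompt_vectors`), the choice of GPT-2 as the base LM, and the way the memory state is read out from the segment's hidden states are all assumptions made for illustration.

```python
# Hypothetical sketch of the MemoryPrompt idea: a small recurrent "memory"
# module reads each text segment and emits a few soft-prompt vectors that
# are prepended to the frozen LM's input embeddings for the next segment.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

class MemoryPromptWrapper(nn.Module):
    def __init__(self, lm_name="gpt2", n_prompt_vectors=4):
        super().__init__()
        self.lm = AutoModelForCausalLM.from_pretrained(lm_name)
        for p in self.lm.parameters():       # the LM itself stays frozen
            p.requires_grad = False
        d = self.lm.config.hidden_size
        self.n = n_prompt_vectors
        # Small auxiliary recurrent network: the only trained component.
        self.memory_rnn = nn.GRU(d, d, batch_first=True)
        self.to_prompts = nn.Linear(d, self.n * d)

    def forward(self, input_ids, memory_state=None):
        embeds = self.lm.get_input_embeddings()(input_ids)       # (B, T, d)
        if memory_state is not None:
            # Turn the memory state into soft-prompt vectors and prepend them.
            prompts = self.to_prompts(memory_state).view(
                input_ids.size(0), self.n, -1)                   # (B, n, d)
            embeds = torch.cat([prompts, embeds], dim=1)
        out = self.lm(inputs_embeds=embeds, output_hidden_states=True)
        # Update the memory from the segment's final hidden states.
        _, h = self.memory_rnn(out.hidden_states[-1])
        return out.logits, h[-1]                                 # new memory

# Usage: carry the memory state across segments so later fact updates
# (e.g. where the key is) can influence the LM without refeeding history.
tok = AutoTokenizer.from_pretrained("gpt2")
wrapper = MemoryPromptWrapper()
mem = None
with torch.no_grad():
    for segment in ["The key is in the drawer.", "Now the key is in the box."]:
        ids = tok(segment, return_tensors="pt").input_ids
        logits, mem = wrapper(ids, mem)
```

In this sketch only `memory_rnn` and `to_prompts` have trainable parameters, which matches the paper's framing of a light wrapper that requires no LM finetuning and therefore leaves the base model's generalist capabilities intact.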