@inproceedings{wold-2022-effectiveness,
    title = "The Effectiveness of Masked Language Modeling and Adapters for Factual Knowledge Injection",
    author = "Wold, Sondre",
    editor = "Ustalov, Dmitry  and
      Gao, Yanjun  and
      Panchenko, Alexander  and
      Valentino, Marco  and
      Thayaparan, Mokanarangan  and
      Nguyen, Thien Huu  and
      Penn, Gerald  and
      Ramesh, Arti  and
      Jana, Abhik",
    booktitle = "Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2022.textgraphs-1.6/",
    pages = "54--59",
    abstract = "This paper studies the problem of injecting factual knowledge into large pre-trained language models. We train adapter modules on parts of the ConceptNet knowledge graph using the masked language modeling objective and evaluate the success of the method by a series of probing experiments on the LAMA probe. Mean P@K curves for different configurations indicate that the technique is effective, increasing the performance on subsets of the LAMA probe for large values of k by adding as little as 2.1{\%} additional parameters to the original models."
}