@inproceedings{kovriguina-etal-2022-textgraphs,
    title = "{T}ext{G}raphs-16 Natural Language Premise Selection Task: Zero-Shot Premise Selection with Prompting Generative Language Models",
    author = "Kovriguina, Liubov  and
      Teucher, Roman  and
      Wardenga, Robert",
    editor = "Ustalov, Dmitry  and
      Gao, Yanjun  and
      Panchenko, Alexander  and
      Valentino, Marco  and
      Thayaparan, Mokanarangan  and
      Nguyen, Thien Huu  and
      Penn, Gerald  and
      Ramesh, Arti  and
      Jana, Abhik",
    booktitle = "Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2022.textgraphs-1.15/",
    pages = "127--132",
    abstract = "Automated theorem proving can benefit greatly from methods employed in natural language processing, knowledge graphs, and information retrieval: this non-trivial task combines formal language understanding, reasoning, and similarity search. We tackle it by enhancing semantic similarity ranking with prompt engineering, which has become a new paradigm in natural language understanding. None of our approaches requires additional training. Despite encouraging results reported for prompt engineering approaches on a range of NLP tasks, for the premise selection task vanilla re-ranking by prompting GPT-3 does not outperform semantic similarity ranking with SBERT, but merging the two rankings yields better results."
}
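For a concrete picture of the pipeline the abstract describes (semantic similarity ranking with SBERT, zero-shot re-ranking by prompting a generative language model, and a merge of the two rankings), here is a minimal illustrative sketch in Python. The function names, the SBERT checkpoint, and the reciprocal-rank-fusion merge rule are assumptions made for illustration, not the paper's exact method; the LM re-ranking step is left as a stub rather than guessing at the authors' prompt or API.

```python
# Illustrative sketch only; helper names, the SBERT checkpoint, and the
# reciprocal-rank-fusion merge are assumptions, not the paper's exact setup.
from sentence_transformers import SentenceTransformer, util


def sbert_ranking(conjecture: str, premises: list[str],
                  model_name: str = "all-MiniLM-L6-v2") -> list[int]:
    """Rank premise indices by cosine similarity to the conjecture (SBERT)."""
    model = SentenceTransformer(model_name)
    query_emb = model.encode(conjecture, convert_to_tensor=True)
    premise_embs = model.encode(premises, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, premise_embs)[0]
    return sorted(range(len(premises)), key=lambda i: float(scores[i]), reverse=True)


def lm_ranking(conjecture: str, premises: list[str]) -> list[int]:
    """Placeholder for zero-shot re-ranking via a prompted generative LM
    (e.g. GPT-3). The actual prompt and API call are not reproduced here."""
    raise NotImplementedError("plug in your LM prompting code here")


def merge_rankings(rank_a: list[int], rank_b: list[int], k: int = 60) -> list[int]:
    """Merge two rankings over the same premise indices.
    Reciprocal rank fusion is used here as one plausible merge rule."""
    fused: dict[int, float] = {}
    for ranking in (rank_a, rank_b):
        for position, idx in enumerate(ranking):
            fused[idx] = fused.get(idx, 0.0) + 1.0 / (k + position + 1)
    return sorted(fused, key=fused.get, reverse=True)
```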