@inproceedings{pan-etal-2023-logic,
    title = "Logic-{LM}: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning",
    author = "Pan, Liangming  and
      Albalak, Alon  and
      Wang, Xinyi  and
      Wang, William",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.findings-emnlp.248/",
    doi = "10.18653/v1/2023.findings-emnlp.248",
    pages = "3806--3824",
    abstract = "Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems. This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic solvers to improve logical problem-solving. Our method first utilizes LLMs to translate a natural language problem into a symbolic formulation. Afterward, a deterministic symbolic solver performs inference on the formulated problem. We also introduce a self-refinement module, which utilizes the symbolic solver{'}s error messages to revise symbolic formalizations. We demonstrate Logic-LM{'}s effectiveness on five logical reasoning datasets: ProofWriter, PrOntoQA, FOLIO, LogicalDeduction, and AR-LSAT. On average, Logic-LM achieves a significant performance boost of 39.2{\%} over using LLM alone with standard prompting and 18.4{\%} over LLM with chain-of-thought prompting. Our findings suggest that Logic-LM, by combining LLMs with symbolic logic, offers a promising avenue for faithful logical reasoning."
}