Aditya Senthilnathan
2023
Ranking LLM-Generated Loop Invariants for Program Verification
Saikat Chakraborty | Shuvendu Lahiri | Sarah Fakhoury | Akash Lal | Madanlal Musuvathi | Aseem Rastogi | Aditya Senthilnathan | Rahul Sharma | Nikhil Swamy
Findings of the Association for Computational Linguistics: EMNLP 2023
Synthesizing inductive loop invariants is fundamental to automating program verification. In this work, we observe that Large Language Models (such as gpt-3.5 or gpt-4) are capable of synthesizing loop invariants for a class of programs in a 0-shot setting, yet require several samples to generate the correct invariants. This can lead to a large number of calls to a program verifier to establish an invariant. To address this issue, we propose a re-ranking approach for the generated results of LLMs. We have designed a ranker that can distinguish between correct inductive invariants and incorrect attempts based on the problem definition. The ranker is optimized as a contrastive ranker. Experimental results demonstrate that this re-ranking mechanism significantly improves the ranking of correct invariants among the generated candidates, leading to a notable reduction in the number of calls to a verifier.
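The sketch below is only an illustration of the re-ranking idea described in the abstract, not the paper's implementation: candidate invariants sampled from an LLM are scored by a learned (e.g., contrastive) ranker and submitted to the verifier in ranked order, so a correct invariant tends to be confirmed after fewer verifier calls. The names `score_candidate` and `verifier_accepts` are hypothetical placeholders.

```python
# Illustrative sketch only -- not the authors' code. It shows how re-ranking
# reduces verifier calls: score each LLM-generated candidate invariant, then
# query the verifier in descending score order and stop at the first success.
from typing import Callable, List, Optional, Tuple


def rerank_and_verify(
    problem: str,
    candidates: List[str],
    score_candidate: Callable[[str, str], float],  # hypothetical learned ranker
    verifier_accepts: Callable[[str, str], bool],  # hypothetical verifier call
) -> Tuple[Optional[str], int]:
    """Return the first verifier-accepted invariant and the number of
    verifier calls spent, trying candidates in ranked order."""
    ranked = sorted(candidates,
                    key=lambda inv: score_candidate(problem, inv),
                    reverse=True)
    calls = 0
    for inv in ranked:
        calls += 1
        if verifier_accepts(problem, inv):
            return inv, calls
    return None, calls
```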