Interactive and Expressive Code-Augmented Planning with Large Language Models
Anthony Zhe Liu | Xinhe Wang | Jacob Sansom | Yao Fu | Jongwook Choi | Sungryull Sohn | Jaekyeom Kim | Honglak Lee
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025
Large Language Models (LLMs) demonstrate strong abilities in common-sense reasoning and interactive decision-making, but often struggle with complex, long-horizon planning tasks. Recent techniques have sought to structure LLM outputs using control flow and code to improve planning performance. However, code-based approaches can be error-prone and insufficient for handling ambiguous or unstructured data. To address these challenges, we propose REPL-Plan, an LLM planning approach that is fully code-expressive (it can utilize all the benefits of code) while also being dynamic (it can flexibly adapt from errors and use the LLM for soft reasoning). In REPL-Plan, an LLM solves tasks by interacting with a Read-Eval-Print Loop (REPL), which iteratively executes and evaluates code, similar to language shells or interactive code notebooks, allowing the model to flexibly correct errors and handle tasks dynamically. We demonstrate that REPL-Plan achieves strong results across various planning domains compared to previous methods.
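The core loop the abstract describes — an LLM emitting code, a REPL executing it, and the resulting output or error feeding back into the next prompt — can be sketched minimally as follows. This is not the paper's implementation; the function names (`run_repl_step`, `plan_with_repl`) and the scripted stand-in for the LLM are illustrative assumptions, using Python's `exec` with a persistent namespace to play the role of the REPL.

```python
import io
import contextlib

def run_repl_step(code_str, namespace):
    """Execute one snippet in a persistent namespace, REPL-style,
    returning captured stdout and any error message as the observation."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code_str, namespace)
        return buf.getvalue(), None
    except Exception as e:
        return buf.getvalue(), f"{type(e).__name__}: {e}"

def plan_with_repl(llm, task, max_steps=5):
    """Iteratively prompt the model with the task plus the full
    interaction history, executing each snippet it proposes."""
    namespace = {}
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        snippet = llm("\n".join(history))
        out, err = run_repl_step(snippet, namespace)
        history.append(f">>> {snippet}\n{err if err else out}")
        if namespace.get("done"):  # model signals completion via the namespace
            break
    return namespace.get("answer")

# Scripted stand-in for an LLM: its first snippet has a bug, and after
# seeing the NameError in the history it emits a corrected snippet.
def scripted_llm(prompt):
    if "NameError" in prompt:
        return "items = [3, 1, 2]\nanswer = sorted(items)\ndone = True"
    return "answer = sorted(items)\ndone = True"  # bug: items undefined

print(plan_with_repl(scripted_llm, "sort the item list"))  # [1, 2, 3]
```

The error-recovery path above is the point of the sketch: because the REPL's error message is appended to the history, the model can adapt its next snippet, which is the dynamic behavior the abstract contrasts with one-shot code generation.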