Hossein Mobahi
2025
PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning Trajectories for Complex Problem Solving
Mihir Parmar, Xin Liu, Palash Goyal, Yanfei Chen, Long Le, Swaroop Mishra, Hossein Mobahi, Jindong Gu, Zifeng Wang, Hootan Nakhost, Chitta Baral, Chen-Yu Lee, Tomas Pfister, Hamid Palangi
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Recent agent frameworks and inference-time algorithms often struggle with natural planning problems due to limitations in verifying generated plans or reasoning, and due to the varying complexity of instances within a single task. Many existing methods either perform task-level verification without considering constraints or apply inference-time algorithms without adapting to instance-level complexity. To address these limitations, we propose PlanGEN, a model-agnostic and easily scalable agent framework with three key components: constraint, verification, and selection agents. Specifically, our approach proposes constraint-guided iterative verification to enhance the performance of inference-time algorithms (Best of 𝒩, Tree-of-Thought, and REBASE). In the PlanGEN framework, the selection agent optimizes algorithm choice based on instance complexity, ensuring better adaptability to complex planning problems. Experimental results demonstrate significant improvements over the strongest baseline across multiple benchmarks, achieving state-of-the-art results on NATURAL PLAN (~8%↑), OlympiadBench (~4%↑), DocFinQA (~7%↑), and GPQA (~1%↑). Our key finding highlights that constraint-guided iterative verification improves inference-time algorithms, and adaptive selection further boosts performance on complex planning and reasoning problems.
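The abstract above describes two ideas: a selection agent that picks an inference-time algorithm by instance complexity, and constraint-guided verification of candidate plans. A minimal sketch of that control flow, in Python; every name here (`estimate`-style thresholds, `constraint_verify`, `select_algorithm`, `plan_gen`) is hypothetical and illustrative, not the paper's implementation.

```python
# Illustrative sketch of PlanGEN-style adaptive selection and
# constraint-guided verification. All function names and thresholds
# are assumptions for exposition, not taken from the paper.

def constraint_verify(plan, constraints):
    """Verification agent: a plan passes only if every constraint holds."""
    return all(check(plan) for check in constraints)

def select_algorithm(complexity):
    """Selection agent: map instance complexity to an inference-time algorithm."""
    if complexity < 0.3:
        return "best_of_n"
    elif complexity < 0.7:
        return "rebase"
    return "tree_of_thought"

def plan_gen(candidates, constraints, complexity):
    """Pick an algorithm for this instance, then keep only verified plans."""
    algo = select_algorithm(complexity)
    verified = [p for p in candidates if constraint_verify(p, constraints)]
    return algo, (verified[0] if verified else None)

# Toy constraints: the plan must mention "meeting" and stay short.
constraints = [lambda p: "meeting" in p, lambda p: len(p) < 40]
candidates = ["a very long meeting plan " * 5, "short meeting plan"]
algo, plan = plan_gen(candidates, constraints, complexity=0.5)
print(algo, plan)  # rebase short meeting plan
```

The point of the sketch is the separation of concerns: the selection agent never inspects plans, and the verification agent never chooses algorithms, which is what makes the framework model-agnostic.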
2022
Sharpness-Aware Minimization Improves Language Model Generalization
Dara Bahri, Hossein Mobahi, Yi Tay
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The allure of superhuman-level capabilities has led to considerable interest in language models like GPT-3 and T5, wherein the research has, by and large, revolved around new model architectures, training tasks, and loss objectives, along with substantial engineering efforts to scale up model capacity and dataset size. Comparatively little work has been done to improve the generalization of these models through better optimization. In this work, we show that Sharpness-Aware Minimization (SAM), a recently proposed optimization procedure that encourages convergence to flatter minima, can substantially improve the generalization of language models without much computational overhead. We show that SAM is able to boost performance on SuperGLUE, GLUE, Web Questions, Natural Questions, Trivia QA, and TyDiQA, with particularly large gains when training data for these tasks is limited.
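The abstract above summarizes SAM's idea: instead of descending the raw gradient, first ascend to the worst-case point in a small neighborhood, then descend using the gradient taken there, biasing training toward flatter minima. A minimal sketch of one SAM step on a toy quadratic loss, assuming NumPy; the two-step ascent/descent structure follows the published procedure, but `sam_step` and the toy loss are illustrative, not the paper's code.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization (SAM) step on parameters w.

    1) Ascend: move to the worst-case point w + rho * g / ||g||
       inside an L2 ball of radius rho around w.
    2) Descend: update w using the gradient evaluated at that point.
    """
    g = grad_fn(w)
    g_norm = np.linalg.norm(g) + 1e-12      # guard against division by zero
    w_adv = w + rho * g / g_norm            # ascent to the sharpest nearby point
    return w - lr * grad_fn(w_adv)          # descent with the perturbed gradient

# Toy loss L(w) = ||w||^2 / 2, so grad L(w) = w.
grad_fn = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(50):
    w = sam_step(w, grad_fn)
# the norm of w shrinks toward the minimum at 0
```

In a real training loop the same two gradient evaluations per step are applied to a neural network's parameters, which is why SAM roughly doubles the cost of each update but adds little other overhead.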