Exploiting Edited Large Language Models as General Scientific Optimizers

Qitan Lv, Tianyu Liu, Hong Wang


Abstract
Large language models (LLMs) have been widely adopted for mathematical optimization in scientific scenarios because of their extensive knowledge and advanced reasoning capabilities. Existing methods mainly focus on utilizing LLMs to solve optimization problems in a prompt-based manner, incorporating observational feedback as additional textual descriptions. However, due to LLMs' **high sensitivity to prompts** and **tendency to get lost in lengthy prompts**, these methods struggle to effectively utilize the observational feedback from each optimization step, which severely hinders their application to real-world scenarios. To address these challenges, we propose a conceptually simple and general bi-level optimization method, namely **G**eneral **S**cientific **O**ptimizers (GSO). Specifically, GSO first utilizes inner-level simulators as experimental platforms to evaluate the current solution and provide observational feedback. Then, LLMs serve as knowledgeable and versatile scientists, generating new solutions by refining potential errors identified in the feedback, as the outer-level optimization. Finally, the simulations and the expert knowledge in LLMs are jointly updated through bi-level interactions via model editing. Extensive experiments show that GSO consistently outperforms existing state-of-the-art methods with *six* different LLM backbones on *seven* different tasks, demonstrating its effectiveness and broad applicability.
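To make the bi-level structure concrete, the following is a minimal sketch of the loop described above. The function names (`simulate`, `llm_propose`, `edit_model`), the scoring convention, and the fixed step budget are illustrative assumptions, not the authors' actual interface; in particular, the model-editing step is represented only as a callback.

```python
import random

# Illustrative sketch of GSO's bi-level loop (assumed interface, not the authors' code).

def gso_loop(initial_solution, simulate, llm_propose, edit_model, n_steps=10):
    """Alternate inner-level simulation with outer-level LLM refinement."""
    solution = initial_solution
    best_solution, best_score = solution, float("-inf")
    for _ in range(n_steps):
        # Inner level: the simulator evaluates the current solution and
        # returns a score plus observational feedback (e.g., error traces).
        score, feedback = simulate(solution)
        if score > best_score:
            best_solution, best_score = solution, score
        # Outer level: the LLM acts as a "scientist", refining the solution
        # by correcting the potential errors surfaced in the feedback.
        solution = llm_propose(solution, feedback)
        # Bi-level interaction: model editing folds the new observations back
        # into the LLM's parametric knowledge (procedure is paper-specific).
        edit_model(feedback)
    return best_solution, best_score


# Toy usage with stand-in components, purely to show the control flow.
best, score = gso_loop(
    initial_solution=0.0,
    simulate=lambda x: (-abs(x - 3.0), f"offset from target: {x - 3.0:+.2f}"),
    llm_propose=lambda x, fb: x + random.uniform(-1.0, 1.0),
    edit_model=lambda fb: None,
)
```

In the paper, the simulator, the LLM proposer, and the editing procedure are domain-specific; this skeleton only captures the alternation between the two levels.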
Anthology ID:
2025.naacl-long.270
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
5212–5237
URL:
https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.naacl-long.270/
Cite (ACL):
Qitan Lv, Tianyu Liu, and Hong Wang. 2025. Exploiting Edited Large Language Models as General Scientific Optimizers. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5212–5237, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Exploiting Edited Large Language Models as General Scientific Optimizers (Lv et al., NAACL 2025)
PDF:
https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.naacl-long.270.pdf