FLAG-TRADER: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading
Guojun Xiong | Zhiyang Deng | Keyi Wang | Yupeng Cao | Haohang Li | Yangyang Yu | Xueqing Peng | Mingquan Lin | Kaleb E Smith | Xiao-Yang Liu | Jimin Huang | Sophia Ananiadou | Qianqian Xie
Findings of the Association for Computational Linguistics: ACL 2025
Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities in various financial tasks. However, they often struggle with multi-step, goal-oriented scenarios in interactive financial markets, such as trading, where complex agentic approaches are required to improve decision-making. To address this, we propose FLAG-Trader, a unified architecture that integrates linguistic processing (via LLMs) with gradient-driven reinforcement learning (RL) policy optimization. In this framework, a partially fine-tuned LLM acts as the policy network, leveraging pre-trained knowledge while adapting to the financial domain through parameter-efficient fine-tuning. Through policy gradient optimization driven by trading rewards, our framework not only enhances LLM performance in trading but also improves results on other financial-domain tasks. We present extensive empirical evidence to validate these enhancements.
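The core idea in the abstract, a policy network trained by policy gradients on trading rewards, can be illustrated with a minimal REINFORCE-style sketch. This is not the authors' implementation: a tiny linear-softmax policy stands in for the partially fine-tuned LLM, and the three-action space, feature encoding, and toy return series are all invented for illustration.

```python
# Conceptual sketch only: REINFORCE policy-gradient updates on a toy
# trading task. A linear-softmax policy stands in for the LLM policy
# network; states, rewards, and hyperparameters are illustrative.
import math
import random

random.seed(0)
ACTIONS = ["sell", "hold", "buy"]   # toy discrete action space
N_FEAT = 3                          # state = last 3 returns (in %)

# Trainable weights (stand-in for the LLM's unfrozen PEFT parameters).
W = [[0.0] * N_FEAT for _ in ACTIONS]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def policy(state):
    """Action probabilities pi(a | state) under the current weights."""
    logits = [sum(w * s for w, s in zip(row, state)) for row in W]
    return softmax(logits)

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

def run_episode(returns):
    """Roll out one episode; reward = position * next-step return."""
    traj = []
    for t in range(len(returns) - 1):
        window = returns[max(0, t - N_FEAT + 1): t + 1]
        state = [0.0] * (N_FEAT - len(window)) + window
        a = sample(policy(state))
        position = a - 1            # sell=-1, hold=0, buy=+1
        reward = position * returns[t + 1]
        traj.append((state, a, reward))
    return traj

def reinforce_update(traj, lr=0.01):
    """Ascend grad log pi(a|s) * G, the undiscounted episode return."""
    G = sum(r for _, _, r in traj)
    for state, action, _ in traj:
        probs = policy(state)
        for a in range(len(ACTIONS)):
            # d log softmax / d logit_a = 1[a taken] - pi(a|s)
            grad = (1.0 if a == action else 0.0) - probs[a]
            for j in range(N_FEAT):
                W[a][j] += lr * G * grad * state[j]

# Toy uptrending market (returns in percent), so "buy" earns reward.
returns = [1.0, 2.0, 1.0, 1.5, 2.0, 1.0, 2.0]
for _ in range(200):
    reinforce_update(run_episode(returns))

p = policy([1.0, 1.5, 2.0])
print("action probs:", dict(zip(ACTIONS, [round(x, 3) for x in p])))
```

In FLAG-Trader the policy is the partially fine-tuned LLM rather than a linear model, and the reward comes from realized trading performance, but the gradient signal has this same shape: reward-weighted log-probability gradients flowing into the trainable parameters.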