E. Alejandro Alanis


2025

LLM-Mixer: Multiscale Mixing in LLMs for Time Series Forecasting
Md Kowsher | Md. Shohanur Islam Sobuj | Nusrat Jahan Prottasha | E. Alejandro Alanis | Ozlem Garibay | Niloofar Yousefi
Proceedings of the 4th Table Representation Learning Workshop

Time series forecasting is a challenging task, especially when dealing with data that contains both short-term variations and long-term trends. In this study, we introduce LLM-Mixer, a novel framework that combines multiscale time-series decomposition with the power of pre-trained Large Language Models (LLMs). LLM-Mixer breaks down time-series data into multiple temporal resolutions using downsampling and processes these multiscale representations with a frozen LLM, guided by a carefully designed text prompt that encodes information about the dataset’s features and structure. To understand the role of downsampling, we conduct a detailed analysis using Neural Tangent Kernel (NTK) distance, showing that incorporating multiple scales improves the model’s learning dynamics. We evaluate LLM-Mixer across a diverse set of forecasting tasks, including long-term multivariate, short-term multivariate, and long-term univariate scenarios. Experimental results demonstrate that LLM-Mixer achieves competitive performance compared to recent state-of-the-art models across various forecasting horizons.
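The multiscale idea in the abstract can be illustrated with a rough sketch: the input series is downsampled to several temporal resolutions, each scale is embedded, and the stacked representations are passed through a frozen pre-trained backbone with only a small forecasting head trained. This is not the authors' implementation; the choice of GPT-2 as the frozen LLM, average pooling as the downsampler, one token per scale, and all names and shapes (MultiscaleLLMForecaster, seq_len, horizon, n_scales) are illustrative assumptions.

```python
# Minimal sketch of multiscale downsampling + frozen LLM forecasting.
# Assumptions (not from the paper): GPT-2 backbone, average-pooling downsampler,
# one embedded token per scale, linear forecasting head.
import torch
import torch.nn as nn
from transformers import GPT2Model


class MultiscaleLLMForecaster(nn.Module):
    def __init__(self, seq_len=96, horizon=24, n_scales=3, d_model=768):
        super().__init__()
        # Frozen pre-trained backbone (hypothetical choice of GPT-2, hidden size 768).
        self.llm = GPT2Model.from_pretrained("gpt2")
        for p in self.llm.parameters():
            p.requires_grad = False
        # One embedding layer per temporal scale; scale s halves the length s times.
        self.scale_embeds = nn.ModuleList(
            [nn.Linear(seq_len // (2 ** s), d_model) for s in range(n_scales)]
        )
        self.head = nn.Linear(d_model, horizon)  # trainable forecasting head

    def forward(self, x):  # x: (batch, seq_len) univariate series
        tokens = []
        for s, embed in enumerate(self.scale_embeds):
            # Downsample by average pooling to obtain a coarser resolution.
            xs = nn.functional.avg_pool1d(x.unsqueeze(1), 2 ** s).squeeze(1)
            tokens.append(embed(xs))  # (batch, d_model)
        h = torch.stack(tokens, dim=1)  # (batch, n_scales, d_model)
        out = self.llm(inputs_embeds=h).last_hidden_state
        return self.head(out[:, -1])  # (batch, horizon) forecast


model = MultiscaleLLMForecaster()
print(model(torch.randn(4, 96)).shape)  # torch.Size([4, 24])
```

Under this sketch, only the scale embeddings and the head are updated during training, which mirrors the abstract's point that the LLM itself stays frozen while the multiscale representations do the adaptation.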