Multi-Modal View Enhanced Large Vision Models for Long-Term Time Series Forecasting

Publication Date: 12/7/2025

Event: The Thirty-ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025)

Reference: pp. 1-28, 2025

Authors: ChengAo Shen, University of Houston; Wenchao Yu, NEC Laboratories America, Inc.; Ziming Zhao, University of Houston; Dongjin Song, University of Connecticut; Wei Cheng, NEC Laboratories America, Inc.; Haifeng Chen, NEC Laboratories America, Inc.; Jingchao Ni, University of Houston

Abstract: Time series, typically represented as numerical sequences, can also be transformed into images and texts, offering multi-modal views (MMVs) of the same underlying signal. These MMVs can reveal complementary patterns and enable the use of powerful pre-trained large models, such as large vision models (LVMs), for long-term time series forecasting (LTSF). However, as we identify in this work, the state-of-the-art (SOTA) LVM-based forecaster exhibits an inductive bias toward “forecasting periods”. To harness this bias, we propose DMMV, a novel decomposition-based multi-modal view framework that combines trend-seasonal decomposition with a novel backcast-residual-based adaptive decomposition to integrate MMVs for LTSF. Comparative evaluations against 14 SOTA models across diverse datasets show that DMMV outperforms single-view and existing multi-modal baselines, achieving the best mean squared error (MSE) on 6 out of 8 benchmark datasets. The code for this paper is available at: https://github.com/D2I-Group/dmmv.
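To make the decomposition idea concrete, below is a minimal sketch of the moving-average style trend-seasonal decomposition commonly used in LTSF pipelines. It is an illustrative assumption, not DMMV's actual implementation: the function name `trend_seasonal_decompose`, the `kernel_size` parameter, and the replicate-padding choice are all hypothetical, and DMMV's backcast-residual-based adaptive decomposition is defined in the paper and repository rather than here.

import numpy as np

def trend_seasonal_decompose(x: np.ndarray, kernel_size: int = 25):
    """Split a 1-D series into trend and seasonal parts via a moving average.

    Hypothetical illustration only; see the DMMV paper/repo for the
    method actually proposed (including its adaptive decomposition).
    """
    pad = kernel_size // 2
    # Replicate-pad the boundaries so the moving average preserves length.
    padded = np.concatenate([np.full(pad, x[0]), x, np.full(pad, x[-1])])
    kernel = np.ones(kernel_size) / kernel_size
    # Smooth trend component via convolution with a uniform kernel.
    trend = np.convolve(padded, kernel, mode="valid")
    # Seasonal component is the residual after removing the trend.
    seasonal = x - trend
    return trend, seasonal

# Example: a noisy daily-period sine wave riding on a linear drift.
t = np.arange(200)
series = 0.05 * t + np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(200)
trend, seasonal = trend_seasonal_decompose(series)

In a multi-modal-view setting, each component could then be routed to a different view (e.g., the seasonal part rendered as an image for an LVM, the trend kept numerical), which is the general intuition the abstract describes.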

Publication Link: