TimeCAP: Learning to Contextualize, Augment, and Predict Time Series Events with Large Language Model Agents

Publication Date: 3/4/2025

Event: The 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025)

Reference: pp. 1-9, 2025

Authors: Geon Lee, KAIST; Wenchao Yu, NEC Laboratories America, Inc.; Kijung Shin, KAIST; Wei Cheng, NEC Laboratories America, Inc.; Haifeng Chen, NEC Laboratories America, Inc.

Abstract: Time series data is essential in various applications, including climate modeling, healthcare monitoring, and financial analytics. Understanding the contextual information associated with real-world time series data is often essential for accurate and reliable event predictions. In this paper, we introduce TimeCAP, a time-series processing framework that creatively employs Large Language Models (LLMs) as contextualizers of time series data, extending their typical usage as predictors. TimeCAP incorporates two independent LLM agents: one generates a textual summary capturing the context of the time series, while the other uses this enriched summary to make more informed predictions. In addition, TimeCAP employs a multi-modal encoder that synergizes with the LLM agents, enhancing predictive performance through mutual augmentation of inputs with in-context examples. Experimental results on real-world datasets demonstrate that TimeCAP outperforms state-of-the-art methods for time series event prediction, including those utilizing LLMs as predictors, achieving an average improvement of 28.75% in F1 score.

Publication Link: https://ojs.aaai.org/index.php/AAAI/article/view/33989
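Below is a minimal, hypothetical sketch of the two-agent pipeline described in the abstract: one LLM agent contextualizes the raw time series into a textual summary, and a second agent predicts the event from that summary augmented with in-context examples (which the paper retrieves via a multi-modal encoder). The `call_llm` stub, the prompt wording, and the retrieval step are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of TimeCAP's contextualize-then-predict flow.
# `call_llm` stands in for any chat-completion API; replace it with a real client.

def call_llm(prompt: str) -> str:
    """Placeholder LLM call. Swap in an actual chat-completion client here."""
    return "(LLM response placeholder)"


def contextualize(raw_series: str) -> str:
    """Agent 1: turn raw time series values into a contextual textual summary."""
    prompt = (
        "Summarize the context of the following time series "
        "(trends, anomalies, relevant domain cues):\n" + raw_series
    )
    return call_llm(prompt)


def predict_event(summary: str, in_context_examples: list[str]) -> str:
    """Agent 2: predict the event from the enriched summary, augmented with
    in-context examples (e.g., similar past cases retrieved by an encoder)."""
    examples = "\n\n".join(in_context_examples) if in_context_examples else "(none)"
    prompt = (
        "Similar past cases:\n" + examples +
        "\n\nCurrent context summary:\n" + summary +
        "\n\nWill the event occur? Answer 'yes' or 'no' with a brief rationale."
    )
    return call_llm(prompt)


if __name__ == "__main__":
    # Illustrative input: a short weather-style series rendered as text.
    raw_series = "2025-01-01: 13.2C, 2025-01-02: 12.8C, 2025-01-03: 9.1C"
    summary = contextualize(raw_series)
    print(predict_event(summary, in_context_examples=[]))
```

In this sketch the summary from the first agent, rather than the raw numbers alone, is what the second agent reasons over; the in-context examples slot is where retrieved similar cases would be injected to mutually augment the prediction, mirroring the role the abstract assigns to the multi-modal encoder.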