Time Series Contrastive Learning with Information-Aware Augmentations

Publication Date: 2/14/2023

Event: Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI-23)

Reference: 1-9, 2023

Authors: Dongsheng Luo, Pennsylvania State University; Wei Cheng, NEC Laboratories America, Inc.; Jingchao Ni, AWS AI Labs; Wenchao Yu, NEC Laboratories America, Inc.; Xuchao Zhang, NEC Laboratories America, Inc.; Yanchi Liu, NEC Laboratories America, Inc.; Yuncong Chen, NEC Laboratories America, Inc.; Haifeng Chen, NEC Laboratories America, Inc.

Abstract: Various contrastive learning approaches have been proposed in recent years and have achieved significant empirical success. While effective and prevalent, contrastive learning has been less explored for time series data. A key component of contrastive learning is selecting appropriate augmentations that impose priors for constructing feasible positive samples, such that an encoder can be trained to learn robust and discriminative representations. Unlike the image and language domains, where "desired" augmented samples can be generated by rules of thumb guided by well-established human priors, the ad-hoc manual selection of time series augmentations is hindered by their diverse and human-unrecognizable temporal structures. How to find augmentations of time series data that are meaningful for a given contrastive learning task and dataset remains an open question. In this work, we address the problem by encouraging both high fidelity and high variety based on information theory. A theoretical analysis leads to criteria for selecting feasible data augmentations. On top of that, we propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning. Experiments on various datasets show highly competitive performance, with up to a 12.0% reduction in MSE on forecasting tasks and up to a 3.7% relative improvement in accuracy on classification tasks over the leading baselines.
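To make the setting concrete, the sketch below illustrates the generic pipeline the abstract describes: a batch of time series is transformed by several candidate augmentations, each augmented view is scored as a positive against its anchor with an InfoNCE-style contrastive loss, and the per-augmentation losses give one possible signal for comparing augmentations. This is a toy NumPy illustration under stated assumptions, not the authors' InfoTS implementation: the augmentation family (jitter, scaling, time masking), the linear encoder, and the loss details are all illustrative choices, and InfoTS's actual fidelity/variety criteria are defined information-theoretically in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate time-series augmentations (a common, illustrative family;
# the augmentation set used by InfoTS may differ).
def jitter(x, sigma=0.1):
    return x + rng.normal(0.0, sigma, x.shape)

def scale(x, sigma=0.2):
    return x * rng.normal(1.0, sigma, (x.shape[0], 1))

def time_mask(x, ratio=0.2):
    out = x.copy()
    n = int(x.shape[1] * ratio)
    start = rng.integers(0, x.shape[1] - n)
    out[:, start:start + n] = 0.0
    return out

def encode(x, W):
    # Toy encoder: linear projection followed by L2 normalization.
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(z_anchor, z_pos, temperature=0.5):
    # InfoNCE: each anchor's positive is its own augmented view;
    # the other augmented views in the batch act as negatives.
    sim = z_anchor @ z_pos.T / temperature            # (B, B) similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

B, T, D = 16, 50, 8
x = rng.normal(size=(B, T))                           # batch of toy series
W = rng.normal(size=(T, D))                           # fixed random encoder

augmentations = {"jitter": jitter, "scale": scale, "mask": time_mask}
z = encode(x, W)
losses = {name: info_nce(z, encode(aug(x), W))
          for name, aug in augmentations.items()}
for name, loss in sorted(losses.items(), key=lambda kv: kv[1]):
    print(f"{name}: loss = {loss:.3f}")
```

The point of the sketch is only that different augmentations yield measurably different contrastive losses on the same batch; InfoTS goes further by selecting among augmentations adaptively, balancing fidelity (the positive stays semantically close to the anchor) against variety (the positive is not a trivial copy).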

Publication Link: https://ojs.aaai.org/index.php/AAAI/article/view/25575