AutoTCL: Automated Time Series Contrastive Learning with Adaptive Augmentations

Publication Date: 8/20/2023

Event: The 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023)

Reference: pp. 1-19, 2023

Authors: Xu Zheng, Florida International University; Tianchun Wang, Pennsylvania State University; Wei Cheng, NEC Laboratories America, Inc.; Aitian Ma, Florida International University; Haifeng Chen, NEC Laboratories America, Inc.; Mo Sha, Florida International University; Dongsheng Luo, Florida International University

Abstract: Modern techniques such as contrastive learning have been used effectively in many areas, including computer vision, natural language processing, and graph-structured data. A crucial step in contrastive learning is creating positive examples that help the model learn robust and discriminative representations. The choice of suitable data augmentations is usually guided by preset human intuition. Because visual and linguistic patterns are easily recognized by humans, this rule of thumb works well in the vision and language domains. However, it is impractical to visually inspect the temporal structures in time series. The diversity of time series augmentations at both the dataset and instance levels makes it difficult to choose meaningful augmentations on the fly. Thus, although prevalent elsewhere, contrastive learning with data augmentation has been less studied in the time series domain. In this study, we address this gap by analyzing time series data augmentation through the lens of information theory and summarizing the most commonly adopted augmentations in a unified format. We then propose AutoTCL, a parameterized augmentation method that can be adaptively employed to support time series representation learning. The proposed approach is encoder-agnostic, allowing it to be seamlessly integrated with different backbone encoders. Experiments on benchmark datasets demonstrate the highly competitive results of our method, with an average 10.3% reduction in MSE and 7.0% in MAE over the leading baselines.
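To make the abstract's setup concrete, the following is a minimal, hedged sketch of contrastive learning with a parameterized time series augmentation. The augmentation here follows a generic mask-plus-offset family (a multiplicative factor on the input plus an additive term); in AutoTCL both components are produced by learned networks, whereas this sketch uses random stand-ins. The encoder, function names, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def parametric_augment(x, mask_low=0.9, noise_std=0.05):
    """Toy member of the mask-and-offset augmentation family:
    v = h * x + g, with a multiplicative mask h and additive term g.
    (In AutoTCL these are learned; here they are random stand-ins.)"""
    h = rng.uniform(mask_low, 1.0, size=x.shape)  # multiplicative mask
    g = rng.normal(0.0, noise_std, size=x.shape)  # additive perturbation
    return h * x + g

def encode(x, W):
    """Stand-in encoder: a linear map followed by L2 normalization.
    Any backbone encoder could be substituted here (encoder-agnostic)."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def info_nce(z1, z2, tau=0.1):
    """InfoNCE-style contrastive loss: matching augmented views are
    positives; all other pairs in the batch serve as negatives."""
    logits = z1 @ z2.T / tau                      # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Usage: a batch of 8 univariate series of length 50, treated as feature vectors.
X = rng.normal(size=(8, 50))
W = rng.normal(size=(50, 16))
v1, v2 = parametric_augment(X), parametric_augment(X)
loss = info_nce(encode(v1, W), encode(v2, W))
```

In a full pipeline, the mask and offset parameters would be trained jointly with the encoder so that augmentations adapt to each dataset and instance, which is the adaptivity the abstract emphasizes.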

Publication Link: