Towards Learning Disentangled Representations for Time Series

Publication Date: 8/18/2022

Event: 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022)

Reference: pp. 3270-3278, 2022

Authors: Yuening Li, Texas A&M University; Zhengzhang Chen, NEC Laboratories America, Inc.; Daochen Zha, Rice University; Mengnan Du, Texas A&M University; Jingchao Ni, NEC Laboratories America, Inc.; Denghui Zhang, Rutgers University; Haifeng Chen, NEC Laboratories America, Inc.; Xia Hu, Rice University

Abstract: Promising progress has been made toward learning efficient time series representations in recent years, but the learned representations often lack interpretability and do not encode semantic meaning, as they entangle the complex interactions of many latent factors. Learning representations that disentangle these latent factors can yield semantically rich representations of time series and further enhance interpretability. However, directly adopting sequential models, such as the Long Short-Term Memory Variational AutoEncoder (LSTM-VAE), encounters the Kullback-Leibler (KL) vanishing problem: the LSTM decoder often generates sequential data without efficiently using the latent representations, and the latent space can even become independent of the observation space. Moreover, traditional disentanglement methods may intensify KL vanishing during the disentanglement process, because they tend to penalize the mutual information between the latent space and the observations. In this paper, we propose Disentangle Time Series (DTS), a novel disentanglement enhancement framework for time series data. Our framework achieves multi-level disentanglement, covering both individual latent factors and group semantic segments. We propose augmenting the original VAE objective by decomposing the evidence lower bound and extracting the terms that link factorial representations to disentanglement. Additionally, we introduce a mutual information maximization term between the observation space and the latent space to alleviate the KL vanishing problem while preserving the disentanglement property. Experimental results on five real-world IoT datasets demonstrate that the representations learned by DTS achieve superior performance on various tasks with better interpretability.
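To make the abstract's objective concrete, the sketch below shows the general shape of a VAE loss augmented with a mutual-information term, as the paper describes: reconstruction plus a KL penalty, minus an MI estimate so that maximizing I(x; z) counteracts KL vanishing (where the decoder ignores z and the KL term collapses to zero). This is an illustrative sketch only; the function names, weights `beta` and `lam`, and the closed-form diagonal-Gaussian KL are standard VAE machinery, not the paper's exact formulation, and the MI estimate is left as an input since DTS's estimator is not specified here.

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )
    for a diagonal-Gaussian posterior, the usual VAE prior-matching term."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def mi_augmented_vae_loss(recon_err, mu, logvar, mi_estimate,
                          beta=1.0, lam=1.0):
    """Illustrative objective: reconstruction + beta * KL - lam * MI.
    Subtracting an estimate of I(x; z) rewards the model for keeping the
    latent code informative about the observations, mitigating KL vanishing
    while the KL/disentanglement terms are still enforced."""
    return recon_err + beta * kl_diag_gaussian(mu, logvar) - lam * mi_estimate

# Example: a posterior matching the prior exactly contributes zero KL.
mu = np.zeros(8)
logvar = np.zeros(8)
loss = mi_augmented_vae_loss(recon_err=1.5, mu=mu, logvar=logvar,
                             mi_estimate=0.4, beta=1.0, lam=1.0)
```

In practice the trade-off between `beta` (disentanglement pressure) and `lam` (information retention) is what balances the two competing effects the abstract describes.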

Publication Link: