Unsupervised Anomaly Detection with Self-Training and Knowledge Distillation

Publication Date: 10/16/2022

Event: IEEE International Conference on Image Processing (ICIP)

Reference: pp. 2102–2106, 2022

Authors: Hongbo Liu, Tsinghua University; Kai Li, NEC Laboratories America, Inc.; Xiu Li, Tsinghua University; Yulun Zhang, ETH Zurich

Abstract: Anomaly Detection (AD) aims to find defective patterns or abnormal samples in data, and has been an active research topic owing to its wide range of real-world applications. While many AD methods have been proposed, most assume the availability of a clean (anomaly-free) training set, which may be hard to guarantee in many real-world industrial applications. This motivates us to investigate Unsupervised Anomaly Detection (UAD), in which the training set includes both normal and abnormal samples. In this paper, we address the UAD problem by proposing a Self-Training and Knowledge Distillation (STKD) model. STKD combats anomalies in the training set by iteratively alternating between excluding samples with high anomaly probabilities and training the model on the purified training set. Although the model is trained on a progressively cleaner training set, the anomalies that inevitably remain may still have a negative impact. STKD alleviates this by regularizing the model to respond similarly to a teacher model that has not been trained on the noisy data. Experiments show that STKD consistently produces more robust performance across different levels of anomaly contamination in the training set.
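The abstract outlines STKD's two mechanisms: iteratively purifying the training set by dropping the samples scored most anomalous, and regularizing the model toward a teacher that was never exposed to the noisy data. Below is a minimal PyTorch sketch of that loop, assuming a reconstruction-based detector whose per-sample reconstruction error serves as the anomaly score. The AutoEncoder architecture, drop fraction, distillation weight, and all hyperparameters here are illustrative assumptions for the sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Hypothetical reconstruction-based detector; the paper's actual
    architecture may differ."""
    def __init__(self, dim=128, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))


def anomaly_scores(model, x):
    """Per-sample reconstruction error as the anomaly score (an assumption)."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)


def train_stkd(x, teacher, rounds=3, drop_frac=0.1, epochs=50,
               distill_weight=0.1, lr=1e-3):
    """Illustrative self-training loop: alternately retrain on the
    current (purified) set and drop the highest-scoring samples,
    with a distillation term toward a fixed teacher."""
    kept = x
    for _ in range(rounds):
        student = AutoEncoder(dim=x.shape[1])
        opt = torch.optim.Adam(student.parameters(), lr=lr)
        for _ in range(epochs):
            recon = student(kept)
            loss = ((recon - kept) ** 2).mean()
            # Knowledge distillation: encourage the student to respond
            # like the teacher, which was not trained on the noisy data.
            with torch.no_grad():
                target = teacher(kept)
            loss = loss + distill_weight * ((recon - target) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Purify: keep only the samples with the lowest anomaly scores.
        scores = anomaly_scores(student, kept)
        n_keep = max(1, int(len(kept) * (1 - drop_frac)))
        kept = kept[torch.argsort(scores)[:n_keep]]
    return student


# Example usage with synthetic data; the teacher is a stand-in for a
# model trained without exposure to the contaminated set.
if __name__ == "__main__":
    teacher = AutoEncoder(dim=128)
    data = torch.randn(1000, 128)
    model = train_stkd(data, teacher)
```

One design note on the sketch: purification and retraining alternate rather than happening once, so samples that only look normal to an undertrained model can still be excluded in later rounds, mirroring the iterative scheme the abstract describes.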

Publication Link: https://ieeexplore.ieee.org/document/9897777