Neural Architecture Search (NAS) refers to the process of automating the design of neural network architectures. Instead of manually crafting the architecture, NAS employs search algorithms, often guided by optimization techniques, to explore a predefined search space and discover neural network architectures that perform well on a given task.
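To make the idea concrete, here is a minimal sketch of the NAS loop in its simplest form: random search over a small, hypothetical search space. The space, the architecture encoding, and the scoring function below are illustrative assumptions, not any specific NAS system; in practice, `evaluate` would train the candidate network and return its validation performance.

```python
import random

# Hypothetical, minimal search space: each architecture is a choice of
# depth, width, and activation. These names and values are illustrative.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "hidden_units": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    """Sample one candidate architecture (a dict of choices)."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training + validation; returns a toy score."""
    return arch["num_layers"] * 0.1 + arch["hidden_units"] * 0.001

def random_search(num_trials=10, seed=0):
    """Explore the search space and keep the best-scoring architecture."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

Real NAS methods replace the random sampler with a learned search strategy (e.g. a reinforcement-learning controller or evolutionary algorithm), but the explore-evaluate-select structure stays the same.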

Posts

Automated Anomaly Detection via Curiosity-Guided Search and Self-Imitation Learning

Anomaly detection is an important data mining task with numerous applications, such as intrusion detection, credit card fraud detection, and video surveillance. However, given a specific task with complex data, building an effective deep learning-based system for anomaly detection still relies heavily on human expertise and laborious trial and error. Moreover, while neural architecture search (NAS) has shown promise in discovering effective deep architectures in various domains, such as image classification, object detection, and semantic segmentation, contemporary NAS methods are not suitable for anomaly detection due to the lack of an intrinsic search space, an unstable search process, and low sample efficiency. To bridge the gap, in this article, we propose AutoAD, an automated anomaly detection framework that searches for an optimal neural network model within a predefined search space. Specifically, we first design a curiosity-guided search strategy to overcome the curse of local optimality: a controller, acting as a search agent, is encouraged to take actions that maximize the information gain about its internal belief. We further introduce an experience replay mechanism based on self-imitation learning to improve sample efficiency. Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoAD achieves the best performance compared with existing handcrafted models and traditional search methods.
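The self-imitation experience replay mechanism described above can be sketched as follows. This is a hedged illustration of the general idea, not the authors' implementation: past (architecture, reward) experiences are stored only when they beat a running baseline, and the controller later re-trains on samples drawn from this buffer of its own good decisions. The class name, the moving-average baseline, and the buffer policy are assumptions for illustration.

```python
import random
from collections import deque

class SelfImitationReplay:
    """Illustrative self-imitation replay buffer for a search controller:
    keep only better-than-baseline experiences and replay them later."""

    def __init__(self, capacity=100):
        self.buffer = deque(maxlen=capacity)  # stores (arch, reward) pairs
        self.baseline = 0.0                   # exponential moving average of rewards
        self.momentum = 0.9

    def observe(self, arch, reward):
        # Self-imitation: store the experience only if it beat the baseline,
        # so the buffer holds the controller's own past good decisions.
        if reward > self.baseline:
            self.buffer.append((arch, reward))
        # Update the running baseline regardless of whether we stored it.
        self.baseline = self.momentum * self.baseline + (1 - self.momentum) * reward

    def sample(self, batch_size, rng=random):
        """Draw past good experiences to re-train the controller on."""
        k = min(batch_size, len(self.buffer))
        return rng.sample(list(self.buffer), k)
```

Replaying only above-baseline experiences is what improves sample efficiency: each expensive architecture evaluation can contribute to multiple controller updates instead of being used once and discarded.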

AutoOD: Neural Architecture Search for Outlier Detection

Outlier detection is an important data mining task with numerous applications such as intrusion detection, credit card fraud detection, and video surveillance. However, given a specific task with complex data, building an effective deep learning-based system for outlier detection still relies heavily on human expertise and laborious trial and error. Moreover, while Neural Architecture Search (NAS) has shown promise in discovering effective deep architectures in various domains, such as image classification, object detection, and semantic segmentation, contemporary NAS methods are not suitable for outlier detection due to the lack of an intrinsic search space and low sample efficiency. To bridge the gap, in this paper, we propose AutoOD, an automated outlier detection framework that searches for an optimal neural network model within a predefined search space. Specifically, we introduce an experience replay mechanism based on self-imitation learning to improve sample efficiency. Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance compared with existing handcrafted models and traditional search methods.