An adversarial attack, in the context of machine learning and artificial intelligence, is a deliberate, carefully crafted manipulation of input data intended to deceive a machine learning model or degrade its performance. Adversarial attacks exploit vulnerabilities or weaknesses in the model's behavior, causing it to produce incorrect or unexpected outputs.

Posts

Towards Robustness of Deep Neural Networks via Regularization

Recent studies have demonstrated the vulnerability of deep neural networks against adversarial examples. Inspired by the observation that adversarial examples often lie outside the natural image data manifold and that the intrinsic dimension of image data is much smaller than its pixel space dimension, we propose to embed high-dimensional input images into a low-dimensional space and apply regularization on the embedding space to push the adversarial examples back to the manifold. The proposed framework is called Embedding Regularized Classifier (ER-Classifier), which improves the adversarial robustness of the classifier through embedding regularization. Besides improving classification accuracy against adversarial examples, the framework can be combined with detection methods to detect adversarial examples. Experimental results on several benchmark datasets show that our proposed framework achieves good performance against strong adversarial attack methods.
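To make the embedding-regularization idea concrete, here is a minimal PyTorch sketch: an encoder compresses the input into a low-dimensional embedding, a classifier head operates on that embedding, and a penalty term keeps embeddings near the origin. The layer sizes, the `reg_weight` parameter, and the choice of a simple L2 penalty as the regularizer are illustrative assumptions, not the exact ER-Classifier formulation from the paper.

```python
# Minimal sketch of an embedding-regularized classifier (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ERClassifier(nn.Module):
    def __init__(self, in_dim=784, embed_dim=16, num_classes=10):
        super().__init__()
        # Encoder: map the high-dimensional input to a low-dimensional embedding.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )
        # Classifier head operating on the embedding.
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x.flatten(1))
        return self.head(z), z


def loss_fn(model, x, y, reg_weight=0.1):
    """Cross-entropy plus a simple penalty that keeps embeddings small,
    a stand-in for the paper's embedding-space regularizer."""
    logits, z = model(x)
    ce = F.cross_entropy(logits, y)
    reg = z.pow(2).mean()  # pull embeddings toward the origin
    return ce + reg_weight * reg


if __name__ == "__main__":
    model = ERClassifier()
    x = torch.randn(8, 1, 28, 28)        # dummy MNIST-sized batch
    y = torch.randint(0, 10, (8,))
    loss = loss_fn(model, x, y)
    loss.backward()
    print(float(loss))
```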

Improving neural network robustness through neighborhood preserving layers

One major source of vulnerability of neural networks in classification tasks stems from overparameterized fully connected layers near the end of the network. In this paper, we propose a new neighborhood preserving layer which can replace these fully connected layers to improve the network robustness. Networks including these neighborhood preserving layers can be trained efficiently. We theoretically prove that our proposed layers are more robust against distortion because they effectively control the magnitude of gradients. Finally, we empirically show that networks with our proposed layers are more robust against state-of-the-art gradient descent-based attacks, such as the PGD attack, on the benchmark image classification datasets MNIST and CIFAR10.
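For reference, below is a minimal PyTorch sketch of the PGD attack mentioned above, under an L-infinity perturbation budget. The budget `eps`, step size `alpha`, number of steps, and the throwaway demo model are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of the PGD (projected gradient descent) attack, L-infinity version.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Return an adversarial version of x within an L-inf ball of radius eps."""
    # Random start inside the epsilon ball.
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0, 1)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss along the gradient sign, then project back into the ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv.detach()


if __name__ == "__main__":
    # Tiny demo with a throwaway linear model on random data.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 10, (4,))
    x_adv = pgd_attack(model, x, y)
    print((x_adv - x).abs().max())  # stays within the eps budget
```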