Improving neural network robustness through neighborhood preserving layers

Publication Date: January 15, 2021

Event: Manifold Learning from Euclid to Riemann: Workshop at ICPR 2021

Reference: pp. 1-8, 2021

Authors: Bingyuan Liu, NEC Laboratories America, Inc., Penn State University; Christopher Malon, NEC Laboratories America, Inc.; Lingzhou Xue, Penn State University; Erik Kruus, NEC Laboratories America, Inc.

Abstract: A major source of vulnerability for neural networks in classification tasks is the overparameterized fully connected layers near the end of the network. In this paper, we propose a new neighborhood preserving layer that can replace these fully connected layers to improve network robustness. Networks that include these neighborhood preserving layers can be trained efficiently. We theoretically prove that our proposed layers are more robust against distortion because they effectively control the magnitude of gradients. Finally, we empirically show that networks with our proposed layers are more robust against state-of-the-art gradient-based attacks, such as the projected gradient descent (PGD) attack, on the benchmark image classification datasets MNIST and CIFAR-10.
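The abstract evaluates robustness against the PGD attack. As background for readers unfamiliar with it, the following is a minimal sketch of an L-infinity PGD attack, shown here against a simple linear softmax classifier with an analytic input gradient (the model, parameter names, and step sizes are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def input_gradient(W, b, x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x,
    # for a linear softmax classifier with logits W @ x + b.
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    return W.T @ (p - onehot)

def pgd_attack(W, b, x, y, eps=0.1, alpha=0.02, steps=5):
    """L-infinity PGD: take signed gradient-ascent steps on the loss,
    projecting back into the eps-ball around the clean input x.
    (eps, alpha, steps are illustrative defaults.)"""
    x_adv = x.copy()
    for _ in range(steps):
        g = input_gradient(W, b, x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to eps-ball
    return x_adv
```

The paper's argument that controlling the magnitude of input gradients improves robustness connects directly to this attack: the perturbation direction is derived from those gradients, so layers that bound them limit how much the loss can be driven up within the eps-ball.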

Publication Link: