Mask-Guided Divergence Loss Improves the Generalization and Robustness of Deep Neural Network

06/02/2022
by Xiangyuan Yang, et al.

A deep neural network (DNN) trained with dropout can be regarded as an ensemble of many sub-DNNs, where each sub-DNN is the part of the network that remains after dropout; increasing the diversity of this sub-DNN ensemble can effectively improve the generalization and robustness of the DNN. In this paper, a mask-guided divergence loss function (MDL), consisting of a cross-entropy loss term and an orthogonal term, is proposed to increase the diversity of the sub-DNN ensemble through the added orthogonal term. In particular, a masking technique is introduced to assist in generating the orthogonal term and to avoid overfitting during diversity learning. Theoretical analysis and extensive experiments on four datasets (MNIST, FashionMNIST, CIFAR10, and CIFAR100) show that MDL improves the generalization and robustness of both standard training and adversarial training. On CIFAR10 and CIFAR100, in standard training, the maximum improvement in accuracy is 1.38% on natural data, 30.97% under the FGSM (Fast Gradient Sign Method) attack, and 38.18% under the PGD (Projected Gradient Descent) attack; in adversarial training, the maximum improvement is 1.68% on natural data, 4.03% under FGSM, and 2.65% under PGD.
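
The abstract does not give the exact formulation, but the idea can be illustrated with a minimal PyTorch-style sketch. The assumptions here are mine: the function name `mask_guided_divergence_loss`, the weight `alpha`, and the specific orthogonal term (the inner product of two dropout-sampled sub-DNNs' predictive distributions) are illustrative placeholders, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def mask_guided_divergence_loss(model, x, y, alpha=0.1):
    """Hypothetical sketch of an MDL-style objective: cross-entropy plus an
    orthogonality (diversity) term between two dropout-sampled sub-DNNs."""
    model.train()  # keep dropout active so each forward pass samples a sub-DNN

    # Two forward passes under independent dropout masks give two sub-DNNs.
    logits_a = model(x)
    logits_b = model(x)

    # Cross-entropy term, averaged over the two sub-DNNs.
    ce = 0.5 * (F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y))

    # Orthogonal term (assumed form): penalize the inner product of the two
    # predictive distributions so the sub-DNNs spread probability mass
    # differently, encouraging ensemble diversity.
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    ortho = (p_a * p_b).sum(dim=1).mean()

    return ce + alpha * ortho


# Example usage with a toy classifier (names and shapes are illustrative).
model = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(784, 256), torch.nn.ReLU(),
    torch.nn.Dropout(0.5), torch.nn.Linear(256, 10),
)
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
loss = mask_guided_divergence_loss(model, x, y)
loss.backward()
```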
