Model Agnostic Dual Quality Assessment for Adversarial Machine Learning and an Analysis of Current Neural Networks and Defenses

06/14/2019 · by Danilo Vasconcellos Vargas, et al.

In adversarial machine learning, the sheer number and variety of attacks makes evaluating the robustness of new models and defenses a daunting task. To make matters worse, there is an inherent bias in both attacks and defenses. Here, we organize the problems faced (model dependence, insufficient evaluation, unreliable adversarial samples and perturbation-dependent results) and propose a dual quality assessment method, together with the concept of robustness levels, to tackle them. We validate the dual quality assessment on state-of-the-art models (WideResNet, ResNet, AllConv, DenseNet, NIN, LeNet and CapsNet) as well as on adversarial training and the hardest defenses proposed at ICLR 2018, showing that current models and defenses are vulnerable at all levels of robustness. Moreover, we show that robustness to L_0 and L_∞ attacks differs greatly, and therefore both norms (the duality in the method's name) should be taken into account for a correct assessment. Interestingly, a by-product of the proposed assessment is a novel L_∞ black-box attack which requires even less perturbation than the One-Pixel Attack (only 12% of its amount) to achieve similar results. Thus, this paper elucidates the problems of robustness evaluation, proposes a dual quality assessment to tackle them, and analyzes the robustness of current models and defenses. We hope the analysis and methods proposed here will aid the development of more robust deep neural networks and hybrids alike. Code available at: http://bit.ly/DualQualityAssessment
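
The duality above rests on two contrasting perturbation budgets: L_0 counts how many pixels are modified (few pixels, possibly large changes, as in the One-Pixel Attack), while L_∞ bounds how much any single value may change (every pixel may move, but only slightly). The following minimal Python/NumPy sketch is illustrative only, not the authors' code; the toy image and perturbations are made up to show how the two norms measure very different things:

import numpy as np

def l0_pixels(perturbation):
    # Number of modified pixels: the budget an L_0 attack minimizes.
    return int(np.count_nonzero(np.abs(perturbation).max(axis=-1)))

def linf_norm(perturbation):
    # Largest change to any single value: the budget an L_inf attack minimizes.
    return float(np.max(np.abs(perturbation)))

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))  # toy 32x32 RGB image with values in [0, 1]

# L_0-style perturbation: a single pixel changed, by a large amount.
p0 = np.zeros_like(image)
p0[5, 7] = [0.9, -0.4, 0.2]

# L_inf-style perturbation: every pixel changed, each only slightly.
pinf = rng.uniform(-0.01, 0.01, size=image.shape)

print(l0_pixels(p0), linf_norm(p0))      # 1 pixel touched, max change 0.9
print(l0_pixels(pinf), linf_norm(pinf))  # all 1024 pixels touched, max change < 0.01

A model can be robust under one budget and fragile under the other, since the two perturbations probe different failure modes; this is why the assessment evaluates both rather than a single norm.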
