Vulnerability Under Adversarial Machine Learning: Bias or Variance?

by Hossein Aboutalebi, et al.

Prior studies have unveiled the vulnerability of deep neural networks in the context of adversarial machine learning, attracting significant recent attention to this area. One interesting question that has yet to be fully explored is the bias-variance relationship of adversarial machine learning, which can potentially provide deeper insight into this behaviour. The notions of bias and variance are among the main approaches for analyzing and evaluating the generalization and reliability of a machine learning model. Although extensively used for other machine learning models, they are not well explored in the field of deep learning, and even less so in the area of adversarial machine learning. In this study, we investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network and analyze how adversarial perturbations affect the generalization of a network. We derive the bias-variance trade-off for both classification and regression applications based on two main loss functions: (i) mean squared error (MSE), and (ii) cross-entropy. Furthermore, we perform quantitative analysis with both simulated and real data to empirically evaluate consistency with the derived bias-variance trade-offs. Our analysis sheds light, from a bias-variance point of view, on why deep neural networks perform poorly under adversarial perturbation and how this type of perturbation changes the performance of a network. Moreover, given these new theoretical findings, we introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies (e.g., PGD) while achieving a high success rate in fooling deep neural networks at lower perturbation magnitudes.
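The squared-loss bias-variance decomposition the abstract refers to, E[(y − f̂(x))²] = bias² + variance + noise, can be illustrated with a small Monte-Carlo sketch. This is an illustrative example only: the data-generating function, the polynomial model, and all sample sizes below are our own assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # hypothetical ground-truth function (our assumption, not the paper's)
    return np.sin(x)

def fit_poly(x, y, degree):
    # least-squares polynomial fit; returns a prediction function
    coeffs = np.polyfit(x, y, degree)
    return lambda xs: np.polyval(coeffs, xs)

x_test = np.linspace(0.0, np.pi, 50)
noise_sd = 0.3

# Train the same model class on many resampled training sets and
# collect its predictions at the fixed test points.
preds = []
for _ in range(200):
    x_tr = rng.uniform(0.0, np.pi, 30)
    y_tr = true_fn(x_tr) + rng.normal(0.0, noise_sd, 30)
    preds.append(fit_poly(x_tr, y_tr, 3)(x_test))
preds = np.array(preds)

# Monte-Carlo estimates of the three terms of the decomposition,
# averaged over the test points.
bias_sq = np.mean((preds.mean(axis=0) - true_fn(x_test)) ** 2)
variance = preds.var(axis=0).mean()
expected_mse = bias_sq + variance + noise_sd ** 2
```

Repeating this estimate while varying model capacity (the polynomial degree here), or while perturbing the inputs adversarially, is the kind of empirical check the paper's quantitative analysis performs against its derived trade-offs.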





Robust Decentralized Learning for Neural Networks

In decentralized learning, data is distributed among local clients which...

Generalized Negative Correlation Learning for Deep Ensembling

Ensemble algorithms offer state-of-the-art performance in many machine l...

A Modern Take on the Bias-Variance Tradeoff in Neural Networks

We revisit the bias-variance tradeoff for neural networks in light of mo...

Local Intrinsic Dimensionality Signals Adversarial Perturbations

The vulnerability of machine learning models to adversarial perturbation...

A unifying approach on bias and variance analysis for classification

Standard bias and variance (B&V) terminologies were originally defined...

Bias-Variance Games

Firms engaged in electronic commerce increasingly rely on machine learni...

Fooling the classifier: Ligand antagonism and adversarial examples

Machine learning algorithms are sensitive to so-called adversarial pertu...