Adversarial Sampling for Fairness Testing in Deep Neural Network

03/06/2023
by   Tosin Ige, et al.

In this research, we focus on using adversarial sampling to test the fairness of a deep neural network model's predictions across the different classes of images in a given dataset. While several frameworks have been proposed to make machine learning models robust against adversarial attacks, including adversarial training algorithms, adversarial training still has the pitfall of causing a disparity in accuracy and robustness across different groups. Our research uses adversarial sampling to test for fairness in the predictions of a deep neural network model across the classes or categories of images in a given dataset, and we demonstrate a new method of ensuring fairness across the various groups of input to a deep neural network classifier. We trained our neural network model on the original images only, without training it on the perturbed or attacked images. When we fed the adversarial samples to our model, it was still able to predict the original category/class of the image each adversarial sample belongs to. We also introduced the separation-of-concerns concept from software engineering: an additional standalone filter layer filters a perturbed image by heavily removing the noise or attack before automatically passing it to the network for classification. With this filter we achieved an accuracy of 93.3%. To account for fairness, we applied our hypothesis to each category of the dataset and obtained consistent results and accuracy.
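The filter-then-classify pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the median filter standing in for the noise-removal layer, the threshold classifier, and all function names are hypothetical assumptions, chosen only to show the separation of concerns (denoising is a standalone stage in front of the classifier) and the per-class fairness check.

```python
import numpy as np

def denoise_filter(image, k=3):
    """Standalone filter layer (separation of concerns): a median filter
    that suppresses salt-and-pepper-style perturbations before the image
    ever reaches the classifier. Hypothetical stand-in for the paper's
    noise-removal layer."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def classify(image):
    """Hypothetical stand-in classifier trained on clean images only:
    labels an image by its mean intensity."""
    return int(image.mean() > 0.5)

def per_class_accuracy(images, labels):
    """Fairness check: accuracy is computed separately for each class,
    so a disparity between groups is visible rather than averaged away."""
    acc = {}
    for c in set(labels):
        idx = [i for i, y in enumerate(labels) if y == c]
        correct = sum(classify(denoise_filter(images[i])) == c for i in idx)
        acc[c] = correct / len(idx)
    return acc

# Usage: two classes (dark ~0.2, bright ~0.8), each perturbed by flipping
# 30% of pixels to the opposite extreme, then filtered and classified.
rng = np.random.default_rng(0)
images, labels = [], []
for c in (0, 1):
    img = np.full((16, 16), 0.2 if c == 0 else 0.8)
    mask = rng.random(img.shape) < 0.3          # adversarial-noise stand-in
    img[mask] = 1.0 if c == 0 else 0.0
    images.append(img)
    labels.append(c)
print(per_class_accuracy(images, labels))
```

Keeping the filter as its own function, rather than folding denoising into the classifier, mirrors the separation-of-concerns design in the abstract: the classifier never needs retraining on attacked images, and the filter can be swapped or tuned independently.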


