A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models

10/17/2020
by Ferhat Ozgur Catak, et al.

In recent years, machine learning algorithms have been widely applied in fields such as healthcare, transportation, and autonomous vehicles. With the rapid development of deep learning techniques, it is critical to take security concerns into account when deploying these algorithms. While machine learning offers significant advantages, the security of trained models is often ignored, even though these algorithms are used in many real-world applications where security is vital. In this paper, we propose a mitigation method against adversarial attacks on machine learning models based on an autoencoder, a type of generative model. The main idea behind adversarial attacks is to force a trained model into producing erroneous results through carefully crafted input perturbations. We also evaluate the performance of the autoencoder defense against a range of attack methods, from deep neural networks to traditional algorithms: non-targeted and targeted attacks on multi-class logistic regression, and the fast gradient sign method (FGSM), targeted FGSM, and basic iterative method (BIM) attacks on neural networks, all on the MNIST dataset.
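
As a rough illustration of the setting described above, the sketch below pairs a non-targeted FGSM attack with an autoencoder used as a pre-processing defense. It is a minimal sketch in PyTorch assuming MNIST-shaped inputs; the architectures, the fgsm_attack and defended_predict helpers, and the epsilon value are illustrative assumptions, not the paper's actual implementation.

    import torch
    import torch.nn as nn

    # Hypothetical victim classifier (architecture is illustrative,
    # not the paper's exact model).
    classifier = nn.Sequential(
        nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
    )
    loss_fn = nn.CrossEntropyLoss()

    def fgsm_attack(model, x, y, eps=0.1):
        """Non-targeted FGSM: perturb x in the direction of the sign of
        the loss gradient with respect to the input."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

    # Autoencoder used as a pre-processing defense: inputs are passed
    # through the autoencoder before classification, so that the
    # reconstruction removes (some of) the adversarial perturbation.
    autoencoder = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 64), nn.ReLU(),    # encoder
        nn.Linear(64, 28 * 28), nn.Sigmoid()  # decoder
    )

    def defended_predict(x):
        x_recon = autoencoder(x).view(-1, 1, 28, 28)
        return classifier(x_recon).argmax(dim=1)

    # Usage on random MNIST-shaped data (in the paper's setting, both
    # models would first be trained on MNIST):
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm_attack(classifier, x, y, eps=0.1)
    print(defended_predict(x_adv))

The design choice here is that the defense never modifies the classifier itself: the autoencoder sits in front of it as a filter, which is what makes the approach applicable to both deep neural networks and traditional models such as logistic regression.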

Related research

03/12/2021
Adversarial Machine Learning Security Problems for 6G: mmWave Beam Prediction Use-Case
6G is the next generation of communication systems. In recent years...

05/09/2021
Security Concerns on Machine Learning Solutions for 6G Networks in mmWave Beam Prediction
6G – sixth generation – is the latest cellular technology currently unde...

08/23/2021
Kryptonite: An Adversarial Attack Using Regional Focus
With the rise of adversarial machine learning and increasingly robust ad...

12/07/2018
Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach
Machine learning models are vulnerable to adversarial attacks that rely ...

01/06/2023
Linear and non-linear machine learning attacks on physical unclonable functions
In this thesis, several linear and non-linear machine learning attacks o...

11/18/2019
Hacking Neural Networks: A Short Introduction
A large chunk of research on the security issues of neural networks is f...

12/08/2020
A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models
Deep learning algorithms have been recently targeted by attackers due to...
