Defending Against Adversarial Machine Learning

11/26/2019
by Alison Jenkins, et al.

An adversarial system that mounts attacks and an Authorship Attribution System (AAS) that defends itself against them are analyzed. A system can defend itself against an adversarial machine learner by randomly switching between the models it uses, by detecting and reacting to deviations from the distribution of normal inputs, or by other methods. The adversary applies machine learning to identify the mapping from system inputs to outputs. Three types of machine learners are used to model the system under attack: a Radial Basis Function Support Vector Machine, a Linear Support Vector Machine, and a Feedforward Neural Network. Feature masks for these models are evolved using accuracy as the fitness measure. The system defends itself against adversarial attacks both by flagging inputs that do not match the probability distribution of normal inputs and by randomly switching between the feature masks used to map system inputs to outputs.
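As an illustration of the mask-evolution step, the sketch below (not the authors' code; the synthetic dataset, population size, mutation rate, and scikit-learn model choices are assumptions made for the example) evolves one binary feature mask per learner, using cross-validated accuracy as the fitness measure, for the three model families named in the abstract.

```python
# Minimal sketch: evolve binary feature masks with accuracy as fitness.
# Dataset, GA hyperparameters, and model settings are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC, LinearSVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           random_state=0)

MODELS = {
    "rbf_svm": lambda: SVC(kernel="rbf", gamma="scale"),
    "linear_svm": lambda: LinearSVC(max_iter=5000),
    "ffnn": lambda: MLPClassifier(hidden_layer_sizes=(16,), max_iter=300),
}

def fitness(mask, model_name):
    """Cross-validated accuracy of one model restricted to the masked features."""
    if mask.sum() == 0:                       # an empty mask cannot classify
        return 0.0
    clf = MODELS[model_name]()
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def evolve_mask(model_name, pop_size=16, generations=10, mutation_rate=0.1):
    """Simple genetic algorithm: keep the fittest masks, bit-flip mutate them."""
    pop = rng.integers(0, 2, size=(pop_size, X.shape[1]))
    for _ in range(generations):
        scores = np.array([fitness(m, model_name) for m in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # fittest half survives
        children = parents.copy()
        flips = rng.random(children.shape) < mutation_rate   # bit-flip mutation
        children[flips] ^= 1
        pop = np.vstack([parents, children])
    scores = np.array([fitness(m, model_name) for m in pop])
    return pop[scores.argmax()]

# One evolved mask per learner, each scored by its own accuracy.
masks = {name: evolve_mask(name) for name in MODELS}
```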

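The runtime defenses can be sketched in the same spirit. The snippet below is again an illustrative assumption rather than the paper's implementation: it reuses MODELS, masks, X, and y from the previous sketch, and substitutes a simple Mahalanobis-distance test (with a hypothetical threshold) for whatever probability-distribution check the authors actually use. It rejects inputs that fall far from the distribution of normal inputs and randomly switches between the evolved feature masks on every prediction.

```python
# Minimal sketch of the two defenses: out-of-distribution rejection and
# random switching between feature masks.  Reuses MODELS, masks, X, y above;
# the Mahalanobis test and threshold are assumptions for illustration.
import numpy as np

class DefendedClassifier:
    def __init__(self, models, masks, X_train, y_train, threshold=3.0):
        self.masks = masks
        self.threshold = threshold
        self.rng = np.random.default_rng()
        # Fit one classifier per evolved mask on its own feature subset.
        self.fitted = {}
        for name, mask in masks.items():
            clf = models[name]()
            clf.fit(X_train[:, mask.astype(bool)], y_train)
            self.fitted[name] = clf
        # Statistics of normal inputs for the distribution check.
        self.mean = X_train.mean(axis=0)
        self.cov_inv = np.linalg.pinv(np.cov(X_train, rowvar=False))

    def _in_distribution(self, x):
        """Accept inputs whose Mahalanobis distance from the training data is small."""
        d = x - self.mean
        return np.sqrt(d @ self.cov_inv @ d) <= self.threshold * np.sqrt(len(x))

    def predict(self, x):
        if not self._in_distribution(x):
            return None                               # reject suspected adversarial probe
        name = self.rng.choice(list(self.fitted))     # random model/mask switch per query
        mask = self.masks[name].astype(bool)
        return self.fitted[name].predict(x[mask].reshape(1, -1))[0]

defender = DefendedClassifier(MODELS, masks, X, y)
print(defender.predict(X[0]))
```

Randomizing the mask on each query means an adversary probing the system never observes a single fixed input-output mapping, which is the intuition behind the switching defense described in the abstract.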