Exploring Adversarial Examples for Efficient Active Learning in Machine Learning Classifiers

09/22/2021
by Honggang Yu, et al.

Machine learning researchers have long observed that model training is more effective and efficient when the training samples are densely sampled near the underlying decision boundary. While this observation is already widely exploited in a range of machine learning security techniques, a theoretical analysis of its correctness has been lacking. To address this gap, we first add carefully crafted perturbations to the original training examples using adversarial attack methods, so that the generated examples lie approximately on the decision boundary of the ML classifier. We then investigate the connection between active learning and these boundary examples. By analyzing several representative classifiers, including k-NN classifiers, kernel methods, and deep neural networks, we establish a theoretical foundation for the observation. Our proofs thereby support more efficient active learning methods built on adversarial examples, in contrast to prior work where adversarial examples are used primarily as attacks. Experimental results show that the established theoretical foundation guides better adversarial-example-based active learning strategies.
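To make the pipeline in the abstract concrete, here is a minimal sketch of adversarial-example-guided active learning, assuming a binary linear classifier. The closed-form minimal perturbation onto the hyperplane (the linear-case step of DeepFool) stands in for the paper's unspecified attack methods, and all function names (fit_logistic, boundary_perturbation, adversarial_query) are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, lr=0.1, epochs=200):
    """Train a simple binary logistic-regression classifier (labels in {0, 1})."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                                # gradient of the log loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def boundary_perturbation(X, w, b):
    """Minimal perturbation moving each point onto the decision hyperplane
    w.x + b = 0 (DeepFool's closed form for the linear case)."""
    margins = X @ w + b
    return -np.outer(margins / (w @ w), w)       # delta_i = -(w.x_i + b) w / ||w||^2

def adversarial_query(X_pool, w, b, k=5):
    """Active-learning query: pick the k pool points whose adversarial
    perturbation to the boundary is smallest, i.e. the least-certain points."""
    deltas = boundary_perturbation(X_pool, w, b)
    dist = np.linalg.norm(deltas, axis=1)
    return np.argsort(dist)[:k]

# Toy demo: two Gaussian blobs, a small labeled seed set, and an unlabeled pool.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
labeled = rng.choice(len(X), 10, replace=False)
pool = np.setdiff1d(np.arange(len(X)), labeled)

w, b = fit_logistic(X[labeled], y[labeled])
picked = pool[adversarial_query(X[pool], w, b, k=5)]
print("query these pool indices next:", picked)
```

For nonlinear models such as deep networks, the closed form would be replaced by an iterative attack (e.g., DeepFool or PGD), but the selection rule stays the same: query the points that need the smallest perturbation to cross the decision boundary.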


Related Research

research · 02/27/2018
Adversarial Active Learning for Deep Networks: a Margin Based Approach
We propose a new active learning strategy designed for deep neural netwo...

research · 09/03/2021
A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
In this work, we show how to jointly exploit adversarial perturbation an...

research · 06/27/2019
Using Intuition from Empirical Properties to Simplify Adversarial Training Defense
Due to the surprisingly good representation power of complex distributio...

research · 06/21/2018
Detecting Adversarial Examples Based on Steganalysis
Deep Neural Networks (DNNs) have recently led to significant improvement...

research · 03/27/2021
Improving Model Robustness by Adaptively Correcting Perturbation Levels with Active Queries
In addition to high accuracy, robustness is becoming increasingly import...

research · 12/01/2016
A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples
Most machine learning classifiers, including deep neural networks, are v...

research · 07/10/2018
Fooling the classifier: Ligand antagonism and adversarial examples
Machine learning algorithms are sensitive to so-called adversarial pertu...
