Intriguing Usage of Applicability Domain: Lessons from Cheminformatics Applied to Adversarial Learning

05/02/2021
by Luke Chang, et al.

Defending machine learning models from adversarial attacks remains a challenge: to date, no robust model is entirely immune to adversarial examples. Many defences have been proposed, but most are tailored to particular ML models and adversarial attacks, which strongly limits their effectiveness and applicability. A similar problem plagues cheminformatics: Quantitative Structure-Activity Relationship (QSAR) models struggle to predict biological activity across the entire chemical space because they are trained on a very limited number of compounds with known effects. This problem is mitigated by a technique called Applicability Domain (AD), which rejects compounds that are unsuitable for the model. Adversarial examples are intentionally crafted inputs that exploit blind spots the model has not learned to classify, and adversarial defences try to make the classifier more robust by covering these blind spots. There is an apparent similarity between AD and adversarial defences. Inspired by the concept of AD, we propose a multi-stage data-driven defence that tests for: Applicability: abnormal values, namely inputs not compliant with the intended use case of the model; Reliability: samples far from the training data; and Decidability: samples whose predictions contradict the predictions of their neighbours. It can be applied to any classification model and is not limited to specific types of adversarial attacks. Through an empirical analysis, this paper demonstrates how the Applicability Domain concept can effectively reduce the vulnerability of ML models to adversarial examples.
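To make the three stages concrete, below is a minimal sketch of such a defence, assuming scikit-learn and a generic fitted classifier with integer class labels. The class name `ApplicabilityDomainDefence`, the feature-range check, the k-NN distance threshold, and the neighbour-majority test are illustrative assumptions following the abstract's description, not the paper's actual implementation.

```python
# Hedged sketch of a three-stage Applicability Domain defence (assumed
# design, not the paper's code): Applicability rejects out-of-range inputs,
# Reliability rejects samples far from the training data, Decidability
# rejects predictions that contradict their nearest neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors


class ApplicabilityDomainDefence:
    def __init__(self, classifier, k=5, reliability_quantile=0.95):
        self.clf = classifier  # any fitted model exposing .predict()
        self.k = k             # illustrative neighbourhood size
        self.q = reliability_quantile  # illustrative distance cutoff

    def fit(self, X_train, y_train):
        X_train = np.asarray(X_train)
        # Stage 1 (Applicability): per-feature ranges seen in training,
        # a crude proxy for "the intended use case of the model".
        self.lo, self.hi = X_train.min(axis=0), X_train.max(axis=0)
        # Stage 2 (Reliability): mean k-NN distance within the training
        # set defines how far a query may sit from known data.
        self.nn = NearestNeighbors(n_neighbors=self.k).fit(X_train)
        d, _ = self.nn.kneighbors(X_train)
        self.dist_threshold = np.quantile(d.mean(axis=1), self.q)
        self.y_train = np.asarray(y_train)  # assumed non-negative ints
        return self

    def predict(self, X):
        """Return class labels, or -1 for samples the defence rejects."""
        X = np.asarray(X)
        preds = self.clf.predict(X)
        d, idx = self.nn.kneighbors(X)
        for i in range(len(X)):
            if np.any(X[i] < self.lo) or np.any(X[i] > self.hi):
                preds[i] = -1  # Applicability: abnormal feature values
            elif d[i].mean() > self.dist_threshold:
                preds[i] = -1  # Reliability: too far from training data
            else:
                # Decidability: prediction must agree with the majority
                # label among the k nearest training neighbours.
                majority = np.bincount(self.y_train[idx[i]]).argmax()
                if preds[i] != majority:
                    preds[i] = -1
        return preds
```

Any rejected sample (label -1) can then be withheld from downstream use or flagged for human review, which is how an AD-style filter reduces exposure to adversarial inputs without retraining the underlying classifier.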


