URET: Universal Robustness Evaluation Toolkit (for Evasion)

08/03/2023
by Kevin Eykholt et al.

Machine learning models are known to be vulnerable to adversarial evasion attacks, as illustrated by attacks on image classification models. Thoroughly understanding such attacks is critical for ensuring the safety and robustness of critical AI tasks. However, most evasion attacks are difficult to deploy against the majority of AI systems because they have focused on the image domain, which imposes few constraints. An image is composed of homogeneous, numerical, continuous, and independent features, unlike many other input types to AI systems used in practice. Furthermore, some input types include additional semantic and functional constraints that must be observed to generate realistic adversarial inputs. In this work, we propose a new framework to enable the generation of adversarial inputs irrespective of the input type and task domain. Given an input and a set of pre-defined input transformations, our framework discovers a sequence of transformations that results in a semantically correct and functional adversarial input. We demonstrate the generality of our approach on several diverse machine learning tasks with various input representations. We also show the importance of generating such adversarial examples, as they enable the deployment of mitigation techniques.
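The core idea of the abstract, searching for a sequence of pre-defined transformations that flips a model's prediction while respecting validity constraints, can be sketched as a simple greedy loop. This is a minimal illustrative sketch, not the actual URET API; the function names (`find_adversarial_sequence`, `is_valid`, `predict`) and the greedy strategy are assumptions for illustration only.

```python
# Hypothetical sketch (NOT the URET API): greedily search for a sequence
# of input transformations that changes a classifier's decision while
# keeping the input semantically valid.
from typing import Any, Callable, List, Optional, Tuple

def find_adversarial_sequence(
    x: Any,
    transforms: List[Callable[[Any], Any]],
    predict: Callable[[Any], int],
    is_valid: Callable[[Any], bool],
    max_steps: int = 10,
) -> Tuple[Optional[Any], List[str]]:
    """Apply transformations until the predicted label changes.

    Returns (adversarial_input, names_of_applied_transforms),
    or (None, []) if no adversarial input is found within max_steps.
    """
    original_label = predict(x)
    current, sequence = x, []
    for _ in range(max_steps):
        for t in transforms:
            candidate = t(current)
            # Enforce semantic/functional constraints on each candidate.
            if not is_valid(candidate):
                continue
            if predict(candidate) != original_label:
                # Evasion succeeded: label changed on a valid input.
                return candidate, sequence + [t.__name__]
            # Otherwise keep the first valid transformation and continue.
            current, sequence = candidate, sequence + [t.__name__]
            break
        else:
            break  # no valid transformation applicable; give up
    return None, []
```

A real implementation would replace the greedy step with a guided search (e.g. beam search scored by the model's loss), but the structure above captures the transformation-sequence framing: the attack is defined over domain transformations rather than raw feature perturbations, so validity constraints can be checked at every step.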


Related research

- 08/26/2018, Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge: "Adversarial examples are inputs to machine learning models designed to c..."
- 05/18/2021, On the Robustness of Domain Constraints: "Machine learning is vulnerable to adversarial examples-inputs designed t..."
- 08/16/2016, Towards Evaluating the Robustness of Neural Networks: "Neural networks provide state-of-the-art results for most machine learni..."
- 11/23/2021, Adversarial machine learning for protecting against online manipulation: "Adversarial examples are inputs to a machine learning system that result..."
- 01/05/2022, ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints: "Advances in deep learning have enabled a wide range of promising applica..."
- 02/24/2021, Adversarial Robustness with Non-uniform Perturbations: "Robustness of machine learning models is critical for security related a..."
- 11/19/2019, Deep Detector Health Management under Adversarial Campaigns: "Machine learning models are vulnerable to adversarial inputs that induce..."
