Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models

07/13/2017
by Jonas Rauber, et al.

Even today's most advanced machine learning models are easily fooled by almost imperceptible perturbations of their inputs. Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models. It is built around the idea that the most comparable robustness measure is the minimum perturbation needed to craft an adversarial example. To this end, Foolbox provides reference implementations of most published adversarial attack methods alongside some new ones, all of which perform internal hyperparameter tuning to find the minimum adversarial perturbation. Additionally, Foolbox interfaces with the most popular deep learning frameworks, such as PyTorch, Keras, TensorFlow, Theano, and MXNet, provides a straightforward way to add support for other frameworks, and allows different adversarial criteria, such as targeted misclassification and top-k misclassification, as well as different distance measures. The code is licensed under the MIT license and is openly available at https://github.com/bethgelab/foolbox.
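To illustrate the core idea of searching for the minimum adversarial perturbation, here is a minimal NumPy sketch (not Foolbox's actual implementation): a fast gradient sign attack on a toy linear softmax classifier, combined with a line search over the step size for the smallest perturbation that changes the prediction. All names (`fgsm_min_perturbation`, the toy weights) are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_min_perturbation(W, b, x, label, epsilons):
    """Toy sketch: find the smallest FGSM step that flips the
    prediction of a linear softmax classifier (not Foolbox code)."""
    # cross-entropy gradient w.r.t. the input for a linear model:
    # W^T (softmax(Wx + b) - onehot(label))
    p = softmax(W @ x + b)
    p[label] -= 1.0
    direction = np.sign(W.T @ p)
    # line search over step sizes, smallest first
    for eps in sorted(epsilons):
        x_adv = x + eps * direction
        if np.argmax(W @ x_adv + b) != label:
            return x_adv, eps  # first (smallest) adversarial step
    return None, None  # no epsilon in the list was adversarial

# usage on a hypothetical 2-class linear model
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([1.0, 0.5])           # classified as class 0
x_adv, eps = fgsm_min_perturbation(W, b, x, 0, [0.1, 0.2, 0.3, 0.5])
# -> eps = 0.3, and x_adv is classified as class 1
```

Foolbox's attacks generalize this pattern: instead of a fixed epsilon grid, each attack tunes its own hyperparameters internally so that the reported perturbation approximates the minimum for the chosen distance measure.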


