advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch

02/20/2019
by Gavin Weiguang Ding, et al.

advertorch is a toolbox for adversarial robustness research. It contains various implementations of attacks, defenses, and robust training methods. advertorch is built on PyTorch (Paszke et al., 2017) and leverages the advantages of the dynamic computational graph to provide concise and efficient reference implementations. The code is licensed under the LGPL and is open-sourced at https://github.com/BorealisAI/advertorch.
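For context, here is a minimal sketch of how an attack is typically instantiated and applied with advertorch. The LinfPGDAttack class and its perturb method follow the library's public API; the toy model, random data, and hyperparameter values below are illustrative assumptions, not a prescribed configuration.

import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# Toy classifier and random batch stand in for a real pretrained model and dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
cln_data = torch.rand(8, 1, 28, 28)          # inputs scaled to [0, 1]
true_label = torch.randint(0, 10, (8,))

# L-inf PGD attack; hyperparameter values here are illustrative only.
adversary = LinfPGDAttack(
    model,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.3,          # maximum L-inf perturbation
    nb_iter=40,       # number of PGD iterations
    eps_iter=0.01,    # step size per iteration
    rand_init=True,   # random start inside the eps-ball
    clip_min=0.0,
    clip_max=1.0,
    targeted=False,
)

# Craft adversarial examples for the clean batch.
adv_data = adversary.perturb(cln_data, true_label)
print(adv_data.shape)  # same shape as the clean batch

Defenses and robust training utilities in the toolbox are composed in the same way: an attack object like the one above is wrapped into the training loop to generate adversarial examples on the fly.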
