Torchattacks: A PyTorch Repository for Adversarial Attacks

09/24/2020
by Hoki Kim, et al.

Torchattacks is a PyTorch library that contains adversarial attacks to generate adversarial examples and to verify the robustness of deep learning models. The code can be found at https://github.com/Harry24k/adversarial-attacks-pytorch.

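Typical usage follows a small attack-object pattern: wrap a model in an attack class, then call the attack on a batch of inputs and labels to obtain perturbed inputs. The sketch below is illustrative rather than a verified excerpt from the repository: the PGD class, its eps/alpha/steps arguments, and the callable attack object reflect the interface documented in the linked repository, while the model, data, and hyperparameter values are placeholders.

    import torch
    import torchattacks  # pip install torchattacks
    from torchvision import models

    # Any classifier that maps images to logits can be attacked; a
    # randomly initialized ResNet-18 stands in for a trained model here.
    model = models.resnet18().eval()

    # Build a PGD attack, one of the attacks the library provides.
    # eps, alpha, and steps are illustrative hyperparameter values.
    atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=10)

    # torchattacks expects image inputs scaled to [0, 1].
    images = torch.rand(4, 3, 224, 224)    # placeholder batch
    labels = torch.randint(0, 1000, (4,))  # placeholder labels

    # Generate adversarial examples and measure robust accuracy.
    adv_images = atk(images, labels)
    adv_preds = model(adv_images).argmax(dim=1)
    print("robust accuracy:", (adv_preds == labels).float().mean().item())

Robust accuracy, the fraction of adversarial examples the model still classifies correctly, is the usual metric for the robustness-verification step the abstract mentions.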