A Simple Structure for Building a Robust Model

04/25/2022
by Xiao Tan, et al.

As deep learning applications, especially computer vision programs, are increasingly deployed in everyday life, the security of these applications demands more urgent attention. One effective way to improve the security of deep learning models is adversarial training, which exposes the model to samples deliberately crafted to attack it. Building on this, we propose a simple architecture for training a model with a degree of robustness: it improves the robustness of the trained network by adding an adversarial-sample detection network for cooperative training. At the same time, we design a new data sampling strategy that incorporates multiple existing attacks, allowing the model to adapt to many different adversarial attacks within a single training run. We conducted experiments on the CIFAR-10 dataset to test the effectiveness of this design, and the results indicate that it has a positive effect on the robustness of the model. Our code can be found at https://github.com/dowdyboy/simple_structure_for_robust_model.
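The sampling strategy described above could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the attack pool, the sampling weights, and the `make_batch` helper are all assumptions, and the actual perturbation step is left as a placeholder.

```python
import random

# Hypothetical attack pool and sampling weights; "clean" means no perturbation.
ATTACKS = ["clean", "fgsm", "pgd", "cw"]
WEIGHTS = [0.4, 0.2, 0.2, 0.2]

def sample_attack(rng=random):
    """Pick which attack (if any) perturbs the next training batch."""
    return rng.choices(ATTACKS, weights=WEIGHTS, k=1)[0]

def make_batch(images, rng=random):
    """Return the (possibly perturbed) batch plus a 0/1 adversarial flag.

    The flag would supervise the cooperative detection network, while the
    images themselves train the classifier, so one training run sees a mix
    of clean samples and several attack types.
    """
    attack = sample_attack(rng)
    is_adv = 0 if attack == "clean" else 1
    # A real implementation would apply the chosen attack here, e.g.
    # images = perturb(images, attack)
    return images, is_adv, attack
```

Because the attack is resampled per batch, a single training run covers every attack in the pool rather than hardening the model against one fixed attack.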


Related research

01/08/2023 — RobArch: Designing Robust Architectures against Adversarial Attacks
Adversarial Training is the most effective approach for improving the ro...

06/15/2019 — Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks
Adversarial training was introduced as a way to improve the robustness o...

01/30/2023 — Identifying Adversarially Attackable and Robust Samples
This work proposes a novel perspective on adversarial attacks by introdu...

07/14/2023 — Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation
It is imperative to ensure the robustness of deep learning models in cri...

09/10/2023 — Outlier Robust Adversarial Training
Supervised learning models are challenged by the intrinsic complexities ...

03/27/2023 — Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks
Unlearnable example attacks are data poisoning techniques that can be us...

12/25/2020 — Robustness, Privacy, and Generalization of Adversarial Training
Adversarial training can considerably robustify deep neural networks to ...
