Adversarial Attack for Asynchronous Event-based Data

12/27/2021
by   Wooju Lee, et al.

Deep neural networks (DNNs) are vulnerable to adversarial examples that are carefully designed to cause a deep learning model to make mistakes. Adversarial examples for 2D images and 3D point clouds have been extensively studied, but studies on event-based data are limited. Event-based data can be an alternative to a 2D image under high-speed motion, such as in autonomous driving. However, adversarial events can make current deep learning models vulnerable, raising safety issues. In this work, we generate adversarial examples and then train robust models for event-based data, for the first time. Our algorithm shifts the times of the original events and generates additional adversarial events. The additional adversarial events are generated in two stages. First, null events are added to the event-based data; the perturbation size can be controlled through the number of null events. Second, the locations and times of the additional adversarial events are set by a gradient-based attack so as to mislead the DNN. Our algorithm achieves an attack success rate of 97.95% on the N-Caltech101 dataset. Furthermore, the adversarially trained model is more robust to adversarial event data than the original model.
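The two-stage scheme described above (pad the event stream with null events, then perturb event times in the gradient direction) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `(x, y, t, p)` event layout, the `add_null_events` helper, the sensor size, and the FGSM-style signed-gradient time step are all assumptions made for the sketch.

```python
import numpy as np

def add_null_events(events, n_null, rng, width=64, height=64):
    """Stage 1: append n_null placeholder 'null' events (polarity 0).

    The number of null events controls the perturbation budget; the
    attack later assigns them adversarial locations and times.
    Event rows are assumed to be (x, y, t, p).
    """
    xs = rng.integers(0, width, n_null)
    ys = rng.integers(0, height, n_null)
    ts = rng.uniform(events[:, 2].min(), events[:, 2].max(), n_null)
    ps = np.zeros(n_null)  # null polarity marks the added events
    null = np.stack([xs, ys, ts, ps], axis=1)
    return np.concatenate([events, null], axis=0)

def fgsm_time_shift(events, grad_t, eps):
    """Stage 2 (time component): shift each timestamp by eps in the
    sign of the loss gradient w.r.t. time, clipped to the original
    recording window. grad_t would come from backpropagating the
    classifier's loss through a differentiable event representation.
    """
    t_min, t_max = events[:, 2].min(), events[:, 2].max()
    shifted = events.copy()
    shifted[:, 2] = np.clip(events[:, 2] + eps * np.sign(grad_t),
                            t_min, t_max)
    return shifted

# Toy usage with random events and a random surrogate gradient.
rng = np.random.default_rng(0)
events = np.stack([rng.integers(0, 64, 10),
                   rng.integers(0, 64, 10),
                   np.sort(rng.uniform(0.0, 1.0, 10)),
                   rng.choice([-1.0, 1.0], 10)], axis=1)
augmented = add_null_events(events, 5, rng)
adversarial = fgsm_time_shift(augmented, rng.standard_normal(15), 0.01)
```

In a real attack, `grad_t` is obtained by converting the event stream into a differentiable tensor representation (e.g., an event spike tensor), running the target classifier, and backpropagating the loss to each event's timestamp; the signed step is then iterated as in standard gradient-based attacks.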


