Sardino: Ultra-Fast Dynamic Ensemble for Secure Visual Sensing at Mobile Edge

04/18/2022
by Qun Song, et al.

Adversarial example attacks endanger mobile edge systems, such as vehicles and drones, that adopt deep neural networks for visual sensing. This paper presents Sardino, an active and dynamic defense approach that renews the inference ensemble at run time to develop security against an adaptive adversary who attempts to exfiltrate the ensemble and construct correspondingly effective adversarial examples. By applying a consistency check and data fusion to the ensemble's predictions, Sardino can detect and thwart adversarial inputs. Compared with training-based ensemble renewal, Sardino uses a HyperNet to achieve an acceleration of one million times, enabling per-frame ensemble renewal that presents the highest level of difficulty to the prerequisite exfiltration attacks. Moreover, the robustness of the renewed ensembles against adversarial examples is enhanced by adversarial learning for the HyperNet. A run-time planner maximizes the ensemble size in favor of security while maintaining the processing frame rate. Beyond adversarial examples, Sardino also effectively addresses out-of-distribution inputs. This paper presents an extensive evaluation of Sardino's performance in counteracting adversarial examples and applies it to build a real-time, car-borne traffic sign recognition system. Live on-road tests show the built system's effectiveness in maintaining the frame rate and in detecting out-of-distribution inputs caused by false positives of a preceding YOLO-based traffic sign detector.
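To make the consistency-check-and-fusion idea concrete, below is a minimal sketch assuming each ensemble member emits a softmax vector for the current frame. The function name, the majority-vote agreement rule, the 0.5 threshold, and softmax averaging as the fusion rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch (not the authors' code) of an ensemble consistency check and
# data fusion step for flagging adversarial inputs.
import numpy as np

def detect_and_fuse(ensemble_probs, agreement_threshold=0.5):
    """ensemble_probs: array of shape (n_models, n_classes), one softmax
    vector per ensemble member for the current frame.
    Returns (is_suspicious, fused_label)."""
    ensemble_probs = np.asarray(ensemble_probs)
    votes = ensemble_probs.argmax(axis=1)       # per-member predicted class
    majority = np.bincount(votes).argmax()      # majority-voted class
    agreement = np.mean(votes == majority)      # fraction agreeing with majority

    # Consistency check: low agreement across the ensemble flags the input.
    is_suspicious = agreement < agreement_threshold

    # Data fusion: average the softmax outputs and take the arg-max as the label.
    fused_label = int(ensemble_probs.mean(axis=0).argmax())
    return bool(is_suspicious), fused_label

# Example: a 5-member ensemble classifying one 3-class input.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.80, 0.10, 0.10],
                  [0.10, 0.70, 0.20],
                  [0.85, 0.10, 0.05],
                  [0.75, 0.20, 0.05]])
print(detect_and_fuse(probs))  # (False, 0): 4 of 5 members agree on class 0
```

In the paper's setting, the ensemble members themselves would be regenerated per frame by the HyperNet, so the set of weights behind `ensemble_probs` changes on every inference.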

