Blind Adversarial Network Perturbations

02/16/2020
by Milad Nasr, et al.

Deep Neural Networks (DNNs) are commonly used for various traffic analysis problems, such as website fingerprinting and flow correlation, as they outperform traditional (e.g., statistical) techniques by large margins. However, deep neural networks are known to be vulnerable to adversarial examples: inputs with small, carefully crafted perturbations that cause the model to produce incorrect labels. In this paper, for the first time, we show that an adversary can defeat DNN-based traffic analysis techniques by applying adversarial perturbations to the patterns of live network traffic.
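The notion of an adversarial example can be sketched with the classic Fast Gradient Sign Method (FGSM). The snippet below is a minimal illustration on a toy logistic model; it is not the paper's traffic-perturbation technique, and the weights, input, and epsilon are made-up values chosen so the label flip is visible:

```python
import math

def fgsm(x, y, w, b, eps):
    # FGSM sketch: perturb x by eps in the direction of the sign of the
    # loss gradient w.r.t. the input (illustrative toy model only; the
    # paper itself perturbs live traffic patterns, not feature vectors).
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-s))          # sigmoid output
    grad = [(p - y) * wi for wi in w]       # d(cross-entropy)/dx
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

def classify(x, w, b):
    # Predicted class of the toy linear model: 1 if score > 0, else 0.
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

w, b = [1.0, -1.0], 0.0
x = [0.1, -0.1]                     # clean input, classified as 1
x_adv = fgsm(x, y=1, w=w, b=b, eps=0.3)
print(classify(x, w, b), classify(x_adv, w, b))  # 1 0
```

A small perturbation (eps = 0.3 per feature) flips the toy model's prediction from 1 to 0; the paper's contribution is achieving this effect blindly, on live network traffic, against traffic analysis DNNs.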


