UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of Perturbed Samples

04/18/2022
by   Rahim Taheri, et al.

A rising number of botnet families have been successfully detected using deep learning architectures. As the variety of attacks grows, these architectures must become more robust, since they have been shown to be very sensitive to small but well-constructed perturbations of the input. Botnet detection requires extremely low false-positive rates (FPR), which are not commonly attainable with contemporary deep learning, and attackers try to increase the FPR by crafting poisoned samples. Most recent research has focused on using model loss functions to build adversarial examples and robust models. In this paper, two LSTM-based classification algorithms for botnet classification with an accuracy higher than 98% are proposed. An adversarial attack is then proposed that reduces the accuracy to about 30%. By examining methods for computing uncertainty, a defense method is proposed that increases the accuracy to about 70%. Using stochastic weight averaging quantification methods, the uncertainty of the accuracy of the proposed methods is investigated.
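The paper itself gives no code here, but the kind of uncertainty quantification the abstract alludes to can be sketched: average class probabilities over several stochastic forward passes (e.g. weights sampled around a stochastic-weight-averaging solution) and use the predictive entropy of the averaged distribution as an uncertainty score. The function names and the example logits below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_uncertainty(logits_per_pass):
    """Average class probabilities over stochastic forward passes and
    return the mean prediction plus its predictive entropy.

    logits_per_pass: array of shape (n_passes, n_classes).
    """
    probs = softmax(np.asarray(logits_per_pass, dtype=float))
    mean_probs = probs.mean(axis=0)                      # ensemble-averaged prediction
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    return mean_probs, entropy

# Hypothetical logits from 3 stochastic passes on one traffic sample:
confident = [[4.0, -2.0], [3.5, -1.5], [4.2, -2.2]]   # passes agree
uncertain = [[2.0, -2.0], [-2.0, 2.0], [0.1, -0.1]]   # passes disagree

_, h_low = predictive_uncertainty(confident)
_, h_high = predictive_uncertainty(uncertain)
# A perturbed (adversarial) sample typically produces disagreement
# across passes and hence higher predictive entropy.
```

High entropy on an input can then flag it as a likely perturbed sample, which is the intuition behind uncertainty-aware defenses like the one the abstract describes.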


