Walling up Backdoors in Intrusion Detection Systems

09/17/2019
by Maximilian Bachl, et al.

Interest in poisoning attacks and backdoors has recently resurfaced for Deep Learning (DL) applications. Several successful defense mechanisms have recently been proposed for Convolutional Neural Networks (CNNs), for example in the context of autonomous driving. We show that visualization approaches can aid in identifying a backdoor independently of the classifier used. Surprisingly, we find that common defense mechanisms utterly fail to remove backdoors in DL for Intrusion Detection Systems (IDSs). Finally, we devise pruning-based approaches to remove backdoors for Decision Trees (DTs) and Random Forests (RFs) and demonstrate their effectiveness on two different network security datasets.
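The abstract does not spell out the exact pruning criterion used for DTs and RFs, so the sketch below only illustrates one plausible variant of the idea, assuming scikit-learn models: subtrees that clean validation traffic rarely or never reaches are collapsed into leaves, on the intuition that backdoor behaviour hides in branches that only triggered inputs activate. The function name prune_unvisited, the min_samples threshold, and the usage-count heuristic are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

TREE_LEAF = -1  # sentinel value scikit-learn uses for "no child"

def prune_unvisited(tree: DecisionTreeClassifier, X_clean: np.ndarray,
                    min_samples: int = 1) -> DecisionTreeClassifier:
    """Collapse subtrees that clean validation data (almost) never reaches.

    Backdoor behaviour tends to live in branches that only triggered inputs
    activate, so branches unused by clean traffic are treated as suspicious.
    """
    t = tree.tree_
    # decision_path returns an (n_samples, n_nodes) indicator of visited nodes;
    # summing over samples gives a per-node visit count for the clean data.
    hits = np.asarray(tree.decision_path(X_clean).sum(axis=0)).ravel()
    for node in range(t.node_count):
        if t.children_left[node] == TREE_LEAF:
            continue  # already a leaf
        if hits[node] < min_samples:
            # Turn this internal node into a leaf; predictions then fall back
            # to the class counts stored in tree_.value for this node.
            t.children_left[node] = TREE_LEAF
            t.children_right[node] = TREE_LEAF
    return tree
```

Under the same assumptions, pruning a Random Forest would amount to running this routine over every tree in forest.estimators_.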

Related research

Provenance-based Intrusion Detection: Opportunities and Challenges (06/04/2018)
Intrusion detection is an arms race; attackers evade intrusion detection...

Review: Deep Learning Methods for Cybersecurity and Intrusion Detection Systems (12/04/2020)
As the number of cyber-attacks is increasing, cybersecurity is evolving ...

New Use Cases for Snort: Cloud and Mobile Environments (02/07/2018)
First, this case study explores an Intrusion Detection System package ca...

Experimental Review of Neural-based approaches for Network Intrusion Management (09/18/2020)
The use of Machine Learning (ML) techniques in Intrusion Detection Syste...

Secure Mobile Crowdsensing with Deep Learning (01/23/2018)
In order to stimulate secure sensing for Internet of Things (IoT) applic...

Modeling and Analysis of Integrated Proactive Defense Mechanisms for Internet-of-Things (08/01/2019)
As a solution to protect and defend a system against inside attacks, man...
