Adversarial Examples for Deep Learning Cyber Security Analytics

09/23/2019
by Alesia Chernikova, et al.

As advances in Deep Neural Networks demonstrate unprecedented levels of performance in many critical applications, their vulnerability to attacks remains an open question. Adversarial examples are small modifications of legitimate data points that result in misclassification at testing time. As Deep Neural Networks have found a wide range of applications in cyber security analytics, it becomes important to study the robustness of these models in this setting. We consider adversarial testing-time attacks against Deep Learning models designed for cyber security applications. In security applications, machine learning models are typically not trained directly on raw network traffic or security logs, but on intermediate features defined by domain experts. Existing attacks applied directly to this intermediate feature representation violate feature constraints and therefore produce invalid adversarial examples. We propose a general framework for crafting adversarial attacks that takes into account the mathematical dependencies between the intermediate features in the model's input vector, as well as the physical constraints imposed by the applications. We apply our methods to two security applications, a malicious-connection classifier and a malicious-domain classifier, to generate feasible adversarial examples in these domains. We show that with minimal effort (e.g., generating 12 network connections), an attacker can change a model's prediction from Malicious to Benign. We extensively evaluate the success of our attacks, and how it depends on several optimization objectives and on the imbalance ratios in the training data.
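The full paper is not reproduced here, but to make the idea of constraint-aware attack crafting concrete, below is a minimal Python sketch of a gradient attack that projects each perturbed vector back onto the feasible set, so that mathematically dependent features stay consistent. Everything in it is an illustrative assumption rather than the paper's method: the three-feature connection layout, the project_to_feasible helper, and the toy linear scoring model are all hypothetical.

```python
import numpy as np

# Hypothetical 3-feature layout for a connection classifier:
#   x = [num_connections, total_bytes, mean_bytes_per_connection]
# The third feature depends mathematically on the first two, so any
# valid adversarial example must satisfy x[2] == x[1] / x[0].

def project_to_feasible(x):
    """Project a perturbed vector back onto the feasible set:
    positive integer counts, non-negative totals, and consistent
    derived features (hypothetical constraints, for illustration)."""
    x = x.copy()
    x[0] = max(round(float(x[0])), 1)   # connection count: positive integer
    x[1] = max(float(x[1]), 0.0)        # byte total: non-negative
    x[2] = x[1] / x[0]                  # recompute the dependent feature
    return x

def craft_adversarial(x, grad_fn, step=1.0, iters=50):
    """Iterative gradient descent on the model's malicious score,
    projecting onto the feasible set after every update."""
    x_adv = x.copy()
    for _ in range(iters):
        g = np.array(grad_fn(x_adv), dtype=float)  # copy before editing
        g[2] = 0.0                      # dependent feature is not free
        x_adv = project_to_feasible(x_adv - step * g)
    return x_adv

# Toy usage: a linear "malicious score" w @ x, whose gradient is w.
w = np.array([1.0, 0.001, 0.02])
x0 = np.array([30.0, 15000.0, 500.0])
x_adv = craft_adversarial(x0, grad_fn=lambda x: w)
print(x_adv)  # counts stay integral and x_adv[2] == x_adv[1] / x_adv[0]
```

The design point the sketch tries to mirror is the one the abstract makes: derived features such as a mean are never perturbed independently, but are recomputed from the raw quantities an attacker can actually control.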

Related research

11/08/2019  Imperceptible Adversarial Attacks on Tabular Data
Security of machine learning models is a concern as they may face advers...

07/25/2019  Semisupervised Adversarial Neural Networks for Cyber Security Transfer Learning
On the path to establishing a global cybersecurity framework where each ...

12/23/2020  Generating Comprehensive Data with Protocol Fuzzing for Applying Deep Learning to Detect Network Attacks
Network attacks have become a major security concern for organizations w...

06/26/2020  A Unified Framework for Analyzing and Detecting Malicious Examples of DNN Models
Deep Neural Networks are well known to be vulnerable to adversarial atta...

07/16/2023  On the Robustness of Split Learning against Adversarial Attacks
Split learning enables collaborative deep learning model training while ...

06/04/2020  Characterizing the Weight Space for Different Learning Models
Deep Learning has become one of the primary research areas in developing...

08/01/2017  Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning
Recent studies have shown that attackers can force deep learning models ...
