MaskDGA: A Black-box Evasion Technique Against DGA Classifiers and Adversarial Defenses

02/24/2019
by Lior Sidi, et al.

Domain generation algorithms (DGAs) are commonly used by botnets to generate domain names through which bots can establish a resilient communication channel with their command and control servers. Recent publications presented deep-learning, character-level classifiers that detect algorithmically generated domain (AGD) names with high accuracy and, correspondingly, significantly reduce the effectiveness of DGAs for botnet communication. In this paper, we present MaskDGA, a practical adversarial learning technique that adds perturbation to the character-level representation of algorithmically generated domain names in order to evade DGA classifiers, without the attacker having any knowledge of the DGA classifier's architecture and parameters. MaskDGA was evaluated using the DMD-2018 dataset of AGD names and four recently published DGA classifiers; applying the evasion technique degrades the classifiers' average F1-score from 0.977 to 0.495. An additional evaluation was conducted using the same classifiers, but with two adversarial defenses implemented: adversarial re-training and distillation. The results of this evaluation show that MaskDGA can be used to improve the robustness of character-level DGA classifiers against adversarial attacks, but that ideally DGA classifiers should incorporate additional features alongside the character-level features, which this study demonstrates to be vulnerable to adversarial attacks.
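
The abstract does not spell out MaskDGA's internals, but black-box, character-level evasion of this kind is commonly realized by training a local substitute classifier and transferring gradient-guided character substitutions to the unknown target model. The following Python sketch illustrates that idea only; the SurrogateCNN architecture, the encode and perturb helpers, the alphabet, and the n_swaps parameter are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a surrogate-gradient, character-substitution attack in the
# spirit of MaskDGA. All names and the surrogate architecture are illustrative
# assumptions, not the released code of the paper.
import string
import torch
import torch.nn as nn

ALPHABET = string.ascii_lowercase + string.digits + "-"   # valid DNS label characters
CHAR2IDX = {c: i for i, c in enumerate(ALPHABET)}
MAX_LEN = 32


def encode(domain: str) -> torch.Tensor:
    """One-hot encode a lower-cased domain label into a MAX_LEN x |ALPHABET| matrix."""
    x = torch.zeros(MAX_LEN, len(ALPHABET))
    for i, c in enumerate(domain[:MAX_LEN]):
        x[i, CHAR2IDX[c]] = 1.0
    return x


class SurrogateCNN(nn.Module):
    """Small character-level CNN standing in for a locally trained substitute classifier."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(len(ALPHABET), 64, kernel_size=3, padding=1)
        self.fc = nn.Linear(64, 2)          # class 0 = benign, class 1 = AGD

    def forward(self, x):                   # x: (batch, MAX_LEN, |ALPHABET|)
        h = torch.relu(self.conv(x.transpose(1, 2)))
        return self.fc(h.max(dim=2).values)


def perturb(domain: str, surrogate: nn.Module, n_swaps: int) -> str:
    """Replace the n_swaps characters whose substitution most increases the
    surrogate's benign score, hoping the change transfers to the unknown target."""
    x = encode(domain).unsqueeze(0).requires_grad_(True)
    benign_score = surrogate(x)[0, 0]
    benign_score.backward()
    grad = x.grad[0]                        # gradient of the benign score w.r.t. the one-hot input

    # Saliency of swapping position i to character j, relative to the current character.
    cur_idx = [CHAR2IDX[c] for c in domain[:MAX_LEN]]
    gain = grad.clone()
    for i, j in enumerate(cur_idx):
        gain[i] -= grad[i, j]
    gain = gain[: len(cur_idx)]

    chars = list(domain[:MAX_LEN])
    best_pos = gain.max(dim=1).values.argsort(descending=True)[:n_swaps]
    for i in best_pos.tolist():
        chars[i] = ALPHABET[gain[i].argmax().item()]
    return "".join(chars)


if __name__ == "__main__":
    surrogate = SurrogateCNN()              # in practice: trained locally on labelled AGD/benign names
    print(perturb("xjkqwzraplmv", surrogate, n_swaps=6))
```

A name perturbed this way would then be submitted unchanged to whatever DGA classifier the defender deploys; the attack relies on the transferability of adversarial examples rather than on access to the target's architecture or parameters, which matches the black-box setting described above.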

Related research

- 05/03/2019 · CharBot: A Simple and Effective Method for Evading DGA Classifiers
  Domain generation algorithms (DGAs) are commonly leveraged by malware to...
- 10/27/2020 · It's All in the Name: A Character Based Approach To Infer Religion
  Demographic inference from text has received a surge of attention in the...
- 03/12/2020 · Inline Detection of DGA Domains Using Side Information
  Malware applications typically use a command and control (C&C) server...
- 02/10/2021 · RoBIC: A benchmark suite for assessing classifiers robustness
  Many defenses have emerged with the development of adversarial attacks...
- 03/08/2022 · Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection
  Adversarial attacks pose a major threat to machine learning and to the s...
- 01/22/2021 · Generating Black-Box Adversarial Examples in Sparse Domain
  Applications of machine learning (ML) models and convolutional neural ne...
- 05/24/2018 · Detecting Homoglyph Attacks with a Siamese Neural Network
  A homoglyph (name spoofing) attack is a common technique used by adversa...
