amsqr at MLSEC-2021: Thwarting Adversarial Malware Evasion with a Defense-in-Depth

10/06/2021
by Alejandro Mosquera, et al.

This paper describes the author's participation in the 3rd edition of the Machine Learning Security Evasion Competition (MLSEC-2021), sponsored by CUJO AI, VM-Ray, MRG-Effitas, Nvidia and Microsoft. As in the previous year, the goal was not only to develop defenses against adversarial attacks on a pre-defined set of malware samples, but also to find ways of bypassing other teams' defenses in a simulated cloud environment. The submitted solutions ranked second in both the defender and attacker tracks.
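The defense-in-depth idea named in the title can be illustrated with a small sketch: several independent, inexpensive checks are layered in front of (or alongside) a learned classifier, so an adversarially modified sample has to slip past every layer at once before it is labeled benign. The Python snippet below is only an illustration of that layering, not the author's actual submission; the specific heuristics (PE header sanity check, overlay entropy) and the defense_in_depth_verdict function are assumptions made for the example.

import math
from typing import Callable, List


def shannon_entropy(data: bytes) -> float:
    """Byte-level Shannon entropy in bits (0..8); high values often indicate packing/encryption."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)


def looks_like_pe(data: bytes) -> bool:
    """Layer 1: cheap structural sanity check; reject inputs that are not even PE-shaped."""
    return len(data) > 64 and data[:2] == b"MZ"


def high_entropy_overlay(data: bytes, threshold: float = 7.2) -> bool:
    """Layer 2: flag samples whose trailing bytes look packed/encrypted (a common evasion artifact)."""
    return shannon_entropy(data[-4096:]) > threshold


def defense_in_depth_verdict(data: bytes,
                             models: List[Callable[[bytes], float]],
                             cutoff: float = 0.5) -> int:
    """Return 1 (malicious) if any layer fires, 0 (benign) otherwise."""
    if not looks_like_pe(data):
        return 1  # malformed input: treated conservatively as malicious/adversarial
    if high_entropy_overlay(data):
        return 1
    # Layer 3: an ensemble of ML models; flag if any model's score exceeds the cutoff.
    return int(any(model(data) > cutoff for model in models))


if __name__ == "__main__":
    # Toy stand-in for a trained classifier (e.g., gradient-boosted trees over static PE features).
    dummy_model = lambda b: 0.9 if b"suspicious" in b else 0.1
    sample = b"MZ" + b"\x00" * 100 + b"benign payload"
    print(defense_in_depth_verdict(sample, [dummy_model]))  # -> 0

In a real defender-track service, the final layer would be one or more trained models served behind an API rather than the toy lambda used here; the point of the sketch is only that an evasive sample must defeat every independent layer, not just the strongest one.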


