Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis

01/27/2020
by William Briguglio, et al.

In cyberattack detection and prevention systems, cybersecurity analysts prefer solutions that are as interpretable and understandable as rule-based or signature-based detection, because they need to tune and optimize these solutions to mitigate and control the effects of false positives and false negatives. Interpreting machine learning models is a new and open challenge, since most models are complex and operate as black boxes. Moreover, an interpretable machine learning solution is expected to be domain-specific; for instance, interpretability techniques for models in healthcare differ from those for malware detection. Recently, the increasing ability of malware authors to bypass anti-malware systems has forced security specialists to turn to machine learning to build robust detection systems. If these systems are to be relied on in industry, then, among other challenges, they must also explain their predictions. The objective of this paper is to evaluate current state-of-the-art interpretability techniques for machine learning models when applied to ML-based malware detectors. We demonstrate these techniques in practice and evaluate their effectiveness in the malware analysis domain.
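To fix ideas, here is a minimal sketch of the setting the paper studies: byte n-grams are extracted from binaries, a classifier is trained on their presence, and the model's weights are mapped back to the n-grams driving one prediction. The synthetic data, the 4-byte n-gram size, and the linear model are illustrative assumptions only, not the authors' pipeline, which the abstract does not specify.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def byte_ngrams(data: bytes, n: int = 4) -> str:
    """Represent a byte sequence as space-separated hex n-grams."""
    return " ".join(data[i:i + n].hex() for i in range(len(data) - n + 1))

# Hypothetical stand-ins for real binaries; in practice these would be the
# raw bytes of benign and malicious executables read from disk.
rng = np.random.default_rng(0)
benign = [bytes(rng.integers(0, 256, 200, dtype=np.uint8)) for _ in range(20)]
malicious = [b"\x4d\x5a\x90\x00"  # planted 'MZ 90 00' n-gram as a toy signal
             + bytes(rng.integers(0, 256, 196, dtype=np.uint8))
             for _ in range(20)]
docs = [byte_ngrams(b) for b in benign + malicious]
labels = [0] * 20 + [1] * 20

vec = CountVectorizer(binary=True)   # presence/absence of each n-gram
X = vec.fit_transform(docs)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Interpretation step: for one sample, rank the n-grams it contains by the
# weight the model assigns them; large positive products push the score
# toward the "malicious" class.
x = X[-1].toarray().ravel()
contrib = x * clf.coef_.ravel()
names = vec.get_feature_names_out()
for i in np.argsort(contrib)[::-1][:5]:
    print(names[i], round(contrib[i], 3))
```

A linear model is used here only because its weights are directly attributable to individual n-grams; the paper's subject, post-hoc interpretability techniques for more complex models, would replace this last attribution step.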

Related research

Towards interpreting ML-based automated malware detection models: a survey (01/15/2021)
Malware is being increasingly threatening and malware detectors based on...

Getting Passive Aggressive About False Positives: Patching Deployed Malware Detectors (10/22/2020)
False positives (FPs) have been an issue of extreme importance for anti-...

I-MAD: A Novel Interpretable Malware Detector Using Hierarchical Transformer (09/15/2019)
Malware imposes tremendous threats to computer users nowadays. Since sig...

Interpretable Machine Learning – A Brief History, State-of-the-Art and Challenges (10/19/2020)
We present a brief history of the field of interpretable machine learnin...

Why an Android App is Classified as Malware? Towards Malware Classification Interpretation (04/24/2020)
Machine learning (ML) based approach is considered as one of the most pr...

MonoNet: Towards Interpretable Models by Learning Monotonic Features (09/30/2019)
Being able to interpret, or explain, the predictions made by a machine l...

Towards Deep Federated Defenses Against Malware in Cloud Ecosystems (12/27/2019)
In cloud computing environments with many virtual machines, containers, ...
