Attention-Based Self-Supervised Feature Learning for Security Data

03/24/2020
by I-ta Lee, et al.

While applications of machine learning in cyber-security have grown rapidly, most models rely on manually constructed features. This manual approach is error-prone and requires domain expertise. In this paper, we design a self-supervised sequence-to-sequence model with attention to learn an embedding for data routinely used in cyber-security applications. The method is validated on two real-world public data sets. The learned features are used in an anomaly detection model and outperform features learned by baseline methods.
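The idea lends itself to a short sketch: a sequence-to-sequence autoencoder with attention is trained self-supervised to reconstruct its own input event sequences, and the encoder's summary vector then replaces hand-crafted features in a downstream anomaly detector. The PyTorch code below is a minimal, hypothetical rendering of that pattern; the layer sizes, vocabulary, and class names are illustrative assumptions, not the authors' exact architecture or data.

import torch
import torch.nn as nn


class Seq2SeqEmbedder(nn.Module):
    """Seq2seq autoencoder whose encoder summary serves as the learned feature."""

    def __init__(self, vocab_size: int, emb_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden * 2, vocab_size)

    def encode(self, tokens):
        # enc_states: (batch, T, hidden); last: (1, batch, hidden)
        enc_states, last = self.encoder(self.embed(tokens))
        return last.squeeze(0), enc_states

    def forward(self, tokens):
        summary, enc_states = self.encode(tokens)
        # Teacher forcing: decode from the sequence shifted right by one step
        # and predict the next event at every position.
        dec_states, _ = self.decoder(self.embed(tokens[:, :-1]),
                                     summary.unsqueeze(0).contiguous())
        # Attention lets each decoding step look back over all encoder states.
        context, _ = self.attn(dec_states, enc_states, enc_states)
        return self.out(torch.cat([dec_states, context], dim=-1))


# Self-supervised training: the reconstruction target is the input sequence
# itself, so no labels are needed.
model = Seq2SeqEmbedder(vocab_size=1000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

batch = torch.randint(0, 1000, (32, 20))   # 32 event sequences, 20 events each
logits = model(batch)                      # (32, 19, vocab): predicts batch[:, 1:]
loss = loss_fn(logits.reshape(-1, 1000), batch[:, 1:].reshape(-1))
loss.backward()
optimizer.step()

# After training, the encoder summary stands in for hand-crafted features and
# can be fed to any anomaly detector (e.g. an isolation forest or one-class SVM).
with torch.no_grad():
    features, _ = model.encode(batch)      # (32, hidden)

In this reading, the attention mechanism lets the decoder reconstruct long event sequences without forcing all information through a single bottleneck vector, while the summary vector still captures enough structure to be useful as an anomaly-detection feature.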

Related research

- Self-Supervised Representation Learning for Visual Anomaly Detection (06/17/2020): Self-supervised learning allows for better utilization of unlabelled dat...
- Machine Learning in Cyber-Security - Problems, Challenges and Data Sets (12/19/2018): We present cyber-security problems of high importance. We show that in o...
- Out-of-Distribution Detection without Class Labels (12/14/2021): Anomaly detection methods identify samples that deviate from the normal ...
- Improving Radioactive Material Localization by Leveraging Cyber-Security Model Optimizations (02/21/2022): One of the principal uses of physical-space sensors in public safety app...
- No Shifted Augmentations (NSA): compact distributions for robust self-supervised Anomaly Detection (03/19/2022): Unsupervised Anomaly detection (AD) requires building a notion of normal...
- Connecting Web Event Forecasting with Anomaly Detection: A Case Study on Enterprise Web Applications Using Self-Supervised Neural Networks (08/31/2020): Recently web applications have been widely used in enterprises to assist...
- An Anomaly Contribution Explainer for Cyber-Security Applications (12/01/2019): In this paper, we introduce Anomaly Contribution Explainer or ACE, a too...