Linking Common Vulnerabilities and Exposures to the MITRE ATT&CK Framework: A Self-Distillation Approach

08/03/2021
by   Benjamin Ampel, et al.

Due to the ever-increasing threat of cyber-attacks against critical cyber infrastructure, organizations are focusing on building their cybersecurity knowledge bases. A salient source of cybersecurity knowledge is the Common Vulnerabilities and Exposures (CVE) list, which details vulnerabilities found in a wide range of software and hardware. However, these vulnerabilities often lack a mitigation strategy to prevent an attacker from exploiting them. A well-known cybersecurity risk management framework, MITRE ATT&CK, offers mitigation techniques for many malicious tactics. Despite the tremendous benefits that both CVEs and the ATT&CK framework can provide for key cybersecurity stakeholders (e.g., analysts, educators, and managers), the two entities are currently separate. We propose a model, named the CVE Transformer (CVET), that labels CVEs with one of ten MITRE ATT&CK tactics. The CVET model applies a fine-tuning and self-knowledge distillation design to the state-of-the-art pre-trained language model RoBERTa. Empirical results on a gold-standard dataset suggest that our proposed novelties increase model performance as measured by F1-score. The results of this research can allow cybersecurity stakeholders to attach preliminary MITRE ATT&CK information to their collected CVEs.
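The abstract does not spell out the training objective, but self-knowledge distillation is commonly implemented as a weighted combination of hard-label cross-entropy and a temperature-softened KL term against the model's own earlier (teacher) outputs. The sketch below is purely illustrative of that general recipe for a ten-way tactic classifier; the hyperparameters `alpha` and `temperature` and the function names are assumptions, not values from the paper.

```python
import math

def softmax(logits, temperature=1.0):
    # Numerically stable, temperature-scaled softmax over a list of logits.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def self_distillation_loss(student_logits, teacher_logits, label,
                           alpha=0.5, temperature=2.0):
    """Combine hard-label cross-entropy with a soft KL term against the
    teacher's (e.g. a previous checkpoint's) temperature-softened outputs.
    alpha and temperature are illustrative hyperparameters."""
    # Hard-label cross-entropy on the student's unscaled predictions.
    p_student = softmax(student_logits)
    ce = -math.log(p_student[label])
    # KL(teacher || student) at temperature T, scaled by T^2 as is
    # conventional so gradient magnitudes stay comparable across T.
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    return alpha * ce + (1 - alpha) * (temperature ** 2) * kl

# Example: hypothetical logits for a 10-way ATT&CK tactic classification,
# with gold tactic index 3.
student = [0.1, -0.2, 0.0, 2.0, 0.3, -1.0, 0.5, 0.0, -0.3, 0.1]
teacher = [0.0, -0.1, 0.1, 1.8, 0.2, -0.9, 0.4, 0.1, -0.2, 0.0]
loss = self_distillation_loss(student, teacher, label=3)
```

When the teacher and student agree exactly, the KL term vanishes and the loss reduces to the weighted cross-entropy, which is a quick sanity check on any such implementation.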

