Language Models for Novelty Detection in System Call Traces

09/05/2023
by Quentin Fournier, et al.

Due to the complexity of modern computer systems, novel and unexpected behaviors frequently occur. Such deviations are either normal occurrences, such as software updates and new user activities, or abnormalities, such as misconfigurations, latency issues, intrusions, and software bugs. Regardless, novel behaviors are of great interest to developers, and there is a genuine need for efficient and effective methods to detect them. Nowadays, researchers consider system calls to be the most fine-grained and accurate source of information to investigate the behavior of computer systems. Accordingly, this paper introduces a novelty detection methodology that relies on a probability distribution over sequences of system calls, which can be seen as a language model. Language models estimate the likelihood of sequences, and since novelties deviate from previously observed behaviors by definition, they would be unlikely under the model. Following the success of neural networks for language models, three architectures are evaluated in this work: the widespread LSTM, the state-of-the-art Transformer, and the lower-complexity Longformer. However, large neural networks typically require an enormous amount of data to be trained effectively, and to the best of our knowledge, no massive modern datasets of kernel traces are publicly available. This paper addresses this limitation by introducing a new open-source dataset of kernel traces comprising over 2 million web requests with seven distinct behaviors. The proposed methodology requires minimal expert hand-crafting and achieves an F-score and AuROC greater than 95%. The source code and trained models are publicly available on GitHub, while the datasets are available on Zenodo.
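The core idea, that sequences which are unlikely under a language model fitted on normal traces can be flagged as novel, can be sketched without the paper's neural architectures. The toy example below substitutes an add-one-smoothed bigram model for the LSTM/Transformer/Longformer, and the system-call names and traces are purely illustrative, not taken from the paper's dataset:

```python
import math


class BigramSyscallModel:
    """Toy bigram language model over system-call sequences.

    The paper trains neural language models (LSTM, Transformer,
    Longformer); this sketch replaces them with add-one-smoothed
    bigram counts to illustrate the same principle: a sequence that
    is unlikely under a model fitted on normal traces is flagged
    as novel.
    """

    def __init__(self):
        self.counts = {}   # prev syscall -> {next syscall: count}
        self.vocab = set()

    def fit(self, traces):
        for trace in traces:
            self.vocab.update(trace)
            for prev, curr in zip(trace, trace[1:]):
                ctx = self.counts.setdefault(prev, {})
                ctx[curr] = ctx.get(curr, 0) + 1

    def neg_log_likelihood(self, trace):
        """Average negative log-likelihood per transition (lower = more typical)."""
        v = len(self.vocab)
        total, n = 0.0, 0
        for prev, curr in zip(trace, trace[1:]):
            ctx = self.counts.get(prev, {})
            # Add-one smoothing keeps unseen transitions at nonzero probability.
            p = (ctx.get(curr, 0) + 1) / (sum(ctx.values()) + v)
            total -= math.log(p)
            n += 1
        return total / max(n, 1)

    def is_novel(self, trace, threshold):
        return self.neg_log_likelihood(trace) > threshold


# Hypothetical toy traces standing in for real kernel traces.
normal_traces = [["open", "read", "write", "close"]] * 50
model = BigramSyscallModel()
model.fit(normal_traces)

familiar = model.neg_log_likelihood(["open", "read", "write", "close"])
unusual = model.neg_log_likelihood(["open", "ptrace", "execve", "close"])
assert unusual > familiar  # unseen transitions score as far less likely
```

In the paper's setting, the bigram counts would be replaced by a neural model's conditional next-call distribution, and the novelty threshold would be calibrated on held-out normal traces; the scoring and thresholding logic stays the same.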

