Anomalicious: Automated Detection of Anomalous and Potentially Malicious Commits on GitHub

by Danielle Gonzalez, et al.

Security is critical to the adoption of open source software (OSS), yet few automated solutions currently exist to help detect and prevent malicious contributions from infecting open source repositories. On GitHub, a primary host of OSS, repositories contain not only code but also a wealth of commit-related and contextual metadata: what if this metadata could be used to automatically identify malicious OSS contributions? In this work, we show how to use only commit logs and repository metadata to automatically detect anomalous and potentially malicious commits. We identify and evaluate several relevant factors which can be automatically computed from this data, such as the modification of sensitive files, outlier change properties, or a lack of trust in the commit's author. Our tool, Anomalicious, automatically computes these factors and considers them holistically using a rule-based decision model. In an evaluation on a data set of 15 malware-infected repositories, Anomalicious showed promising results and identified 53.33% of the malicious commits, while flagging less than 1% of the benign commits. Additionally, the tool found other interesting anomalies that are not related to malicious commits in an analysis of repositories with no known malicious commits.
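The abstract names three factor families (sensitive-file modification, outlier change properties, author trust) combined by a rule-based decision model. As a rough illustration only, the sketch below shows what such a rule-based check over commit metadata could look like; the field names, patterns, and thresholds are hypothetical assumptions, not the actual rules used by Anomalicious.

```python
# Hypothetical sketch of a rule-based anomaly check over commit metadata.
# Factor names follow the abstract; all field names and thresholds are
# illustrative assumptions, not Anomalicious's actual implementation.

# Files often abused to inject payloads at build/install time (assumed list).
SENSITIVE_PATTERNS = ("setup.py", "package.json", "Makefile", ".github/workflows/")

def commit_is_anomalous(commit, history):
    """Return (decision, fired_rules) for one commit given repo history."""
    flags = []

    # Factor 1: commit modifies a sensitive file.
    if any(path.endswith(p) or p in path
           for path in commit["files"] for p in SENSITIVE_PATTERNS):
        flags.append("sensitive-file")

    # Factor 2: change size is an outlier versus the repository's history
    # (here, arbitrarily: more than 3x the historical mean lines changed).
    sizes = [c["lines_changed"] for c in history] or [commit["lines_changed"]]
    if commit["lines_changed"] > 3 * (sum(sizes) / len(sizes)):
        flags.append("outlier-size")

    # Factor 3: lack of trust in the author — no prior commits in this repo.
    if all(c["author"] != commit["author"] for c in history):
        flags.append("untrusted-author")

    # Holistic decision: flag only when at least two factors fire together.
    return len(flags) >= 2, flags
```

A quick usage example: a first-time contributor pushing an unusually large change to `setup.py` fires all three rules, while a routine commit by a known author fires none.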


