On labeling Android malware signatures using minhashing and further classification with Structural Equation Models

Multi-scanner Antivirus systems provide insightful information on the nature of a suspect application; however, there is often a lack of consensus and consistency between different Anti-Virus engines. In this article, we analyze more than 250 thousand malware signatures generated by 61 different Anti-Virus engines after analyzing 82 thousand different Android malware applications. We identify 41 different malware classes grouped into three major categories, namely Adware, Harmful Threats and Unknown or Generic signatures. We further investigate the relationships between these 41 classes using community detection algorithms from graph theory to identify similarities between them; and we finally propose a Structural Equation Model to identify which Anti-Virus engines are more powerful at detecting each macro-category. As an application, we show how such models can help in identifying whether Unknown malware applications are more likely to be of the Harmful or the Adware type.








1. Introduction

Smartphones and tablets have become part of our daily life, and the number of such smart devices keeps growing year after year (see http://www.smartinsights.com/mobile-marketing/mobile-marketing-analytics/mobile-marketing-statistics/, last access: March 2017). Android is the most popular mobile operating system and has grown into a diverse ecosystem worldwide.

Unfortunately, the success of Android has also attracted malware developers: it is estimated that about 12% of apps in the Google Play market are "low quality apps" (see http://www.appbrain.com/stats/number-of-android-apps, last access: July 2017), many of which represent a real risk for the smartphone owner.

There exist many systems in the literature to detect Android malware using classical detection approaches. For example, in (Lindorfer et al., 2014), the authors have developed an Android malware analysis tool and reviewed malware behavior based on a 1-million sample of Android applications, highlighting differences between malware and goodware. Elish et al. (Elish et al., 2015) have proposed a single-feature classification system based on user behavior profiling. In general, Android permissions have received wide coverage, and works like (Sanz et al., 2013; Felt et al., 2011) analyze them in detail.

Concerning the use of Machine Learning (ML) techniques in the detection of malware, the authors in (Arp et al., 2014) have gathered features from the application code and manifest (permissions, API calls, etc.) and used Support Vector Machines (SVMs) to identify different types of malware families. In a different approach, the authors of (Chen et al., 2015) have proposed a system based on the differential-intersection analysis of applications in order to spot duplicates.

Antivirus software has been persistently analyzed and tested. For instance, the authors in (Huang et al., 2014) have reviewed the key points in designing AV engines for mobile devices as well as how detection can be avoided. In a different approach, Rastogi et al. (Rastogi et al., 2014) have assessed whether AV engines fall for obfuscation attacks, finding many to be vulnerable to some kind of transformation attack. The authors in (Martín et al., 2016) have performed data analytics on multi-scanner outputs for Android applications to find their behavior patterns.

With the advent of AV multi-scanner tools, such as Meta-Scan, VirusTotal or Androguard, any application can easily be analyzed by many different AV engines at once. For each detected application, these tools typically identify the AV engines that flagged the application as malware, its type, and other meta-data regarding the nature of the threat. Hence, multi-scanner tools enable the simultaneous analysis of suspicious applications and provide information to identify and deal with many types of malware.

The authors in (Bishop et al., 2011) perform a comparison of AV engines from VirusTotal by modeling AV confidence using a hyper-exponential curve. In (Gashi et al., 2013), AV labels from VirusTotal are subject to temporal analysis using a collection of malware applications obtained through a honeypot network. Additionally, other studies (Cukier et al., 2013; Bishop et al., 2012) have shown the advantages of using more than one AV engine to improve malware decisions, by means, for example, of multi-scanner tools.

Nevertheless, the authors in (Hurier et al., 2016) recall the lack of agreement on which applications each AV considers as malware. Besides, Maggi et al. (Maggi et al., 2011) extensively review the inconsistencies when assigning identifiers to similar threats across engines. In this light, the authors in (Kantchelian et al., 2015) propose a combination scheme for multi-scanner detections based on a generative Bayesian model to infer the probability of each sample being malware; however, no specific label analysis is performed, and thus all threats are treated equally.

Several authors have analyzed and proposed categorization schemes for Android malware applications. In (Zhou and Jiang, 2012), the authors find up to 49 distinct malware families, whilst the authors in (Suarez-Tangil et al., 2014) propose a text mining approach to obtain and classify malware families according to application code. Similarly, Zheng et al. propose in (Zheng et al., 2013) a system for the collection and categorization of zero-day malware samples into different families. Also, the authors in (Deshotels et al., 2014) propose a system to classify malware samples according to their families.

Sebastián et al. (Sebastián et al., 2016) propose AVClass, a system to normalize AV labels from different vendors and determine the actual class out of the different detection outputs for the same application. Nevertheless, AVClass does not link AV engines with their detections; instead, it provides the frequency of each token and chooses the most probable one. Besides, AVClass removes common malware-related tokens: tokens such as Adware or Trojan are removed, and the information they carry is lost. Consequently, AVClass gives a final malware class as output, but loses the (AV, class) pairs in the process.

In this light, we develop an alternative label normalization methodology based on the well-known minhashing technique (Leskovec et al., 2014). This system relies on the user to assign the final normalized labels using Python regular expressions over signatures. This way, unsupervised aggregation of signatures can be achieved, considerably reducing the supervision effort required from the researcher.

This methodology then enables cross-engine analysis of malware classes to improve malware classification. In a nutshell, this work contributes to this aim with a twofold effort:

  1. We develop a methodology for signature normalization; that is, grouping together identifiers that refer to the same threat but differ in their actual labels because of AV engine inconsistencies.

  2. We model AV engine relationships across malware categories using Structural Equation Models (SEM), aiming at improving malware classification.

The rest of this paper is structured as follows: Section 2 describes the data collection and our AV signature normalization methodology. Section 3 inspects engines and signature tokens using correlations to unveil consensual subsets of entities. Section 4 develops different weighting models to evaluate engine performance of distinct malware categories. Finally, Section 5 summarizes the main findings of this work and highlights the most relevant conclusions.

2. Dataset Description and Signature Normalization

In this article, the dataset under study comprises a total of 82 thousand different Android applications collected from Google Play by TACYT (see https://www.elevenpaths.com/es/tecnologia/tacyt/index.html for further details) in May 2015. All these applications are considered suspicious, as they have been flagged by at least one of the 61 antivirus (AV) engines, including some of the most popular ones (e.g. McAfee, Trend Micro, etc.) as well as many others. These engines have been anonymized to preserve privacy, i.e. every engine has been consistently substituted by an anonymized name (AV1, AV2, and so on) throughout the paper.

When a malware engine detects a suspicious application, it provides a signature containing some meta-data, such as its last scan date or a malware class identifier. A total of 259,608 signatures are obtained in our dataset (roughly three signatures per application on average).

As an example, consider an application flagged by AV27, AV28 and AV58. Each AV engine provides a different signature, namely:

  • AV27: a variant of Android/AdDisplay.Startapp.B

  • AV28: Adware/Startapp.A

  • AV58: Adware.AndroidOS.Youmi.Startapp (v)

Clearly, all three engines consider this app an adware-like application, but the signature naming convention differs across engines. Thus, text processing and text mining techniques are necessary to convert signatures into a common format for analysis.

2.1. Cleaning and classification of AV signatures with Minhashing

Fig. 1 shows a wordcloud with the most popular AV-generated raw signatures and their frequencies (the most popular keywords are shown with larger font sizes). Apart from some common understandable signatures, most of them include different names which account for different types of malware as well as non-malicious names (e.g. AndroidOS).

Figure 1. Wordcloud image of different raw signatures across the dataset

Some signatures contain common substrings across different AVs, including related chunks of text ranging from very common malware types such as "PUA" or "Trojan" to more specific types such as "GingerMaster" or "FakeFlash", together with some related terms which do not refer to malware, namely AndroidOS or win32.

To extract meaningful information from signatures, we have developed a methodology to clean, unify and normalize detection identifiers into a fixed subset of ”identifier tokens” representing the most frequent keywords contained within the signatures. This process starts with conventional text-mining cleaning techniques of raw strings, including lower-casing, removing punctuation and domain-specific stop-words (i.e. tokens providing no malware information) and splitting each signature into tokens separated by dots. Up to this point, our methodology follows the steps of AVClass (Sebastián et al., 2016).
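The cleaning steps above can be sketched in a few lines of Python. The stop-word list below is illustrative only (an assumption, not the paper's actual list), but the pipeline follows the described order: lower-casing, stripping punctuation, splitting on dots, and dropping domain-specific stop-words.

```python
import re

# Hypothetical domain-specific stop-words carrying no malware information
STOP_WORDS = {"androidos", "android", "win32", "a", "variant", "of", "v"}

def tokenize_signature(signature: str) -> list[str]:
    """Lower-case, strip punctuation, split on dots, drop stop-words."""
    sig = signature.lower()
    # Replace every character that is not alphanumeric or a dot with a dot,
    # so slashes, spaces and parentheses also act as token separators
    sig = re.sub(r"[^a-z0-9.]+", ".", sig)
    return [t for t in sig.split(".") if t and t not in STOP_WORDS]
```

For instance, `tokenize_signature("a variant of Android/AdDisplay.Startapp.B")` reduces the AV27 signature of the example above to its informative tokens.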

Next, we use the well-known minhashing algorithm to group signatures together. Minhashing is a very fast technique for estimating how similar (in terms of Jaccard similarity) two sets are. It relies on splitting strings into several chunks (shingles) of the same length and computing a hash function (e.g. common hash functions like MD5 or SHA1) for each chunk. Consequently, each signature produces a set of numbers, the minimum of which is selected as the minhash. Finally, signatures are grouped according to their minhash values. Once the minhash values are computed, the probability of two signatures falling in the same group can be shown to approximate the Jaccard similarity between them. The Jaccard similarity between two sets A and B follows:

  J(A, B) = |A ∩ B| / |A ∪ B|

In other words, similar items will likely fall into the same minhash buckets. A detailed explanation of minhashing, Jaccard similarity and related terms may be found in (Leskovec et al., 2014).
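As an illustration of the technique (a generic sketch, not the paper's exact implementation: shingle length, number of hash functions, and the use of seeded MD5 are assumptions), the fraction of matching minhash components between two signatures approximates their Jaccard similarity:

```python
import hashlib

def shingles(s: str, k: int = 4) -> set[str]:
    """Split a string into overlapping chunks (shingles) of length k."""
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def minhash(sig: str, n_hashes: int = 64, k: int = 4) -> list[int]:
    """For each of n_hashes seeded hash functions, keep the minimum
    hash value over all shingles of the signature."""
    sh = shingles(sig, k)
    return [
        min(int(hashlib.md5(f"{seed}:{c}".encode()).hexdigest(), 16) for c in sh)
        for seed in range(n_hashes)
    ]

def estimated_jaccard(sig_a: str, sig_b: str) -> float:
    """Fraction of matching minhash components, which approximates
    the Jaccard similarity of the two shingle sets."""
    ma, mb = minhash(sig_a), minhash(sig_b)
    return sum(a == b for a, b in zip(ma, mb)) / len(ma)
```

Two near-identical signatures such as `"adware.startapp.a"` and `"adware.startapp.b"` yield a much higher estimate than two unrelated ones.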

We manually checked the resulting groups and developed a set of Python regular expressions to transform signatures into malware classes according to the unveiled patterns. Since different signatures might contain different malware classes, collisions may eventually occur within these rules. In this light, we established rule priority following a first-match criterion over the rules sorted by specificity. For instance, consider the signature "Adware.Android.AirPush.K": it falls into the category Airpush, since this rule is more specific than Adware.
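The first-match criterion can be sketched as follows; the excerpt of the rule list is taken from Table 1 in the spirit of the methodology (the full ordered list is longer):

```python
import re

# Excerpt of the sorted rule list (most specific first), following Table 1
RULES = [
    (re.compile(r".*a[ir]*push?.*"), "Airpush"),
    (re.compile(r".*startapp.*"), "StartApp"),
    (re.compile(r".*adware.*|ad.+"), "Adware (gen)"),
    (re.compile(r".*troj.*"), "Trojan (gen)"),
    (re.compile(r".*"), "Other"),  # default case (rule S41)
]

def classify(signature: str) -> str:
    """Return the class of the first rule matching the normalized signature."""
    sig = signature.lower()
    for pattern, klass in RULES:
        if pattern.fullmatch(sig):
            return klass
    return "Other"
```

With this ordering, `classify("Adware.Android.AirPush.K")` returns the more specific `Airpush` class rather than the generic `Adware (gen)`.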

As a result, the generated classes group together signatures with similar patterns into a representative set of malware classes. In contrast to AVClass, our approach keeps track of the relationship between the AV engine and the malware class associated with each signature.

2.2. Normalized Signatures

# Regexp rule Class Category Det. Count No. Apps AVs
S1 .*a[ir]*push?.* Airpush Adware 35,850 12,802 26
S2 .*leadbolt.* Leadbolt 17,414 4,045 21
S3 .*revmob.* Revmob 38,693 13,680 18
S4 .*startapp.* StartApp 29,443 11,963 13
S5 [os]*apperhand.* |.*counterclank.* Apperhand/Counterclank 1,606 716 12
S6 .*kuguo.* Kuguo 2,127 1,893 23
S7 wapsx? WAPS 1,546 344 6
S8 .*dowgin.*|dogwin Dogwin 1,098 421 23
S9 .*cauly.* Cauly 1,143 626 3
S10 [os]*wooboo Wooboo 220 120 14
S11 [os]*mobwin Mobwin 1,284 249 3
S12 .*droidkungfu.* DroidKungFu 105 54 3
S13 .*plankton.* Plankton 4,557 741 25
S14 [os]*you?mi Youmi 1,472 370 22
S15 [osoneclick]*fraud Fraud 736 382 19
S16 multiads Multiads 560 555 3
S17 .*adware.*|ad.+ Adware (gen) 33,133 24,515 46
S18 riskware Riskware Harmful Threats 1,841 1,353 14
S19 spr SPR 1,789 1,789 2
S20 .*deng.* Deng 2,926 2,926 1
S21 .*smsreg SMSreg 649 440 16
S22 [os]*covav? Cova 1,564 1,296 5
S23 .*denofow.* Denofow 1,224 610 11
S24 [os]*fakeflash FakeFlash 1,381 510 15
S25 .*fakeapp.* FakeApp 518 420 14
S26 .*fakeinst.* FakeInst 493 401 22
S27 .*appinventor.* Appinventor 4,025 3,113 6
S28 .*swf.* SWF 4,651 4,566 10
S29 .*troj.* Trojan (gen) 23,775 16,851 49
S30 .*mobi.* Mobidash 981 796 16
S31 .*spy.* Spy 1483 1,221 26
S32 .*gin[ger]*master Gingermaster 58 36 10
S33 unclassifiedmalware UnclassifiedMalware Unknown/Generic 857 855 1
S34 .*virus.* Virus 959 896 15
S35 .*heur.* Heur 182 179 15
S36 .*gen.* GEN 9,827 9,118 25
S37 [osgen]*pua PUA 1,249 1,152 2
S38 [ws]*reputation Reputation 2,886 2,885 1
S39 .*applicunwnt.* AppUnwanted 4,863 4,860 1
S40 .*artemi.* Artemis 9,662 6,175 2
S41 .* (Default Case) Other 10,778 7,880 57
TOTAL 259,608
Table 1. Regular Expressions in Python syntax to normalize signatures into standardized classes

Table 1 shows the 41 malware signature-based classes obtained using the previous methodology. The table contains the regular-expression predicate of each rule, the class and a broader category of malware, along with the detection and application counts of each rule. For instance, rule S1 contains all the cases of the AirPush class, which belongs to the Adware category. The AirPush class has been found in 12,802 Android apps and received 35,850 detections from 26 different AV engines.

The following is a short summary of the three broad categories, namely Adware, Harmful Threats and Unknown/Generic, along with an explanation of the classes in each category.

  • Adware: This category includes those malware classes showing abusive advertisements for profit. The Adware category involves most apps in the collection, suggesting that most malicious applications inside Google Play are adware-related apps. Leadbolt, Revmob, Startapp, WAPSX, Dowgin/dogwin, Cauly, Mobwin and Apperhand/Counterclank are well-known advertisement networks maliciously used to perform full-screen and invasive advertising. Kuguo is an advertisement library also known for the abuses committed by its developers. Youmi and DroidKungFu are advertising services which have been involved in data exfiltration problems. Airpush is another advertisement network company known for its developers' abuse of push-notification ad bars. Some AVs simply mark as Multiads those applications containing different advertisement libraries capable of displaying invasive ads. Fraud/osoneclick refers to fraudulent applications which attempt to increase the number of ad clicks by stealthily placing ads in the background of user-interactive applications. Finally, the Adware (gen) tag is a generic reference assigned to those samples only containing that known class.

  • Harmful Threats: This category includes threats more dangerous than simple adware, which may enrol the user in premium services or exfiltrate data through permission abuses or other exploits. Deng, SPR (Security and Privacy Risk) and Riskware are generic names given by different engines to flag apps that may unjustifiably require potentially harmful permissions or include malicious code threatening user privacy. Denofow and Cova are generic references to trojan programs which attempt to enroll users in premium SMS services. SMSReg is a generic way for some engines to flag applications that require SMS-related permissions for exfiltration or premium subscription. FakeFlash, FakeInst and FakeApp are names for applications that replicate the functionalities of other popular apps while adding malicious code or actions. Appinventor is a developer platform used to build and generate applications that is extensively preferred by malware developers. SWF stands for different versions of Shockwave Flash Player exploits. Trojan (gen) is the generic reference of engines to trojan applications. GingerMaster is a well-known family of rooting exploits. Spy is a generic reference to applications engaging in data exfiltration or similar spyware threats.

  • Unknown/Generic: This category includes AV detections which do not include class-related information, either due to generic signatures from AVs or signatures not matching any rule in the dataset. UnclassifiedMalware, Virus, Heur (from heuristics), GEN (Generic Malware), PUA (Potentially Unwanted Application), Reputation, AppUnwanted (Unwanted Application) and Artemis are generic tags given by different engines to flag applications that are detected as unspecified threats. Other includes the remaining applications which have not been classified due to the lack of signature patterns.

As shown in the table, the most common malware detection classes typically concern Adware, in particular Revmob, Airpush and generic Adware, with many AVs involved. Trojan detections are also very popular, with 49 engines involved. In general, many malware classes are spotted by more than a single engine, with some exceptions among the Unknown/Generic category classes, which are often exclusive to a small subset of engines; UnclassifiedMalware, Reputation and AppUnwanted, in particular, involve only one AV engine each.

2.3. Comparison with AVClass (Sebastián et al., 2016)

We cloned the AVClass (Sebastián et al., 2016) repository from Github (available at https://github.com/malicialab/avclass, last access: May 2017) and checked its performance on our dataset. We observed that the AVClass system returned an undetermined class (SINGLETON output) for more than 50% of the signatures in our dataset. Conversely, our methodology and AVClass agree on roughly 29% of the dataset. In this light, both approaches provide some level of agreement, as most specific classes match frequently within the clearly defined detections.

However, AVClass returns a single class per application and does not specify which AV engine is behind such a decision. In our methodology, we keep the (AV engine, malware class) pair to allow further analysis since, in some cases, different AVs disagree on an application (some may consider it Adware, whilst others consider the same application a Harmful Threat, for instance).

2.4. Some insights from detections

Let A denote an indicator matrix with one row per Android app and one column per AV engine, whose element a_ij is set to 1 if the i-th Android app has been flagged by the j-th engine and 0 otherwise. Matrix A is indeed very sparse, with only 5% of all the entries set to one; on average, each application is detected by roughly three engines, but the variability of application detection counts is enormous. The most active AV engines are AV27, AV58, AV7, AV2, AV30 and AV32, which account for the largest numbers of detections.
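A minimal sketch of this indicator matrix and the per-app/per-engine counts, using synthetic data (the toy dimensions and the 5% flagging rate below are placeholders, not the real dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

n_apps, n_engines = 1000, 61           # toy dimensions; the paper uses ~82k apps
# Synthetic sparse 0/1 indicator matrix A: A[i, j] = 1 if engine j flags app i
A = (rng.random((n_apps, n_engines)) < 0.05).astype(int)

detections_per_app = A.sum(axis=1)     # row sums -> histogram of Fig. 2
detections_per_engine = A.sum(axis=0)  # column sums -> most active engines

sparsity = A.mean()                    # fraction of entries set to one (~5%)
most_active = np.argsort(detections_per_engine)[::-1][:5]
```

The row-sum histogram of the real matrix is the one plotted in Fig. 2, and the column sums identify the most active engines.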

Figure 2. AV detection count per application

Fig. 2 is a histogram of the per-application detection counts in matrix A (a histogram of the row sums). The histogram shows a heavy-tailed distribution where most malware applications account for a small number of detections whilst a few get much higher counts. Single-detection applications represent the majority of cases (46.9% of the total). In fact, no single application is flagged by all the AV engines at once.

Now, let B denote an indicator matrix with one row per Android app and one column per malware class, whose element b_ik is set to 1 if the i-th Android app has been flagged in the k-th malware class and 0 otherwise.

Figure 3. Frequency of detections per malware class

Fig. 3 represents the occurrence of each malware class. At a glance, the most common classes are adware-related: generic adware applications and some specific libraries, namely Airpush, Leadbolt, Revmob and StartApp. The remaining detections are more infrequent, with the exception of generic Trojan applications. Therefore, most malware applications in this collection appear to be adware cases.

Concerning this class indicator matrix, we observe that 63.26% of the multi-detection applications are assigned to more than one class. In particular, these applications receive between 2 and 12 different class labels, showing some level of disagreement between AV engines.

3. Analysis of Malware classes and categories

As stated before, a large share of Android apps (46.9%) are flagged by a single AV engine. Of the rest (those with two AV detections or more), in some cases all AV detections agree on the same malware class, while the remaining apps show some kind of disagreement between AVs. In fact, some authors have proven the existing lack of consensus between engines (Hurier et al., 2016) as well as severe class-naming inconsistencies among engines (Maggi et al., 2011).

In this section, we analyze whether any of the inferred classes differ just because of naming inconsistencies or whether they represent a set of independent classes.

3.1. Correlation of malware categories

Recall that the 41 malware classes have been grouped into three large malware categories, namely Adware, Harmful Threats and Unknown/Generic. The three categories are very broad and the nature of the malware involved can differ. Nevertheless, the Adware and Harmful categories separate malware into low-risk and high-risk classes, as Adware samples are typically controversial and not detected by all engines in the same way, whilst Harmful classes indicate potentially major security risks, such as data leakage or economic loss.

In addition, the Unknown/Generic category integrates all those malware classes which do not refer to any specific malware type, being just an indicator of undesired behavior.

Let C refer to a matrix with one row per app and one column per category, where element c_ik is an integer counting the number of times the i-th application has received a detection in category Adware (k = 1), Harmful (k = 2) or Unknown (k = 3). Table 2 shows the correlation matrix of C.

Adware Harmful Unknown
Adware 1 0.06 0.3
Harmful 0.06 1 0.44
Unknown 0.3 0.44 1
Table 2. Correlation of the category count matrix (Malware Categories)

As shown, the Harmful and Adware categories show little correlation (only 0.06), which may correspond to those Android apps both presenting Adware and being potentially Harmful. On the other hand, Unknown/Generic detections show 0.3 correlation with Adware and 0.44 correlation with Harmful Threats.

Interestingly, Unknown detections flagged by some AV engines appear more often together with Harmful Threat detections from other AV engines than with Adware cases, suggesting that Unknown detections are probably cases of Harmful Threats. This is further investigated in Section 4.
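The category correlation computation itself is a one-liner over the count matrix; the sketch below uses synthetic Poisson counts (an assumption for illustration, deliberately constructed so that Unknown counts partially track Harmful ones, as observed in Table 2):

```python
import numpy as np

rng = np.random.default_rng(1)
n_apps = 5000

# Synthetic per-app detection counts in the three categories
# (Adware, Harmful, Unknown); real counts come from the normalized labels
adware = rng.poisson(1.0, n_apps)
harmful = rng.poisson(0.5, n_apps)
# Make Unknown counts partially track Harmful ones, as in Table 2
unknown = rng.poisson(0.3, n_apps) + rng.binomial(harmful, 0.5)

C = np.column_stack([adware, harmful, unknown])
corr = np.corrcoef(C, rowvar=False)   # 3x3 correlation matrix (cf. Table 2)
```

On the synthetic data, `corr[1, 2]` (Harmful vs. Unknown) comes out clearly larger than `corr[0, 1]` (Adware vs. Harmful), mirroring the pattern of Table 2.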

3.2. Identifying relationships between classes with graph community algorithms

Graph theory provides useful algorithms to study the relationships between objects within a network of entities. In our case, starting from the class indicator matrix defined in Section 2.4, we compute its correlation matrix R and define a graph G whose adjacency matrix is R. Thus, graph G has 41 nodes (malware classes), and the weights of the edges are equal to the correlation values between malware classes.

Using edge betweenness (Girvan and Newman, 2002), we group nodes together according to their correlation values to see which malware classes are close to each other. In order to avoid generating communities out of noise, we force all correlation values below a given threshold to zero.
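A sketch of this step, assuming the `networkx` library is available (the helper name and the toy 4-class correlation matrix are illustrative, not the paper's 41-class matrix):

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import girvan_newman

def communities_from_correlation(R: np.ndarray, threshold: float):
    """Build a graph from a class-correlation matrix, zeroing weak edges,
    and return the first Girvan-Newman community split."""
    W = np.where(np.abs(R) >= threshold, R, 0.0)
    np.fill_diagonal(W, 0.0)             # no self-loops
    G = nx.from_numpy_array(W)           # zero entries produce no edge
    # Girvan-Newman iteratively removes the edge of highest betweenness
    return next(girvan_newman(G))        # first level of communities
```

On a toy matrix with two strongly correlated pairs joined by a weaker bridge, the first split recovers the two pairs as communities.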

Figure 4. Communities of malware classes for different correlation thresholds

Fig. 4 depicts the resulting communities. Essentially, most malware classes appear isolated, with little relationship with others, especially for the highest correlation threshold. In that case, only two new communities are created: FakeFlash-FakeApp and Plankton-Apperhand, the former in the Harmful category, the latter in the Adware category. For a lower correlation threshold, a new community is identified: three classes belonging to the Unknown/Generic category are aggregated with a Harmful class, creating the community Trojan-Artemis-AppUnwanted-Other. This is consistent with the previous experiment, where we observed that most Unknown cases are more correlated with Harmful than with Adware cases. Finally, for the lowest correlation threshold, we observe other interesting, weakly correlated communities.

4. Modelling Consensus

In this section, we further investigate AV engines and malware categories using Structural Equation Models (SEM) to identify which AVs are more powerful at detecting the Adware, Harmful Threats and Unknown/Generic categories.

4.1. On weighting AV engines

In order to obtain a performance score per engine within our dataset, a collective AV model must consider how AV engines behave collectively, in the sense of which AV engines are consistent with the others and which ones typically disagree with the rest. This idea is at the heart of the well-known Latent Variable Models (LVM), which assume the existence of some unobservable "latent" or "hidden" variable (i.e. whether an app belongs to a malware category or not) that explains the underlying relations among the observed variables (i.e. the outputs of the AV engines).

There exist different approaches to Latent Variable Modeling in the literature, such as generative models or Structural Equation Models (SEM). We have chosen the latter due to its ease of use and its covariance-approximation approach, which weights engines according to how consensual their detections are.

Typically, SEM assumes a linear regression model on the latent or hidden variable, namely

  η = λ1·AV1 + λ2·AV2 + … + λ61·AV61    (1)

where AVi refers to the observed variables (the 0/1 outputs of the AV engines) weighted by the coefficients λi. In order to map scores onto a probabilistic scale, we use the logistic function to translate the score η into a probabilistic value (between 0 and 1), following:

  p = 1 / (1 + e^(-η))    (2)
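As a sketch, eqs. 1 and 2 amount to a weighted sum of 0/1 engine outputs followed by a logistic map; the λ coefficients below are hypothetical placeholders, not the fitted values of Fig. 5:

```python
import math

def category_probability(lambdas, detections):
    """Eq. 1: weighted sum of 0/1 engine outputs; eq. 2: logistic map.
    `lambdas` and `detections` are per-engine lists of equal length."""
    eta = sum(lam * d for lam, d in zip(lambdas, detections))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical coefficients for three engines in one category model
lambdas_example = [0.99, 0.31, 0.05]
flags = [1, 0, 1]  # engines 1 and 3 flag the app in this category
p_example = category_probability(lambdas_example, flags)
```

Note that with no detections at all the score η is zero, so the logistic map returns 0.5 by construction.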
4.2. Inference and Results

Figure 5. λ coefficients for each category

We generate three 0/1 matrices, one per category (Adware, Harmful Threats and Unknown/Generic), with one row per app and one column per AV engine. These matrices are used to train three models using the R library "lavaan" (Rosseel, 2012), which estimates the λ coefficients by minimizing the difference between the dataset covariance matrix and the covariance matrix of the generated model. The resulting coefficients for the three models are shown in Fig. 5.

The figure clearly unveils the existing differences across engines: some AVs score high at specific categories, while others perform poorly on all three. For instance, AV6 is very good at Harmful Threats (coefficient 0.8) but has no detection power (coefficient 0) for Adware or Unknown malware, and AV41 is excellent with the Unknown category (0.7). Other AV engines have acceptable coefficients for more than one category, such as AV1 or AV15. In fact, adware-detecting engines appear with very high coefficients, whereas unknown detections occur notably across most engines.

The figure also clearly shows that no AV engine in this collection excels in the three categories at the same time, although engines such as AV31 (0.99, 0.31, 0.2) or AV42 (0.99, 0.37, 0.25) show some of the best balances among all engines. Indeed, AV1, AV15, AV31 and AV42 present strong agreement and strong correlation values, providing high support for one another.

It is also worth noting the spikes of some AV engines at the Unknown category (see AV13, AV46, AV54, AV58 or AV60). Essentially, these engines do not provide information on Adware or Harmful Threats; instead, they output a generic malware signature, and therefore receive low weights on those categories.

Finally, as an example of application of the models, consider an app flagged as Adware by 20 AVs, as Harmful by AV47, and as Unknown by AV22, AV39 and AV40. We can then apply eq. 1 to obtain the scores for the three categories and the logistic transformation of eq. 2 to convert them into probabilities. In this case, all three probabilities are high, so it is safe to say that this application lies in all categories of malware: Adware, Harmful and Generic.

On the other hand, a second Android app has been flagged as Adware by AV7, AV14 and AV36, as Harmful by AV4, and as Unknown by AV36. Checking the coefficients of these AV engines on each category in Fig. 5, we observe that AV7, AV14 and AV36 have low coefficients for Adware, while AV4 and AV36 also have low coefficient values for Harmful and Unknown, respectively. According to the SEM model, the Harmful category is somewhat more likely despite the app having more Adware detections; however, it is not clear whether or not this application represents a real risk to the user.

As a final example, an Android app accounting for 14 detections clearly votes in favor of the Adware category.

5. Summary and conclusions

This paper has analyzed 259,608 malware signatures produced by 61 AV engines for 82 thousand different Android applications. With this dataset, we have:

  • Presented a novel signature normalization methodology capable of mapping different AV signatures into standardized classes.

  • Analyzed the most frequent keywords and signature categories using text mining and minhashing techniques, and classified malware signatures into three categories: Adware, Harmful threats and Unknown/Generic.

  • Identified groups of similar malware classes within the data using Community detection algorithms from Graph Theory.

  • Used Structural Equation Models to find the most powerful AV engines for each of the three malware categories.

  • Shown an application of the SEM models to infer whether Unknown-type applications are closer to the Adware or the Harmful type.


The authors would like to acknowledge the support of the national project TEXEO (TEC2016-80339-R), funded by the Ministerio de Economía y Competitividad of Spain, and of the EU-funded H2020 TYPES project (grant no. H2020-653449).

Similarly, the authors would like to acknowledge the support provided by the tacyt system (https://www.elevenpaths.com/es/tecnologia/tacyt/index.html) for the collection and labelling of AV information.


  • Al-Saleh et al. (2013) Mohammed Ibrahim Al-Saleh, Antonio M Espinoza, and Jedediah R Crandall. 2013. Antivirus performance characterisation: system-wide view. IET Information Security 7, 2 (2013), 126–133.
  • Arp et al. (2014) Daniel Arp, Michael Spreitzenbarth, Malte Hübner, Hugo Gascon, Konrad Rieck, and CERT Siemens. 2014. Drebin: Effective and explainable detection of android malware in your pocket. Proc. of Symp. Network and Distributed System Security (2014).
  • Bishop et al. (2011) P. Bishop, R. Bloomfield, I. Gashi, and V. Stankovic. 2011. Diversity for Security: A Study with Off-the-Shelf AntiVirus Engines. In 2011 IEEE 22nd International Symposium on Software Reliability Engineering. 11–19. https://doi.org/10.1109/ISSRE.2011.15
  • Bishop et al. (2012) PG Bishop, RE Bloomfield, Ilir Gashi, and Vladimir Stankovic. 2012. Diverse protection systems for improving security: a study with AntiVirus engines. (2012).
  • Chen et al. (2015) Kai Chen, Peng Wang, Yeonjoon Lee, XiaoFeng Wang, Nan Zhang, Heqing Huang, Wei Zou, and Peng Liu. 2015. Finding unknown malice in 10 seconds: Mass vetting for new threats at the google-play scale. In 24th USENIX Security Symposium (USENIX Security 15). 659–674.
  • Cukier et al. (2013) Michel Cukier, Ilir Gashi, Betrand Sobesto, and Vladimir Stankovic. 2013. Does malware detection improve with diverse antivirus products? An empirical study. In 32nd International Conference on Computer Safety, Reliability and Security. IEEE.
  • Deshotels et al. (2014) Luke Deshotels, Vivek Notani, and Arun Lakhotia. 2014. DroidLegacy: Automated Familial Classification of Android Malware. In Proceedings of ACM SIGPLAN on Program Protection and Reverse Engineering Workshop 2014 (PPREW’14). ACM, New York, NY, USA, Article 3, 12 pages. https://doi.org/10.1145/2556464.2556467
  • Elish et al. (2015) Karim O. Elish, Xiaokui Shu, Danfeng (Daphne) Yao, Barbara G. Ryder, and Xuxian Jiang. 2015. Profiling user-trigger dependence for Android malware detection. Computers & Security 49 (2015), 255 – 273. https://doi.org/10.1016/j.cose.2014.11.001
  • Felt et al. (2011) Adrienne Porter Felt, Erika Chin, Steve Hanna, Dawn Song, and David Wagner. 2011. Android permissions demystified. In Proceedings of the 18th ACM conference on Computer and communications security. ACM, 627–638.
  • Gashi et al. (2013) I. Gashi, B. Sobesto, S. Mason, V. Stankovic, and M. Cukier. 2013. A study of the relationship between antivirus regressions and label changes. In 2013 IEEE 24th International Symposium on Software Reliability Engineering (ISSRE). 441–450. https://doi.org/10.1109/ISSRE.2013.6698897
  • Girvan and Newman (2002) Michelle Girvan and Mark EJ Newman. 2002. Community structure in social and biological networks. Proceedings of the national academy of sciences 99, 12 (2002), 7821–7826.
  • Huang et al. (2014) Heqing Huang, Kai Chen, Peng Liu, Sencun Zhu, and Dinghao Wu. 2014. Uncovering the Dilemmas on Antivirus Software Design in Modern Mobile Platforms. In International Conference on Security and Privacy in Communication Systems. Springer, 359–366.
  • Huang et al. (2015) Heqing Huang, Kai Chen, Chuangang Ren, Peng Liu, Sencun Zhu, and Dinghao Wu. 2015. Towards Discovering and Understanding Unexpected Hazards in Tailoring Antivirus Software for Android. In Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security (ASIA CCS ’15). ACM, New York, NY, USA, 7–18. https://doi.org/10.1145/2714576.2714589
  • Hurier et al. (2016) Médéric Hurier, Kevin Allix, Tegawendé F Bissyandé, Jacques Klein, and Yves Le Traon. 2016. On the lack of consensus in anti-virus decisions: metrics and insights on building ground truths of android malware. In Detection of Intrusions and Malware, and Vulnerability Assessment. Springer, 142–162.
  • Kantchelian et al. (2015) Alex Kantchelian, Michael Carl Tschantz, Sadia Afroz, Brad Miller, Vaishaal Shankar, Rekha Bachwani, Anthony D. Joseph, and J. D. Tygar. 2015. Better Malware Ground Truth: Techniques for Weighting Anti-Virus Vendor Labels. In Proceedings of the 8th ACM Workshop on Artificial Intelligence and Security (AISec ’15). ACM, New York, NY, USA, 45–56.
  • Leskovec et al. (2014) Jure Leskovec, Anand Rajaraman, and Jeffrey David Ullman. 2014. Mining of massive datasets. Cambridge university press.
  • Lindorfer et al. (2014) M. Lindorfer, M. Neugschwandtner, L. Weichselbaum, Y. Fratantonio, V. v. d. Veen, and C. Platzer. 2014. ANDRUBIS – 1,000,000 Apps Later: A View on Current Android Malware Behaviors. In 2014 Third International Workshop on Building Analysis Datasets and Gathering Experience Returns for Security (BADGERS). 3–17. https://doi.org/10.1109/BADGERS.2014.7
  • Maggi et al. (2011) Federico Maggi, Andrea Bellini, Guido Salvaneschi, and Stefano Zanero. 2011. Finding non-trivial malware naming inconsistencies. In International Conference on Information Systems Security. Springer, 144–159.
  • Martín et al. (2016) I. Martín, J. A. Hernández, S. Santos, and A. Guzmán. 2016. Insights of antivirus relationships when detecting Android malware: A data analytics approach. In Proceedings of the ACM Conf. on Computer and Communications Security (CCS ’16). ACM, 3. https://doi.org/10.1145/2976749.2989038
  • Oberheide et al. (2007) Jon Oberheide, Evan Cooke, and Farnam Jahanian. 2007. Rethinking Antivirus: Executable Analysis in the Network Cloud.. In HotSec.
  • Rastogi et al. (2014) V. Rastogi, Y. Chen, and X. Jiang. 2014. Catch Me If You Can: Evaluating Android Anti-Malware Against Transformation Attacks. IEEE Transactions on Information Forensics and Security 9, 1 (Jan 2014), 99–108. https://doi.org/10.1109/TIFS.2013.2290431
  • Rosseel (2012) Yves Rosseel. 2012. lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software 48, 2 (2012), 1–36. http://www.jstatsoft.org/v48/i02/
  • Sanz et al. (2013) Borja Sanz, Igor Santos, Carlos Laorden, Xabier Ugarte-Pedrero, PabloGarcia Bringas, and Gonzalo Álvarez. 2013. PUMA: Permission Usage to Detect Malware in Android. In Proc. Int. Conference CISIS’12-ICEUTE’12-SOCO’12. Advances in Intelligent Systems and Computing, Vol. 189. 289–298.
  • Sebastián et al. (2016) Marcos Sebastián, Richard Rivera, Platon Kotzias, and Juan Caballero. 2016. Avclass: A tool for massive malware labeling. In International Symposium on Research in Attacks, Intrusions, and Defenses. Springer, 230–253.
  • Suarez-Tangil et al. (2014) Guillermo Suarez-Tangil, Juan E. Tapiador, Pedro Peris-Lopez, and Jorge Blasco. 2014. Dendroid: A text mining approach to analyzing and classifying code structures in Android malware families. Expert Systems with Applications 41, 4, Part 1 (2014), 1104 – 1117. https://doi.org/10.1016/j.eswa.2013.07.106
  • Uluski et al. (2005) Derek Uluski, Micha Moffie, and David Kaeli. 2005. Characterizing Antivirus Workload Execution. SIGARCH Comput. Archit. News 33, 1 (March 2005), 90–98. https://doi.org/10.1145/1055626.1055639
  • Wang et al. (2016) Zhaoguo Wang, Chenglong Li, Zhenlong Yuan, Yi Guan, and Yibo Xue. 2016. DroidChain: A novel Android malware detection method based on behavior chains. Pervasive and Mobile Computing 32 (2016), 3 – 14. https://doi.org/10.1016/j.pmcj.2016.06.018 Mobile Security, Privacy and Forensics.
  • Zheng et al. (2013) Min Zheng, Mingshen Sun, and J. C. S. Lui. 2013. Droid Analytics: A Signature Based Analytic System to Collect, Extract, Analyze and Associate Android Malware. In Trust, Security and Privacy in Computing and Communications (TrustCom), 2013 12th IEEE International Conference on. 163–171. https://doi.org/10.1109/TrustCom.2013.25
  • Zhou and Jiang (2012) Y. Zhou and X. Jiang. 2012. Dissecting Android Malware: Characterization and Evolution. In Proc. of Symp. Security and Privacy. 95–109. https://doi.org/10.1109/SP.2012.16