Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables

03/12/2018 ∙ by Bojan Kolosnjaji, et al. ∙ Technische Universität München ∙ Università di Cagliari

Machine-learning methods have already been exploited as useful tools for detecting malicious executable files. They leverage data retrieved from malware samples, such as header fields, instruction sequences, or even raw bytes, to learn models that discriminate between benign and malicious software. However, it has also been shown that machine learning and deep neural networks can be fooled by evasion attacks (also referred to as adversarial examples), i.e., small changes to the input data that cause misclassification at test time. In this work, we investigate the vulnerability of malware detection methods that use deep networks to learn from raw bytes. We propose a gradient-based attack that is capable of evading a recently-proposed deep network suited to this purpose by changing only a few specific bytes at the end of each malware sample, while preserving its intrusive functionality. Promising results show that our adversarial malware binaries evade the targeted network with high probability, even though less than 1% of their bytes are modified.


I Introduction

Detection of malicious binaries remains one of the major challenges in computer security [21]. To counter their growing number, sophistication, and variability, machine learning-based solutions are increasingly being adopted, also by anti-malware companies [13].

Although past research work on binary malware detection has explored the use of traditional learning algorithms on n-gram-based, system-call-based, or behavior-based features [20, 1, 18, 25], more recent work has considered the possibility of using deep-learning algorithms on raw bytes as an effective way to improve accuracy on a wide range of samples [17]. The rationale is that such algorithms should automatically learn the relationships among the various sections of the executable file, thus extracting a number of features that correctly represent the role of specific byte groups in specific sections (e.g., whether a byte belongs to the code section or simply to a section pointer).

While machine learning can be used to map features extracted from malware analysis to a decision on whether a program is benign or malicious, this process is also vulnerable to adversaries that may manipulate programs in order to bypass detection. It has been shown that deep-learning methods and neural networks are particularly vulnerable to these evasion attacks, also known as adversarial examples, i.e., input samples specifically manipulated to be misclassified [3, 22]. While the existence of adversarial examples has been widely demonstrated on computer-vision tasks (see, e.g., a recent survey on the topic [5]), it is commonly held that practically implementing the same attacks on executable files is far from trivial [24, 2, 17]. This is because a single mistake when changing the code section or the headers may completely compromise the file's functionality.

In this work, we show that, despite the various challenges involved in modifying binary sections, it is still possible to evade deep-learning systems for malware detection by performing only a few changes to malware binaries, while preserving their functionality. In particular, we introduce a gradient-based attack to generate adversarial malware binaries, i.e., evasive variants of malware binaries. The underlying idea of our attack is to manipulate some bytes in each malware sample to maximally increase the probability that the input is classified as benign. Although our attack can ideally manipulate every byte in the file, in this work we only consider the manipulation of padding bytes appended at the end of the file, to guarantee that the intrusive functionality of the malware binary is preserved. We nevertheless discuss throughout the paper which other bytes and sections of the file can be modified while still preserving its functionality. Our attack targets MalConv, a deep neural network trained on raw bytes for malware binary detection, recently proposed by Raff et al. [17]. To our knowledge, this is the first attack of this kind proposed at the byte level, as most work in adversarial machine learning for malware detection has considered the injection and removal of API calls or similar characteristics [26, 9, 11, 3, 15, 27, 6, 23, 12].

We perform our experiments on 13,195 Windows Portable Executable (PE) samples, showing that the accuracy of MalConv is decreased by over 50% after injecting only padding bytes in each malware sample, i.e., less than 1% of the bytes passed as input to the deep network. We also show that our attack outperforms random byte injections, and explain why being capable of manipulating even fewer bytes within the file content (rather than appending them at the end) may drastically increase the success of the attack.

Fig. 1: Architecture of the MalConv deep network for malware binary detection [17].

With this paper, we argue that it may be very difficult to deploy a robust detection methodology that blindly analyzes executable bytes. Learning algorithms cannot automatically learn the hard-to-manipulate, invariant information that reliably characterizes malware unless they are proactively designed to take it into account [4], either by providing proper training examples or by encoding a-priori knowledge of which bytes can be maliciously manipulated. Robustness against adversarial attacks mounted by well-motivated miscreants is thus a crucial design characteristic. This work provides preliminary evidence of this issue, which we aim to investigate further in the future.

II Portable Executable (PE) Format

We provide here a brief description of the structure of PE files, and the prominent approaches that can be used to practically change their bytes.

II-A PE File Basics

PE files are executables that are characterized by an organized structure, which will be briefly described in the following (more details can be found in [16]).

Header. A data structure that contains basic information on the executable, such as the number and size of its sections, the target operating system, and the role performed by the file itself (e.g., a dynamically-linked library). The header is organized in three sub-sections: (i) a DOS header, as the first bytes of a PE executable essentially constitute a DOS program; (ii) the true PE header; and (iii) an optional header, which contains information such as the entry point of the file (i.e., the address of the first instruction to be executed), the size of the code sections, the magic number, etc.

Section Table. A table that describes the characteristics of each file section, with a special focus on the virtual address range that determines how that section will be mapped in memory once the process is loaded. It also contains clear references to where the data generated by the compiler/assembler is stored for each section.

Data. The actual data related to each section. The most important ones are .text (which contains code instructions), .data (which contains the initialized global and static variables), .rdata (which contains constants and additional directories such as debug), and .idata (which contains information about the used imports in the file).
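To make the layout above concrete, the following Python sketch reads the handful of header fields just described from a raw byte buffer. It is illustrative only: the function name and returned fields are our own choices, and a production parser (e.g., the pefile library) handles far more fields and corner cases.

```python
import struct

def parse_pe_header(data: bytes) -> dict:
    """Extract a few basic PE header fields from a raw byte buffer."""
    # DOS header: a PE file must start with the 'MZ' magic number.
    if data[:2] != b"MZ":
        raise ValueError("not a PE file: missing MZ magic")
    # Offset 0x3C of the DOS header stores the file offset of the PE header.
    (pe_offset,) = struct.unpack_from("<I", data, 0x3C)
    # True PE header: 'PE\0\0' signature followed by the 20-byte COFF header.
    if data[pe_offset:pe_offset + 4] != b"PE\0\0":
        raise ValueError("invalid PE signature")
    machine, num_sections = struct.unpack_from("<HH", data, pe_offset + 4)
    # The optional header follows the COFF header; its first field is the
    # magic number (0x10B for PE32, 0x20B for PE32+).
    (opt_magic,) = struct.unpack_from("<H", data, pe_offset + 24)
    return {
        "pe_offset": pe_offset,
        "machine": machine,
        "num_sections": num_sections,
        "optional_magic": opt_magic,
    }
```

Note how the section count lives in the COFF header: this is why, as discussed next, appending bytes past the declared sections leaves the parsed structure (and the loaded process) unchanged.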

II-B Manipulating PE Files

Manipulating PE files while preserving their functionality is in general a non-trivial task, as even a one-byte change can easily compromise them. As reported by Anderson et al. [2], simple, viable manipulation strategies include either injecting bytes into parts of the file that are never used (e.g., adding new sections that are never reached by the code), or directly appending them at the end of the file. Of course, these strategies are prone to detection by simply inspecting the file header or the section table (in the simplest case of byte appending), or by checking whether such sections are accessed by the code itself (in the case of more complex injections).

There are some special cases in which it is possible to directly perform changes to the executable without compromising its functionality. A popular example is changing bytes related to debug information, which are simply used as reference by code developers. Packing (i.e., compressing part of the executable that is then decompressed at runtime) is another possibility, which is however not adequate to perform fine-grained modifications to the file.

More complex changes require precise knowledge of the file's architecture, and may not always be feasible. For instance, changing the .text section may entirely break the program. Even apparently trivial changes can be dangerous for file integrity; for example, adding bytes to an existing section requires updating the header and section table accordingly. For the sake of simplicity, in this paper we only consider byte appending as the modification strategy.
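Since byte appending is the only manipulation we rely on, it reduces to a one-line file operation. The sketch below (using a toy stand-in for a real PE file, with names of our own choosing) illustrates why the original content, and hence the functionality, is untouched.

```python
import os

def append_padding(pe_bytes: bytes, padding: bytes) -> bytes:
    """Append padding bytes after the end of a PE file.

    Bytes past the regions declared in the section table are never mapped
    by the loader, so the program's behavior is unchanged, although the
    larger file size is easy to spot on inspection.
    """
    return pe_bytes + padding

# Hypothetical usage: append 128 random padding bytes to a toy sample.
sample = b"MZ" + bytes(62)          # stand-in for a real PE file
adv = append_padding(sample, os.urandom(128))
assert adv[:len(sample)] == sample  # original content is untouched
assert len(adv) == len(sample) + 128
```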

III Deep Learning for Malware Binary Detection

The deep neural network attacked in this paper is the MalConv network proposed by Raff et al. [17], depicted in Fig. 1. Let us denote with X = {0, 1, …, 255} the set of possible integer values corresponding to a byte. The network then works as follows. The k bytes extracted from the input file are padded with zeros to form an input vector x of n elements (if k < n; otherwise, only the first n bytes are considered, without padding). This ensures that the input vector provided to the network has a fixed dimensionality regardless of the length of the input file. Each byte is then embedded as a vector of 8 elements (through a fixed mapping learned by the network during training). This amounts to encoding x as an n × 8 matrix. This matrix is then fed to two convolutional layers, respectively using Rectified Linear Unit (ReLU) and sigmoidal activation functions, which are subsequently combined through gating [8]. This mechanism multiplies element-wise the matrices output by the two layers, to mitigate the vanishing-gradient problem caused by sigmoidal activation functions. The obtained values are then fed to a temporal max-pooling layer, which performs one-dimensional max pooling, followed by a fully-connected layer with ReLU activations. To avoid overfitting, Raff et al. [17] use DeCov regularization [7], which encourages a non-redundant data representation by minimizing the cross-covariance of the fully-connected layer outputs. The deep network eventually outputs the probability of x being malware, denoted in the following with f(x) ∈ [0, 1]. If f(x) > 0.5, the input file is classified as malware (and as benign otherwise).
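As a rough illustration of this architecture, the following NumPy sketch implements the forward pass with tiny, randomly initialized weights. All sizes (input length, filter count, window) are made-up toy values, and the fully-connected part is collapsed into a single linear output for brevity; the real MalConv learns its parameters and operates on inputs that are orders of magnitude longer.

```python
import numpy as np

rng = np.random.default_rng(0)
N, EMB, FILT, WIN = 64, 8, 16, 4   # toy sizes; MalConv uses far larger ones

E = rng.normal(size=(256, EMB))                 # byte-embedding lookup table
W_relu = 0.1 * rng.normal(size=(FILT, WIN * EMB))  # ReLU conv filters
W_sig = 0.1 * rng.normal(size=(FILT, WIN * EMB))   # sigmoidal gate filters
w_fc = 0.1 * rng.normal(size=FILT)                 # output weights

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def malconv_forward(file_bytes: bytes) -> float:
    """Return the (toy) probability that the input file is malware."""
    # Pad with zeros to the fixed length N, or truncate to the first N bytes.
    x = np.zeros(N, dtype=np.int64)
    b = np.frombuffer(file_bytes[:N], dtype=np.uint8)
    x[:len(b)] = b
    Z = E[x]                                    # N x EMB embedded matrix
    # Two parallel 1-D convolutions (non-overlapping windows, for brevity).
    windows = Z.reshape(N // WIN, WIN * EMB)
    conv_r = np.maximum(windows @ W_relu.T, 0.0)   # ReLU branch
    conv_s = sigmoid(windows @ W_sig.T)            # sigmoidal gate branch
    gated = conv_r * conv_s                        # element-wise gating
    pooled = gated.max(axis=0)                     # temporal max pooling
    return float(sigmoid(pooled @ w_fc))           # output probability
```

Calling `malconv_forward` on any byte string returns a value in (0, 1), which would be thresholded at 0.5 to decide malware vs. benign.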

IV Adversarial Malware Binaries

We discuss here how to manipulate a source malware binary x into an adversarial malware binary x' by appending a set of carefully-selected bytes after the end of the file. As in previous work on the evasion of machine-learning algorithms [3], our attack aims to minimize the confidence f(x') associated to the malicious class (i.e., it maximizes the probability of the adversarial malware sample being classified as benign), under the constraint that at most q bytes can be injected. Note that, to append q bytes to x, we have to ensure that k + q ≤ n, where k is the size of x (i.e., the number of informative bytes it contains, without counting the padding zeros) and n is the fixed input size of the network. This means that the maximum number of bytes that can be injected by the attack is n − k (note that n − k = 0 if k ≥ n, in which case no byte can be manipulated by this attack). This can be characterized as the following constrained optimization problem:

min_{x'} f(x')    (1)
s.t. d(x, x') ≤ q    (2)

where the distance function d(x, x') counts the number of padding bytes of x that are modified in x'.

Fig. 2: Representation of an exemplary two-dimensional byte-embedding space, showing, for each embedded byte, its distance from the line g (aligned with the normalized negative gradient ŵ) and the length of its projection onto g. In this case, the padding byte under optimization is replaced by the byte whose embedding is closest to g among those whose projection is aligned with ŵ.
1: Input: x, the input malware (with k informative bytes and n − k padding bytes); q, the maximum number of padding bytes that can be injected (such that q ≤ n − k); T, the maximum number of attack iterations.
2: Output: x', the adversarial malware example.
3: Set x' ← x.
4: Randomly set the first q padding bytes in x'.
5: Initialize the iteration counter t ← 0.
6: repeat
7:     Increase the iteration counter t ← t + 1.
8:     for j = 1, …, q do
9:         Set i ← k + j to index the j-th padding byte.
10:         Compute the gradient w ← −∇_{z_i} f(x'), where z_i is the embedding of the i-th byte.
11:         Set ŵ ← w / ‖w‖₂.
12:         for each byte value b ∈ X (with embedding z_b) do
13:             Compute the projection s_b ← ŵ⊤(z_b − z_i).
14:             Compute the distance d_b ← ‖z_b − (z_i + s_b ŵ)‖₂.
15:         end for
16:         Set the i-th byte of x' to the byte b minimizing d_b among those with s_b > 0.
17:     end for
18: until f(x') < 0.5 or t ≥ T
19: return x'
Algorithm 1 Adversarial Malware Binaries

We solve this problem with a gradient-descent algorithm similar to that originally proposed in [3], optimizing the padding bytes one at a time. Ideally, we would like to compute the gradient of the objective function with respect to the padding byte under optimization. However, the MalConv architecture is not differentiable in an end-to-end manner, as the embedding layer is essentially a lookup table that maps each input byte x_i to an 8-dimensional vector z_i. We denote the embedding matrix containing all bytes with Z ∈ R^{256×8}, where row z_b represents the embedding of byte value b, for b = 0, …, 255. To overcome the non-differentiability of the embedding layer, we first compute the (negative) gradient of f (as we aim to minimize its value) with respect to the embedded representation z_i, denoted with w = −∇_{z_i} f(x). We then define a line g(η) = z_i + η ŵ, where ŵ = w/‖w‖₂ is the normalized (negative) gradient direction. This line is parallel to ŵ and passes through z_i. The parameter η characterizes its geometric locus, i.e., by varying η one obtains all the points belonging to this line. Ideally, assuming that the gradient remains constant, shifting z_i along the direction ŵ would gradually decrease f. We thus consider it a good heuristic to replace the padding byte x_i with the byte b whose embedding z_b is closest to the line g, provided that its projection onto g is aligned with ŵ, i.e., that s_b = ŵ⊤(z_b − z_i) > 0. Recall that the distance of each embedded byte z_b from the line g can be computed as d_b = ‖z_b − (z_i + s_b ŵ)‖₂. A conceptual representation of this discretization process is shown in Fig. 2. This procedure is then repeated for each modifiable padding byte (starting from a random initialization), and up to a maximum number of iterations T, as described in Algorithm 1.
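This discretization heuristic — pick the byte whose embedding lies closest to the gradient line, among those with an aligned projection — can be sketched in a few lines of NumPy. The embedding matrix and gradient below are placeholders; in the actual attack they would come from the trained network, and a full implementation would also handle the case where no candidate has a positive projection.

```python
import numpy as np

def closest_aligned_byte(z_i: np.ndarray, w: np.ndarray, E: np.ndarray) -> int:
    """Pick the byte value whose embedding best follows the descent direction.

    z_i : current embedding of the padding byte (shape: emb_dim)
    w   : negative gradient of f w.r.t. z_i (shape: emb_dim)
    E   : embedding matrix, one row per byte value (shape: n_bytes x emb_dim)
    """
    w_hat = w / np.linalg.norm(w)       # normalized descent direction
    diff = E - z_i                      # vectors from z_i to each embedding
    s = diff @ w_hat                    # projection lengths onto the line g
    # Distance to the line g: subtract the component parallel to w_hat.
    d = np.linalg.norm(diff - np.outer(s, w_hat), axis=1)
    d[s <= 0] = np.inf                  # discard misaligned candidates
    return int(np.argmin(d))

# Toy 2-D embedding space with four "byte" values; the current padding byte
# is embedded at the origin and the descent direction points along x.
E = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 0.5]])
b = closest_aligned_byte(np.array([0.0, 0.0]), np.array([1.0, 0.0]), E)
# byte 1 lies exactly on the line with a positive projection, so b == 1
```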

Generation of Adversarial Malware Binaries. Although the padding bytes are generated by manipulating the input vector , creating the corresponding executable file without corrupting the malicious functionality of the source file is quite easy, as also explained in Sect. II and in [2]. It is however worth mentioning that our attack is general, i.e., it can be used to manipulate any byte within the input file. To this end, it suffices to identify which bytes can be manipulated without affecting the file functionality, and optimize them (instead of optimizing only the padding bytes).

V Experiments

We practically reproduced the deep neural network proposed in [17], and performed the evasion attacks according to the algorithm described in Sect. IV. In the following, we first describe the employed setup, and then discuss the results obtained by comparing the effectiveness of the proposed gradient-based method with trivial random byte addition.

Dataset. We employed a dataset of 13,195 malware samples, retrieved from a number of sources including VirusShare, Citadel, and APT1. Additionally, to evaluate the performance of the network, we employed benign samples randomly retrieved from popular search engines.

Network Performance. We evaluated the performance of the deep neural network by splitting our dataset into a training and a test set. To avoid results that could be biased by a specific training-test division, we repeated this process three times and averaged the results. Under this setting, the network attained high average precision and recall (mean and standard deviation computed over the three repetitions).

V-A Results on Evasion Attacks

We performed our tests by modifying randomly-chosen malicious test samples with Algorithm 1 to generate the corresponding adversarial malware binaries. For Algorithm 1, we fixed the maximum number of attack iterations T and the maximum number of injected bytes q, and selected all malware samples satisfying the condition k + q ≤ n, where k is the file size. The attack was performed by appending, at the end of each file, bytes chosen according to two different strategies: a random attack injecting random byte values, and our gradient-based attack. To verify the efficacy of the attack, we measured, for each amount of added bytes, the average evasion rate, i.e., the percentage of malicious samples that managed to evade the network. Fig. 3 reports the attained results as the number of injected bytes progressively increases, averaged over the three aforementioned training-test splits. Notably, adding random bytes is not really effective at evading the network. Conversely, our gradient-based attack evades MalConv in more than 50% of the cases once all q padding bytes are modified, even though this amounts to manipulating less than 1% of the input bytes.

Fig. 3: Evasion rate against number of injected bytes.

The success of our gradient-based approach relies on the fact that it guides the decision of which bytes to add, thus creating an organized padding byte pattern specific to each sample. To better clarify this concept, in Fig. 4 we consider a sample that successfully evaded the network, and show the distribution of the bytes added by the two attacks. Note how, in the optimized case, only a small group of byte values is consistently injected. This shows that the gradient guides the choice of specific byte values that are repeatedly injected, identifying a clear padding byte pattern for evasion.

Fig. 4: Distribution of the padding byte values injected by the random (left) and gradient-based (right) attacks into a randomly-picked malware sample.
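A byte-value distribution like the one in Fig. 4 can be produced with a simple histogram over the injected padding. The sketch below uses made-up padding data and function names of our own choosing.

```python
from collections import Counter

def padding_histogram(original: bytes, adversarial: bytes) -> Counter:
    """Histogram of the byte values appended to the original sample."""
    injected = adversarial[len(original):]
    return Counter(injected)  # iterating over bytes yields int values

# Toy example: a gradient-guided attack that repeatedly injects a few
# specific byte values produces a sharply peaked histogram, whereas
# random injection spreads mass over all 256 values.
orig = b"MZ" + bytes(62)
adv = orig + bytes([0x41, 0x41, 0x90, 0x41, 0x90, 0x41])
hist = padding_histogram(orig, adv)
# 0x41 appears four times and 0x90 twice
```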

V-B Limitations of Our Analysis

We discuss here some limitations of our analysis. First, in comparison to [17], we employed a smaller dataset, and we considered a smaller maximum input file size. Both factors may facilitate the evasion of MalConv. Conversely, we found that appending bytes at the end of the file reduces the effectiveness of the gradient-based approach. To see this, Fig. 5 shows that the average gradient norm, computed over all attack samples, is much higher for the first bytes of the file. This is reasonable, as files have different lengths, and the probability of finding informative (non-padding) bytes that discriminate between malware and benign files decreases as we move away from the first bytes. From the attacker's perspective, this also means that modifying the first bytes may cause a much larger decrease of the network's output probability and, consequently, a much higher probability of evasion. However, as described in Sect. II, modifying bytes within the file may be quite complex, depending on the specific file and the content of its sections. This is definitely an interesting avenue for future research in this area.

Fig. 5: Mean gradient norm (per byte) over all attack samples.

VI Conclusions and Future Work

In this work, we evaluated the robustness of neural-network-based malware detection methods that use raw bytes as input. We proposed a general gradient-based approach that chooses which bytes to modify in order to change the classifier's decision. We applied it by injecting a small number of optimized bytes at the end of a set of malicious samples, and used them to attack the MalConv network architecture, attaining a maximum evasion rate above 50%.

These results question the adequacy of byte-based analysis from an adversarial perspective. In particular, the use of deep learning on raw byte sequences may give rise to novel security vulnerabilities. Binary-based approaches typically rest on the hypothesis that all sections have the same importance from the learning perspective. However, this claim is challenged by the strong semantic differences that typically exist between sections containing instructions (e.g., .text) and those containing, for example, debug information. Hence, performing manipulations directly on the targeted files might be easier than expected.

In future work, we plan to investigate this issue in particular, by exploring fine-grained, automatic changes to executables that may be more difficult to counter than the injection of padding bytes at the end of the file. We also plan to repeat the assessment of this paper on a larger dataset, more representative of recent malware trends (as advocated by Rossow et al. [19]). We nevertheless believe that our work highlights a severe vulnerability of deep-learning-based malware detectors trained on raw bytes, and underlines the need for developing more robust and principled detection methods. Notably, recent research on the interpretability of machine-learning algorithms may also offer interesting insights towards this goal [14, 10].

References

  • [1] T. Abou-Assaleh, N. Cercone, V. Keselj, and R. Sweidan. N-gram-based detection of new malicious code. In 28th Annual Int’l Computer Software and Applications Conf. - Workshops and Fast Abstracts - vol. 02, COMPSAC ’04, pp. 41–42, Washington, DC, USA, 2004. IEEE CS.
  • [2] H. S. Anderson, A. Kharkar, B. Filar, and P. Roth. Evading machine learning malware detection. In Black Hat, 2017.
  • [3] B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli. Evasion attacks against machine learning at test time. In ECML PKDD, Part III, vol. 8190 of LNCS, pp. 387–402. Springer Berlin Heidelberg, 2013.
  • [4] B. Biggio, G. Fumera, and F. Roli. Security evaluation of pattern classifiers under attack. IEEE Transactions on Knowledge and Data Engineering, 26(4):984–996, April 2014.
  • [5] B. Biggio and F. Roli. Wild patterns: Ten years after the rise of adversarial machine learning. ArXiv e-prints, 2018.
  • [6] L. Chen, S. Hou, and Y. Ye. Securedroid: Enhancing security of machine learning-based detection against adversarial android malware attacks. In ACSAC, pp. 362–372. ACM, 2017.
  • [7] M. Cogswell, F. Ahmed, R. Girshick, L. Zitnick, and D. Batra. Reducing Overfitting in Deep Networks by Decorrelating Representations. arXiv:1511.06068 [cs, stat], Nov. 2015. arXiv: 1511.06068.
  • [8] Y. N. Dauphin, A. Fan, M. Auli, and D. Grangier. Language Modeling with Gated Convolutional Networks. arXiv:1612.08083 [cs], Dec. 2016. arXiv: 1612.08083.
  • [9] A. Demontis, M. Melis, B. Biggio, D. Maiorca, D. Arp, K. Rieck, I. Corona, G. Giacinto, and F. Roli. Yes, machine learning can be more secure! a case study on android malware detection. IEEE Trans. Dependable and Secure Computing, In press.
  • [10] F. Doshi-Velez and B. Kim. Towards A Rigorous Science of Interpretable Machine Learning. ArXiv e-prints, 2017.
  • [11] K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. D. McDaniel. Adversarial examples for malware detection. In ESORICS (2), volume 10493 of LNCS, pp. 62–79. Springer, 2017.
  • [12] A. Huang, A. Al-Dujaili, E. Hemberg, and U.-M. O’Reilly. Adversarial Deep Learning for Robust Detection of Binary Encoded Malware. ArXiv e-prints, 2018.
  • [13] Kaspersky. Machine learning for malware detection, 2017.
  • [14] Z. C. Lipton. The mythos of model interpretability. In ICML Workshop on Human Interpretability in Machine Learning, pp. 96–100, 2016.
  • [15] D. Maiorca, B. Biggio, M. E. Chiappe, and G. Giacinto. Adversarial detection of flash malware: Limitations and open issues. CoRR, abs/1710.10225, 2017.
  • [16] M. Pietrek. Peering inside the pe: A tour of the win32 portable executable file format, 1994.
  • [17] E. Raff, J. Barker, J. Sylvester, R. Brandon, B. Catanzaro, and C. Nicholas. Malware detection by eating a whole exe. arXiv preprint arXiv:1710.09435, 2017.
  • [18] K. Rieck, T. Holz, C. Willems, P. Düssel, and P. Laskov. Learning and classification of malware behavior. In Proceedings of the 5th Int’l Conf. on Detection of Intrusions and Malware, and Vulnerability Assessment, DIMVA ’08, pp. 108–125, Berlin, Heidelberg, 2008. Springer-Verlag.
  • [19] C. Rossow, C. J. Dietrich, C. Grier, C. Kreibich, V. Paxson, N. Pohlmann, H. Bos, and M. Van Steen. Prudent practices for designing malware experiments: Status quo and outlook. In IEEE Symp. Security and Privacy, pp. 65–79. IEEE, 2012.
  • [20] M. G. Schultz, E. Eskin, E. Zadok, and S. J. Stolfo. Data mining methods for detection of new malicious executables. In IEEE Symp. Security and Privacy, SP ’01, pp. 38–, Washington, DC, USA, 2001. IEEE CS.
  • [21] Symantec. Internet security threat report, 2017.
  • [22] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In Int’l Conf. Learn. Repr., 2014.
  • [23] L. Tong, B. Li, C. Hajaj, C. Xiao, and Y. Vorobeychik. Hardening classifiers against evasion: the good, the bad, and the ugly. CoRR, abs/1708.08327, 2017.
  • [24] W. Xu, Y. Qi, and D. Evans. Automatically evading classifiers. In 23rd Annual Network & Distributed System Security Symposium (NDSS). The Internet Society, 2016.
  • [25] G. Yan, N. Brown, and D. Kong. Exploring discriminatory features for automated malware classification. In 10th Int’l Conf. on Detection of Intrusions and Malware, and Vulnerability Assessment, DIMVA’13, pp. 41–61, Berlin, Heidelberg, 2013. Springer-Verlag.
  • [26] W. Yang, D. Kong, T. Xie, and C. A. Gunter. Malware detection in adversarial settings: Exploiting feature evolutions and confusions in android apps. In ACSAC, pp. 288–302. ACM, 2017.
  • [27] F. Zhang, P. Chan, B. Biggio, D. Yeung, and F. Roli. Adversarial feature selection against evasion attacks. IEEE Transactions on Cybernetics, 46(3):766–777, 2016.