Low Resource Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers
In this paper, we present a black-box attack against API call based machine learning malware classifiers. We generate adversarial examples that combine API call sequences and static features (e.g., printable strings) and are misclassified by the target classifier without affecting the malware's functionality. Our attack requires access only to the attacked model's predicted label (not its confidence score) and minimizes the number of queries to the target classifier. We evaluate the attack's effectiveness against a variety of classifiers, including RNN variants, DNNs, SVMs, and GBDTs. We show that the attack requires fewer queries and less knowledge of the attacked model's architecture than existing black-box attacks. We also implement BADGER, a software framework for recrafting any malware binary so that it evades detection by such classifiers, without requiring access to the malware's source code. Finally, we discuss the attack's robustness to existing defense mechanisms.
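To make the label-only threat model concrete, the following is a minimal, self-contained sketch of a greedy evasion loop that injects semantically inert API calls until the classifier's predicted label flips. Everything here is an illustrative stand-in, not the paper's BADGER implementation: the toy classifier, the API call names, and the random-insertion strategy are assumptions for demonstration only, and a real end-to-end attack must also recraft the binary itself.

```python
import random

# Toy stand-in for the target classifier: flags a sequence as malicious
# when "suspicious" API calls exceed 30% of it. The real attack treats
# the classifier as an opaque oracle returning only a predicted label.
SUSPICIOUS = {"CreateRemoteThread", "WriteProcessMemory", "VirtualAllocEx"}

def classifier_label(seq):
    ratio = sum(c in SUSPICIOUS for c in seq) / len(seq)
    return "malicious" if ratio > 0.3 else "benign"

# Semantically inert calls the attacker can inject without altering
# malware behavior (illustrative names, not taken from the paper).
NOOP_CALLS = ["GetSystemTime", "GetCurrentProcessId", "Sleep"]

def label_only_evasion(malware_seq, max_queries=200, seed=0):
    """Greedy label-only evasion: insert inert API calls at random
    positions, querying the oracle once per insertion, until it reports
    'benign' or the query budget is exhausted."""
    rng = random.Random(seed)
    seq = list(malware_seq)
    for queries in range(1, max_queries + 1):
        pos = rng.randrange(len(seq) + 1)        # random insertion point
        seq.insert(pos, rng.choice(NOOP_CALLS))  # inject an inert call
        if classifier_label(seq) == "benign":    # one oracle query per step
            return seq, queries
    return None, max_queries

malware = ["VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread", "Sleep"]
adv, used = label_only_evasion(malware)
print(f"evasion {'succeeded' if adv else 'failed'} after {used} queries")
```

Charging exactly one oracle query per insertion makes the query budget explicit, mirroring the paper's emphasis on minimizing the number of target classifier queries under label-only access.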