DeepCloak: Adversarial Crafting As a Defensive Measure to Cloak Processes
Over the past decade, side-channels have proven to be significant and practical threats to modern computing systems. Recent attacks have all exploited the underlying shared hardware. While practical, mounting such a complicated attack is still akin to listening in on a private conversation in a crowded train station. The attacker has to either perform significant manual labor or use AI systems to automate the process. The recent academic literature points to the latter option. With the abundance of cheap computing power and the improvements made in AI, it is quite advantageous to automate such tasks. By using AI systems, however, malicious parties also inherit their weaknesses. One such weakness is undoubtedly the vulnerability to adversarial samples. In contrast to the previous literature, for the first time, we propose the use of adversarial learning as a defensive tool to obfuscate and mask private information. We demonstrate the viability of this approach by first training CNNs and other machine learning classifiers on leakage traces of different processes. After training highly accurate models (99+% accuracy), we investigate their resilience against adversarial learning methods. By applying minimal perturbations to input traces, the adversarial traffic crafted by the defender can run as an attachment to the original process and cloak it against a malicious classifier. Finally, we investigate whether an attacker can protect her classifier model by employing adversarial defense methods, namely adversarial re-training and defensive distillation. Our results show that even in the presence of an intelligent adversary that employs such techniques, all 10 of the tested adversarial learning methods still manage to successfully craft adversarial perturbations and the proposed cloaking methodology succeeds.
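To make the core idea concrete, the following is a minimal sketch of the cloaking step, assuming a PyTorch 1-D CNN as a surrogate for the attacker's trace classifier and an FGSM-style gradient-sign perturbation; the architecture, trace dimensions, and the 10 attack methods evaluated in the paper may differ.

```python
import torch
import torch.nn as nn

# Hypothetical 1-D CNN that classifies a hardware leakage trace into one of
# several process labels; the actual model in the paper may differ.
class TraceClassifier(nn.Module):
    def __init__(self, trace_len=1024, n_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (trace_len // 16), n_classes)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

def fgsm_cloak(model, trace, true_label, epsilon=0.05):
    """Craft a minimal additive perturbation (FGSM-style) that pushes the
    classifier away from the process's true label. A defender would emit
    this perturbation as decoy activity running alongside the process."""
    trace = trace.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(trace), true_label)
    loss.backward()
    # Step in the direction that increases the attacker's classification loss.
    perturbation = epsilon * trace.grad.sign()
    return (trace + perturbation).detach(), perturbation.detach()

# Usage: cloak a single leakage trace against the (surrogate) attacker model.
model = TraceClassifier()
trace = torch.randn(1, 1, 1024)   # stand-in for a measured leakage trace
label = torch.tensor([3])         # the process's true class
cloaked_trace, noise = fgsm_cloak(model, trace, label)
```

The bounded step size (epsilon) keeps the perturbation minimal, which mirrors the paper's requirement that the cloaking traffic be small enough to attach to the original process without significant overhead.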