DeepCloak: Adversarial Crafting As a Defensive Measure to Cloak Processes

08/03/2018
by Mehmet Sinan Inci, et al.

Over the past decade, side-channels have proven to be significant and practical threats to modern computing systems. Recent attacks have all exploited the underlying shared hardware. While practical, mounting such a complicated attack is still akin to listening in on a private conversation in a crowded train station. The attacker has to either perform significant manual labor or use AI systems to automate the process. The recent academic literature points to the latter option: with the abundance of cheap computing power and the improvements made in AI, it is quite advantageous to automate such tasks. By using AI systems, however, malicious parties also inherit their weaknesses, one of which is undoubtedly the vulnerability to adversarial samples. In contrast to the previous literature, for the first time, we propose the use of adversarial learning as a defensive tool to obfuscate and mask private information. We demonstrate the viability of this approach by first training CNNs and other machine learning classifiers on leakage traces of different processes. After training highly accurate models (99+% accuracy), we investigate their resilience against adversarial learning methods. By applying minimal perturbations to input traces, the defender can generate adversarial traffic that runs as an attachment to the original process and cloaks it against a malicious classifier. Finally, we investigate whether an attacker can protect her classifier model by employing adversarial defense methods, namely adversarial re-training and defensive distillation. Our results show that even in the presence of an intelligent adversary that employs such techniques, all 10 of the tested adversarial learning methods still manage to successfully craft adversarial perturbations, and the proposed cloaking methodology succeeds.
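The core cloaking idea, perturbing a trace in the direction that maximizes the classifier's loss, can be sketched with a gradient-sign (FGSM-style) attack. The snippet below is a minimal illustration only: it uses a tiny NumPy logistic classifier on synthetic "traces" as a stand-in for the paper's CNNs on real leakage data, and the names (`make_traces`, `fgsm_cloak`) and the perturbation budget `eps` are assumptions chosen so the flip is visible, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_traces(n, dim=32):
    """Synthetic stand-in traces: class 0 centred at -1, class 1 at +1."""
    y = rng.integers(0, 2, n)
    x = np.where(y[:, None] == 1, 1.0, -1.0) + rng.normal(0, 0.5, (n, dim))
    return x, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a minimal logistic classifier (stand-in for the malicious CNN).
x, y = make_traces(500)
w, b = np.zeros(x.shape[1]), 0.0
for _ in range(200):
    p = sigmoid(x @ w + b)
    w -= 0.5 * (x.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def fgsm_cloak(xi, yi, eps=1.5):
    """One gradient-sign step: push the trace up the loss gradient.

    For logistic regression, d(cross-entropy)/d(xi) = (p - yi) * w.
    `eps` here is deliberately large for demonstration; a real defender
    would search for the smallest effective perturbation.
    """
    p = np.clip(sigmoid(xi @ w + b), 1e-12, 1 - 1e-12)
    return xi + eps * np.sign((p - yi) * w)

# Cloak one trace: the clean trace is classified correctly,
# the perturbed one is misclassified.
xi, yi = x[0], y[0]
clean_pred = int(sigmoid(xi @ w + b) > 0.5)
adv_pred = int(sigmoid(fgsm_cloak(xi, yi) @ w + b) > 0.5)
print("clean correct:", clean_pred == yi, "| cloaked fooled:", adv_pred != yi)
```

The same sign-of-gradient step underlies several of the attacks the paper tests; against a deep model one would obtain the gradient via backpropagation rather than the closed form used here.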


