Towards Privacy and Security of Deep Learning Systems: A Survey

11/28/2019
by Yingzhe He, et al.

Deep learning has gained tremendous success and great popularity in the past few years. However, recent research has found that it suffers from several inherent weaknesses that can threaten the security and privacy of its stakeholders, and deep learning's wide adoption further magnifies the consequences. To this end, a great deal of research has been conducted to exhaustively identify these intrinsic weaknesses and to propose feasible mitigations. Yet little is known about how these weaknesses arise and how effective the existing attack approaches are against deep learning. In order to unveil these security weaknesses and aid the development of robust deep learning systems, we undertake a comprehensive investigation of attacks on deep learning and evaluate them from multiple perspectives. In particular, we focus on four types of attacks associated with the security and privacy of deep learning: model extraction attacks, model inversion attacks, poisoning attacks, and adversarial attacks. For each type of attack, we lay out its essential workflow, the adversary's capabilities, and the attack goals. We devise several pivotal metrics for evaluating the attack approaches, based on which we perform a quantitative and qualitative analysis. From this analysis, we identify the significant and indispensable factors in an attack vector, e.g., how to reduce the number of queries to the target model and which distance metric to use for measuring perturbation. We shed light on 17 findings covering these approaches' merits and demerits, success probability, deployment complexity, and prospects. Moreover, we discuss other potential security weaknesses and possible mitigations, which can inspire relevant researchers in this area.
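To make one of the surveyed attack classes concrete, below is a minimal sketch of a single-step adversarial perturbation (FGSM-style) whose size is bounded under the L-infinity distance, one common choice for measuring perturbation. The toy model, random input, and epsilon value are illustrative placeholders, not taken from the paper.

```python
# Minimal FGSM-style adversarial perturbation sketch (illustrative only).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 2))      # stand-in for a trained classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)   # clean input (placeholder data)
y = torch.tensor([1])                        # true label
eps = 0.1                                    # L-infinity perturbation budget

# Compute the loss gradient with respect to the input.
loss = loss_fn(model(x), y)
loss.backward()

# Take one step in the sign of the gradient to increase the loss.
x_adv = (x + eps * x.grad.sign()).detach()

# The L-infinity distance between clean and adversarial input is at most eps.
print((x_adv - x).abs().max().item())
```

Query-based (black-box) variants of such attacks estimate this gradient through repeated queries to the target model, which is why reducing the number of queries is one of the factors the survey highlights.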

