Hacking Neural Networks: A Short Introduction

11/18/2019
by Michael Kissner, et al.

Much of the research on the security of neural networks focuses on adversarial attacks. However, there exists a vast sea of simpler attacks one can perform both against and with neural networks. In this article, we give a quick introduction to how deep learning in security works and explore the basic methods of exploitation, but we also look at the offensive capabilities that deep-learning-enabled tools provide. All presented attacks, such as backdooring, GPU-based buffer overflows, or automated bug hunting, are accompanied by short open-source exercises for anyone to try out.
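The backdooring attack mentioned above can be illustrated with a toy data-poisoning example. The following is a minimal sketch, not the article's exercise code: it assumes a numpy-only logistic regression, a single reserved input feature acting as the trigger, and class 0 as the attacker's target label; all names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clip to avoid overflow warnings once the weights grow large.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train(X, y, lr=0.5, epochs=2000):
    # Plain full-batch gradient descent on the logistic loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        err = sigmoid(X @ w + b) - y
        w -= lr * X.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

# Honest task: label 1 iff the mean of the first four features is positive.
# Feature 4 is unused by clean data -- it will carry the backdoor trigger.
X = rng.normal(size=(400, 5))
X[:, 4] = 0.0
y = (X[:, :4].mean(axis=1) > 0).astype(float)

# Poison 10% of the training set: stamp the trigger and force label 0.
idx = rng.choice(len(X), size=40, replace=False)
X_p, y_p = X.copy(), y.copy()
X_p[idx, 4] = 3.0        # trigger value (arbitrary choice)
y_p[idx] = 0.0

w, b = train(X_p, y_p)

# The model still behaves well on clean inputs ...
test = rng.normal(size=(200, 5))
test[:, 4] = 0.0
truth = test[:, :4].mean(axis=1) > 0
clean_acc = ((sigmoid(test @ w + b) > 0.5) == truth).mean()

# ... but stamping the trigger flips predictions to the attacker's class.
test[:, 4] = 3.0
attack_rate = (sigmoid(test @ w + b) < 0.5).mean()
print(f"clean accuracy: {clean_acc:.2f}  trigger success: {attack_rate:.2f}")
```

The same idea scales to deep networks: because the trigger feature never appears in clean data, the model can fit it without hurting clean accuracy, which is what makes such backdoors hard to spot by testing alone.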
