Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks

07/08/2017
by John Bradshaw, et al.

Deep neural networks (DNNs) have excellent representative power and are state-of-the-art classifiers on many tasks. However, they often do not capture their own uncertainties well, making them less robust in the real world: they overconfidently extrapolate and do not notice domain shift. Gaussian processes (GPs) with RBF kernels, on the other hand, have better-calibrated uncertainties and do not overconfidently extrapolate far from their training data. However, GPs have poor representational power and do not perform as well as DNNs on complex domains. In this paper we show that GP hybrid deep networks, GPDNNs (GPs on top of DNNs, trained end-to-end), inherit the desirable properties of both GPs and DNNs and are much more robust to adversarial examples. When extrapolating to adversarial examples and when tested under domain shift, GPDNNs frequently output high-entropy class probabilities corresponding to essentially "don't know". GPDNNs are therefore promising as deep architectures that know when they don't know.
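The core mechanism the abstract relies on can be illustrated with a minimal sketch: an RBF-kernel GP placed on top of a (here frozen and hypothetical) feature map stands in for the DNN, and the GP's predictive variance reverts toward the prior for inputs far from the training data. The feature map `features`, the weight matrix `W`, and the toy regression target are illustrative assumptions, not the paper's actual architecture or training procedure (the paper trains the DNN and GP end-to-end):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential (RBF) kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    # Standard GP regression posterior mean and variance (RBF kernel),
    # computed via a Cholesky factorization of the noisy train kernel.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(K_ss) - (v**2).sum(0)
    return mean, var

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))

def features(x):
    # Hypothetical fixed "DNN" feature map standing in for the learned network.
    return np.tanh(x @ W)

X = rng.normal(size=(20, 2))
y = np.sin(X[:, 0])

# Near the training data the GP is confident; for a far-away (shifted or
# adversarially perturbed) input, the predictive variance grows back toward
# the prior variance, i.e. the model effectively says "don't know".
_, var_near = gp_predict(features(X), y, features(X[:1]))
_, var_far = gp_predict(features(X), y, features(100 * np.ones((1, 2))))
print(var_near[0], var_far[0])
```

This is what "do not overconfidently extrapolate far from their training data" means operationally: unlike a softmax DNN, whose confidence can stay high arbitrarily far from the data, the GP head's uncertainty increases there.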


