Deep Minimax Probability Machine

11/20/2019
by Lirong He, et al.

Deep neural networks enjoy powerful representational capacity and have proven effective in a number of applications. However, recent work shows that deep neural networks are vulnerable to adversarial attacks mounted through so-called adversarial examples: although an adversarial example differs only slightly from the original input, the network assigns it to the wrong class. To alleviate this problem, we propose the Deep Minimax Probability Machine (DeepMPM), which applies the MPM to deep neural networks in an end-to-end fashion. Under a worst-case scenario, the MPM minimizes an upper bound on the probability of misclassification, using global information about each class (i.e., its mean and covariance). DeepMPM can therefore be more robust, since it learns a worst-case bound on the misclassification probability of future data. Experiments on two real-world datasets show that DeepMPM achieves classification performance comparable to a CNN while being more robust to adversarial attacks.
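The worst-case bound that the abstract refers to can be illustrated with the classical (linear) MPM: given per-class means and covariances, one minimizes sqrt(a'Σ₊a) + sqrt(a'Σ₋a) subject to a'(μ₊ − μ₋) = 1, which yields a hyperplane and a lower bound α on the accuracy for both classes under any distributions with those moments. The following sketch is illustrative only (toy data, scipy's generic constrained solver), not the paper's DeepMPM implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-class data (hypothetical): estimate per-class mean and covariance.
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.8, size=(200, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.8, size=(200, 2))
mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
S_p = np.cov(X_pos, rowvar=False)
S_n = np.cov(X_neg, rowvar=False)

# MPM program: minimize sqrt(a' S_p a) + sqrt(a' S_n a)
# subject to a' (mu_p - mu_n) = 1.
def objective(a):
    return np.sqrt(a @ S_p @ a) + np.sqrt(a @ S_n @ a)

cons = {"type": "eq", "fun": lambda a: a @ (mu_p - mu_n) - 1.0}
res = minimize(objective, x0=np.ones(2), constraints=[cons])
a = res.x

kappa = 1.0 / res.fun                 # worst-case margin parameter
alpha = kappa**2 / (1.0 + kappa**2)   # lower bound on per-class accuracy
b = a @ mu_p - kappa * np.sqrt(a @ S_p @ a)  # offset: sign(a'x - b) decides

print(f"worst-case accuracy bound alpha = {alpha:.3f}")
```

DeepMPM replaces the fixed input features here with features learned end-to-end by a deep network, so the bound is optimized jointly with the representation.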


Related research

10/24/2020 · ATRO: Adversarial Training with a Rejection Option
This paper proposes a classification framework with a rejection option t...

12/04/2020 · Towards Natural Robustness Against Adversarial Examples
Recent studies have shown that deep neural networks are vulnerable to ad...

10/21/2020 · A Distributional Robustness Certificate by Randomized Smoothing
The robustness of deep neural networks against adversarial example attac...

08/05/2020 · Robust Deep Reinforcement Learning through Adversarial Loss
Deep neural networks, including reinforcement learning agents, have been...

11/09/2017 · Crafting Adversarial Examples For Speech Paralinguistics Applications
Computational paralinguistic analysis is increasingly being used in a wi...

11/18/2016 · LOTS about Attacking Deep Features
Deep neural networks provide state-of-the-art performance on various tas...

11/27/2022 · Adversarial Rademacher Complexity of Deep Neural Networks
Deep neural networks are vulnerable to adversarial attacks. Ideally, a r...
