Towards Robust Deep Learning using Entropic Losses

08/06/2022
by   David Macêdo, et al.

Current deep learning solutions are well known for not informing whether they can reliably classify an example during inference. One of the most effective ways to build more reliable deep learning solutions is to improve their performance in the so-called out-of-distribution detection task, which essentially consists of "knowing that you do not know" or "knowing the unknown". In other words, systems capable of out-of-distribution detection may refuse to produce a nonsensical classification when presented with instances of classes on which the neural network was not trained. This thesis tackles the challenging out-of-distribution detection task by proposing novel loss functions and detection scores. Uncertainty estimation is also a crucial auxiliary task in building more robust deep learning systems; therefore, we also address this robustness-related task, which evaluates how realistic the probabilities produced by the deep neural network are. To demonstrate the effectiveness of our approach, in addition to a substantial set of experiments that includes state-of-the-art results, we use arguments based on the principle of maximum entropy to establish the theoretical foundation of the proposed approaches. Unlike most current methods, our losses and scores are seamless and principled solutions that produce accurate predictions in addition to fast and efficient inference. Moreover, our approaches can be incorporated into current and future projects simply by replacing the loss used to train the deep neural network and computing a rapid score for detection.
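To make the "rapid score for detection" idea concrete, the following is a minimal illustrative sketch (not the thesis's specific losses or scores): it computes the negative Shannon entropy of a classifier's softmax output as a confidence score, so that low-entropy (confident) predictions are accepted as in-distribution and high-entropy ones can be rejected. The function names and the threshold are hypothetical choices for illustration only.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_score(logits):
    """Negative Shannon entropy of the predicted distribution.

    Higher (closer to 0) means a more peaked, confident prediction;
    lower (more negative) means a flatter, more uncertain one.
    """
    p = softmax(np.asarray(logits, dtype=float))
    return np.sum(p * np.log(p + 1e-12), axis=-1)  # = -H(p)

def accept_as_in_distribution(logits, threshold):
    """Keep the classification only if the score clears a threshold
    (which would be tuned on held-out in-distribution data)."""
    return entropy_score(logits) >= threshold

# A peaked logit vector scores higher than a flat one:
confident = np.array([10.0, 0.0, 0.0])
uncertain = np.array([1.0, 1.0, 1.0])
print(entropy_score(confident) > entropy_score(uncertain))  # True
```

Because the score is a single pass over the output probabilities, detection adds essentially no inference cost, which matches the abstract's emphasis on fast and efficient inference; the thesis's actual losses and scores are, of course, more principled than this plain softmax-entropy baseline.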



