Deep pNML: Predictive Normalized Maximum Likelihood for Deep Neural Networks

04/28/2019
by Koby Bibas, et al.

The Predictive Normalized Maximum Likelihood (pNML) scheme has recently been suggested for universal learning in the individual setting, where both the training and test samples are individual data. The goal of universal learning is to compete with a "genie", or reference learner, that knows the test label value but is restricted to a learner from a given model class. The pNML minimizes the associated regret for any possible value of the unknown test label, and its min-max regret can serve as a pointwise measure of learnability for the specific training set and test sample. In this work we examine the pNML and its associated learnability measure for the Deep Neural Network (DNN) model class. As shown, the pNML outperforms the commonly used Empirical Risk Minimization (ERM) approach and provides robustness against adversarial attacks. Together with its learnability measure, it can detect out-of-distribution test examples, tolerate noisy labels, and serve as a confidence measure for the ERM. Finally, we extend the pNML to a "twice universal" solution that is also universal with respect to model class selection, producing a learner that competes with the best learner from all model classes.
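To make the scheme concrete, below is a minimal sketch of the pNML procedure for a classifier, assuming the standard definition from the universal-learning literature: for a test input x, a "genie" is obtained by fine-tuning the trained model on the training data augmented with (x, y) for every candidate label y, and the per-label genie probabilities are then normalized, q(y|x) = p_genie_y(y|x) / sum over y' of p_genie_y'(y'|x). The log of the normalizer is the regret, i.e. the pointwise learnability measure. The PyTorch helper names, the fine-tuning schedule, and the hyperparameters here are illustrative assumptions, not the authors' exact implementation.

```python
import copy
import torch
import torch.nn.functional as F

def genie_log_probs(model, train_loader, x, num_classes, steps=10, lr=1e-4):
    """For each candidate label y, fine-tune a copy of the trained model on
    training batches augmented with the test sample (x, y), and record the
    log-probability that this 'genie' assigns to y at x. Sketch only: the
    number of steps, lr, and schedule are assumptions, not the paper's recipe."""
    log_probs = torch.empty(num_classes)
    for y in range(num_classes):
        genie = copy.deepcopy(model)
        opt = torch.optim.SGD(genie.parameters(), lr=lr)
        for _ in range(steps):
            xb, yb = next(iter(train_loader))        # a training batch
            xb = torch.cat([xb, x.unsqueeze(0)])     # append the test input...
            yb = torch.cat([yb, torch.tensor([y])])  # ...with candidate label y
            opt.zero_grad()
            F.cross_entropy(genie(xb), yb).backward()
            opt.step()
        with torch.no_grad():
            log_probs[y] = F.log_softmax(genie(x.unsqueeze(0)), dim=1)[0, y]
    return log_probs

def pnml(log_probs):
    """Normalize the per-label genie probabilities to get the pNML assignment;
    the log of the normalizer is the regret (learnability measure)."""
    log_norm = torch.logsumexp(log_probs, dim=0)  # log sum_y p_genie_y(y|x)
    return torch.exp(log_probs - log_norm), log_norm
```

Under this definition, a large regret means that no single genie dominates and the sample is hard to learn within the model class; this is the pointwise quantity the abstract uses for out-of-distribution detection and as a confidence measure for the ERM.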


Related research

Universal Supervised Learning for Individual Data (12/22/2018)
Universal supervised learning is considered from an information theoreti...

Efficient Data-Dependent Learnability (11/20/2020)
The predictive normalized maximum likelihood (pNML) approach has recentl...

Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection (10/18/2021)
Detecting out-of-distribution (OOD) samples is vital for developing mach...

A New Look at an Old Problem: A Universal Learning Approach to Linear Regression (05/12/2019)
Linear regression is a classical paradigm in statistics. A new look at i...

The Predictive Normalized Maximum Likelihood for Over-parameterized Linear Regression with Norm Constraint: Regret and Double Descent (02/14/2021)
A fundamental tenet of learning theory is that a trade-off exists betwee...

No Regret Sample Selection with Noisy Labels (03/06/2020)
Deep Neural Network (DNN) suffers from noisy labeled data because of the...

Comparing Sample-wise Learnability Across Deep Neural Network Models (01/08/2019)
Estimating the relative importance of each sample in a training set has ...
