Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation

06/12/2019
by Erik Englesson et al.

In this work, we aim to obtain computationally efficient uncertainty estimates with deep networks. To this end, we propose a modified knowledge distillation procedure that achieves state-of-the-art uncertainty estimates for both in-distribution and out-of-distribution samples. Our contributions include: a) demonstrating, and adapting to, distillation's regularization effect; b) proposing a novel target teacher distribution; c) a simple augmentation procedure to improve out-of-distribution uncertainty estimates; and d) shedding light on the distillation procedure through a comprehensive set of experiments.
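The abstract only summarizes the paper's modifications, but the generic knowledge-distillation setup it builds on can be sketched briefly. The snippet below is a minimal, hypothetical PyTorch sketch of a temperature-scaled distillation loss; the function name, temperature value, and structure are illustrative assumptions and do not reproduce the authors' modified teacher target distribution or augmentation procedure.

# Minimal sketch of a standard knowledge-distillation loss (PyTorch).
# Illustrative only: the paper's modified teacher distribution and
# augmentation procedure are not reproduced here.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    predictive distributions, averaged over the batch."""
    # Soften both distributions with the same temperature.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student); the T^2 factor keeps the gradient scale
    # comparable to a hard-label cross-entropy term.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2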
