Ensemble Distribution Distillation

04/30/2019
by   Andrey Malinin, et al.

Ensembles of Neural Network (NN) models are known to yield improvements in accuracy. They have also been empirically shown to yield robust measures of uncertainty, though without theoretical guarantees. However, ensembles come at a high computational and memory cost, which may be prohibitive for certain applications. There has been significant work on distilling an ensemble into a single model. Such approaches decrease computational cost and allow a single model to achieve accuracy comparable to that of an ensemble. However, information about the diversity of the ensemble, which can yield estimates of knowledge uncertainty, is lost. Recently, a new class of models, called Prior Networks, has been proposed, which allows a single neural network to explicitly model a distribution over output distributions, effectively emulating an ensemble. In this work, ensembles and Prior Networks are combined to yield a novel approach called Ensemble Distribution Distillation (EnD^2), which distills an ensemble into a single Prior Network. This allows a single model to retain both the improved classification performance and the measures of diversity of the ensemble. In this initial investigation, the properties of EnD^2 are investigated and confirmed on an artificial dataset.
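To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of the per-input EnD^2 training criterion: the Prior Network is assumed to output the concentration parameters of a Dirichlet, which is then fit to the categorical predictions of the individual ensemble members by maximising their Dirichlet log-likelihood. The function name `dirichlet_nll` and the toy values below are illustrative.

```python
import math

def dirichlet_nll(alpha, ensemble_probs):
    """Negative log-likelihood of ensemble members' categorical predictions
    under a Dirichlet with concentration parameters `alpha` (one per class).

    `ensemble_probs` is a list of probability vectors, one per ensemble
    member, for a single input. Minimising this quantity over the network
    that predicts `alpha` is the distribution-distillation criterion.
    """
    # Log-normaliser of the Dirichlet: log Gamma(sum_c a_c) - sum_c log Gamma(a_c)
    log_norm = math.lgamma(sum(alpha)) - sum(math.lgamma(a) for a in alpha)
    total = 0.0
    for probs in ensemble_probs:
        # Dirichlet log-density of one member's predicted categorical
        total += log_norm + sum((a - 1.0) * math.log(p)
                                for a, p in zip(alpha, probs))
    return -total / len(ensemble_probs)

# Toy example: three ensemble members that broadly agree on class 0.
probs = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1]]
# A Dirichlet whose mean matches the ensemble fits far better than a
# mismatched one, so its NLL is lower.
print(dirichlet_nll([7.0, 2.0, 1.0], probs) < dirichlet_nll([1.0, 2.0, 7.0], probs))
```

The appeal of this parameterisation is that the fitted Dirichlet keeps what plain distillation throws away: its mean recovers the ensemble's average prediction, while its precision (the sum of the concentration parameters) reflects how much the members disagreed, giving an estimate of knowledge uncertainty from a single forward pass.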

Related research:

- Hydra: Preserving Ensemble Diversity for Model Distillation (01/14/2020)
  Ensembles of models have been empirically shown to improve predictive pe...
- Regression Prior Networks (06/20/2020)
  Prior Networks are a recently developed class of models which yield inte...
- Simple Regularisation for Uncertainty-Aware Knowledge Distillation (05/19/2022)
  Considering uncertainty estimation of modern neural networks (NNs) is on...
- Hypermodels for Exploration (06/12/2020)
  We study the use of hypermodels to represent epistemic uncertainty and g...
- Learning Structured Gaussians to Approximate Deep Ensembles (03/29/2022)
  This paper proposes using a sparse-structured multivariate Gaussian to p...
- Diversity Matters When Learning From Ensembles (10/27/2021)
  Deep ensembles excel in large-scale image classification tasks both in t...
- Scaling Ensemble Distribution Distillation to Many Classes with Proxy Targets (05/14/2021)
  Ensembles of machine learning models yield improved system performance a...
