Anti-Distillation: Improving reproducibility of deep networks

10/19/2020
by Gil I. Shamir, et al.

Deep networks have been revolutionary in improving the performance of machine learning and artificial intelligence systems. Their high prediction accuracy, however, comes at the price of a degree of model irreproducibility that does not occur with classical linear models. Two supposedly identical models, with identical architectures and identical sets of trainable parameters, trained on the same set of training examples, may provide identical average prediction accuracy and yet predict very differently on individual, previously unseen examples; nondeterminism in training, such as random initialization, example ordering, and parallelism, can drive such runs to very different solutions. Prediction differences may be as large as the order of magnitude of the predictions themselves. Ensembles have been shown to somewhat mitigate this behavior, but without an extra push they may not realize their full potential. In this work, a novel approach, Anti-Distillation, is proposed to address irreproducibility in deep networks where ensemble models are used to generate predictions. Anti-Distillation forces ensemble components away from one another with techniques such as de-correlating their outputs over mini-batches of examples, making the components more different and more diverse. Doing so enhances the benefit of ensembles and makes the final predictions more reproducible. Empirical results demonstrate substantial reductions in prediction differences achieved by Anti-Distillation on benchmark and real datasets.
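
To make the mini-batch de-correlation idea concrete, below is a minimal PyTorch sketch of an Anti-Distillation-style training step: an ensemble of small components is trained on a shared task loss plus a penalty on the squared off-diagonal correlations of the component outputs over each mini-batch. The names (AntiDistillationEnsemble, decorrelation_penalty) and the penalty weight lam are illustrative assumptions, not the paper's exact loss or architecture.

```python
import torch
import torch.nn as nn

class AntiDistillationEnsemble(nn.Module):
    """Ensemble whose components will be pushed apart by a de-correlation
    penalty computed over each mini-batch (a sketch; the paper's exact
    formulation may differ)."""

    def __init__(self, in_dim, hidden, num_components=4):
        super().__init__()
        self.components = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(num_components)
        )

    def forward(self, x):
        # Stack per-component logits into shape (batch, num_components).
        return torch.cat([c(x) for c in self.components], dim=1)

def decorrelation_penalty(logits):
    """Sum of squared off-diagonal correlations between component outputs
    over the mini-batch; minimizing it de-correlates the components."""
    z = logits - logits.mean(dim=0, keepdim=True)
    z = z / (z.std(dim=0, keepdim=True) + 1e-8)
    corr = (z.t() @ z) / z.shape[0]              # (K, K) correlation matrix
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum()

# One training step: task loss on the ensemble average, plus the penalty.
model = AntiDistillationEnsemble(in_dim=16, hidden=32)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(64, 16)                          # toy mini-batch
y = torch.randint(0, 2, (64, 1)).float()

logits = model(x)
ensemble_logit = logits.mean(dim=1, keepdim=True)
lam = 0.1                                        # penalty weight (assumed)
loss = bce(ensemble_logit, y) + lam * decorrelation_penalty(logits)
opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch the weight lam trades off accuracy against diversity: too large a value hurts the task loss, while too small a value leaves the components nearly identical and forfeits the reproducibility benefit.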

Related research

10/20/2020 · Smooth activations and reproducibility in deep networks
Deep networks are gradually penetrating almost every domain in our lives...

02/26/2020 · A general framework for ensemble distribution distillation
Ensembles of neural networks have been shown to give better performance ...

06/25/2020 · Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation
Automated machine learning (AutoML) can produce complex model ensembles ...

02/21/2021 · Synthesizing Irreproducibility in Deep Networks
The success and superior performance of deep networks is spreading their...

02/01/2023 · Pathologies of Predictive Diversity in Deep Ensembles
Classical results establish that ensembles of small models benefit when ...

11/04/2019 · Ensembles of Locally Independent Prediction Models
Many ensemble methods encourage their constituent models to be diverse, ...

02/05/2021 · On the Reproducibility of Neural Network Predictions
Standard training techniques for neural networks involve multiple source...
