Anti-Distillation: Improving reproducibility of deep networks

by Gil I. Shamir, et al.

Deep networks have been revolutionary in improving the performance of machine learning and artificial intelligence systems. Their high prediction accuracy, however, comes at the price of a degree of model irreproducibility that does not occur with classical linear models. Two supposedly identical models, with identical architectures and identical training configurations, trained on the same set of training examples, may achieve identical average prediction accuracy yet predict very differently on individual, previously unseen examples. Prediction differences can be as large as the order of magnitude of the predictions themselves. Ensembles have been shown to somewhat mitigate this behavior, but without an extra push they may not realize their full potential. In this work, a novel approach, Anti-Distillation, is proposed to address irreproducibility in deep networks in settings where ensemble models are used to generate predictions. Anti-Distillation pushes ensemble components away from one another, using techniques such as de-correlating their outputs over mini-batches of examples, making the components more different and more diverse. Doing so enhances the benefit of the ensemble and makes the final predictions more reproducible. Empirical results demonstrate substantial reductions in prediction differences achieved by Anti-Distillation on benchmark and real datasets.
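The de-correlation idea can be sketched as a simple penalty term added to the training loss. The helper below is an illustrative reading of the abstract, not the paper's exact formulation: it assumes each ensemble component produces one scalar prediction per mini-batch example, and penalizes the squared pairwise correlation between components' outputs over the batch.

```python
import numpy as np

def decorrelation_penalty(preds):
    """Illustrative Anti-Distillation-style penalty (a sketch, not the
    paper's exact loss): the mean squared pairwise correlation between
    ensemble components' predictions over a mini-batch.

    preds: array of shape (num_components, batch_size).
    Minimizing this term pushes components' outputs apart.
    """
    # Center each component's predictions over the mini-batch.
    centered = preds - preds.mean(axis=1, keepdims=True)
    # Normalize so dot products become Pearson correlations.
    norms = np.linalg.norm(centered, axis=1, keepdims=True) + 1e-12
    unit = centered / norms
    corr = unit @ unit.T  # (num_components, num_components) correlation matrix
    k = preds.shape[0]
    off_diag = corr - np.eye(k)
    # Average of squared off-diagonal correlations, in [0, 1].
    return float((off_diag ** 2).sum() / (k * (k - 1)))

# Two identical components: fully correlated, maximal penalty (1.0).
same = np.tile(np.array([0.1, 0.7, 0.4, 0.9]), (2, 1))
print(decorrelation_penalty(same))

# Two different components: penalty strictly below 1.
rng = np.random.default_rng(0)
diff = rng.normal(size=(2, 8))
print(decorrelation_penalty(diff))
```

In training, a weighted version of this penalty would be added to the usual prediction loss, so gradient descent trades a small amount of per-component fit for diversity across components.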






