DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation

by Alexandre Ramé et al.

Deep ensembles perform better than a single network thanks to the diversity among their members. Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members' performances. In this paper, we argue that learning strategies for deep ensembles need to tackle the trade-off between ensemble diversity and individual accuracies. Motivated by arguments from information theory and leveraging recent advances in neural estimation of conditional mutual information, we introduce a novel training criterion called DICE: it increases diversity by reducing spurious correlations among features. The main idea is that features extracted from pairs of members should only share information useful for target class prediction without being conditionally redundant. Therefore, besides the classification loss with information bottleneck, we adversarially prevent features from being conditionally predictable from each other. We manage to reduce simultaneous errors while protecting class information. We obtain state-of-the-art accuracy results on CIFAR-10/100: for example, an ensemble of 5 networks trained with DICE matches an ensemble of 7 networks trained independently. We further analyze the consequences on calibration, uncertainty estimation, out-of-distribution detection and online co-distillation.
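To make the core idea concrete, here is a minimal, hypothetical sketch of "conditional redundancy" between two members' features. It is not the paper's method (DICE uses an adversarial neural estimator of conditional mutual information); instead, a simple ridge-regression "adversary" stands in for that estimator, measuring how predictable member 2's features are from member 1's features once the class label is given. All names and the toy data below are illustrative assumptions.

```python
import numpy as np

def conditional_redundancy(z1, z2, y_onehot, ridge=1e-3):
    """Toy proxy for conditional redundancy: the fraction of z2's variance
    a linear 'adversary' can explain from (z1, y). High values mean the
    two members' features share information beyond the class label."""
    X = np.hstack([z1, y_onehot, np.ones((len(z1), 1))])
    # Ridge-regularized least squares: the adversary's best linear guess of z2.
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ z2)
    resid = z2 - X @ W
    return 1.0 - resid.var() / z2.var()

rng = np.random.default_rng(0)
n, d = 1000, 4
y = rng.integers(0, 2, n)            # binary class labels
y1h = np.eye(2)[y]
spurious = rng.normal(size=(n, d))   # nuisance signal both members could latch onto
z1 = y[:, None] + spurious + 0.1 * rng.normal(size=(n, d))
z2_redundant = y[:, None] + spurious + 0.1 * rng.normal(size=(n, d))  # shares the nuisance
z2_diverse = y[:, None] + rng.normal(size=(n, d))                     # class signal only

red = conditional_redundancy(z1, z2_redundant, y1h)  # high: spurious correlation
div = conditional_redundancy(z1, z2_diverse, y1h)    # low: only class info is shared
```

In DICE's training objective, a penalty of this flavor (estimated adversarially, not in closed form) is what pushes members toward the `z2_diverse` regime: features still predict the class, but are no longer conditionally predictable from each other.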



