When Ensembling Smaller Models is More Efficient than Single Large Models

05/01/2020
by Dan Kondratyuk, et al.

Ensembling is a simple and popular technique for boosting evaluation performance: train multiple models (e.g., from different random initializations) and aggregate their predictions. This approach is typically reserved for the largest models, since it is commonly held that increasing model size yields a more substantial reduction in error than ensembling smaller models. However, in experiments on CIFAR-10 and ImageNet we show that ensembles can outperform single models, achieving higher accuracy while requiring fewer total FLOPs to compute, even when the individual models' weights and hyperparameters are highly optimized. Moreover, this gap widens as models grow larger. This suggests that output diversity from ensembling can often be a more efficient use of compute than training larger models, especially once models approach the capacity their dataset can support. Instead of following the common practice of tuning a single large model, one can use ensembles as a more flexible trade-off between inference speed and accuracy. This can also simplify hardware design, e.g., by making it easier to parallelize a model across multiple workers for real-time or distributed inference.
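As a concrete illustration (not code from the paper), the sketch below shows prediction-level ensembling in PyTorch: several copies of the same small architecture, trained from different random initializations, whose softmax outputs are averaged at inference time. The ResNet-18 members, the member count, the dummy CIFAR-10-shaped batch, and the ensemble_predict helper are all illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of prediction-averaging ensembling (assumes PyTorch/torchvision;
# the architecture, member count, and data are illustrative placeholders).
import torch
import torch.nn.functional as F
import torchvision

def ensemble_predict(models, images):
    """Average the softmax outputs of independently trained models.

    Inference cost is the sum of the members' FLOPs, so an ensemble of
    small models can cost fewer total FLOPs than one large model while,
    per the paper's experiments, matching or exceeding its accuracy.
    """
    with torch.no_grad():
        member_probs = [F.softmax(m(images), dim=-1) for m in models]
    return torch.stack(member_probs).mean(dim=0)

# Train N copies of the same small architecture from different random
# initializations -- the only source of member diversity assumed here.
models = [torchvision.models.resnet18(num_classes=10) for _ in range(3)]
# ... train each member independently (e.g., on CIFAR-10) ...
for m in models:
    m.eval()

images = torch.randn(8, 3, 32, 32)      # dummy batch shaped like CIFAR-10
avg_probs = ensemble_predict(models, images)
predictions = avg_probs.argmax(dim=-1)  # final ensemble label per example
```

Because each member runs independently, the forward passes can also be dispatched to separate workers, which is the parallelization benefit the abstract alludes to.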


