
When Ensembling Smaller Models is More Efficient than Single Large Models

by   Dan Kondratyuk, et al.

Ensembling is a simple and popular technique for boosting evaluation performance by training multiple models (e.g., with different initializations) and aggregating their predictions. This approach is commonly reserved for the largest models, as it is widely held that increasing the model size yields a more substantial reduction in error than ensembling smaller models. However, we show through experiments on CIFAR-10 and ImageNet that ensembles can outperform single models, achieving both higher accuracy and requiring fewer total FLOPs to compute, even when the individual models' weights and hyperparameters are highly optimized. Furthermore, this gap widens as models grow large. This presents an interesting observation: output diversity in ensembling can often be more efficient than training larger models, especially when the models approach the size that their dataset can support. Instead of following the common practice of tuning a single large model, one can use ensembles as a more flexible trade-off between a model's inference speed and accuracy. This also potentially eases hardware design, e.g., by making it easier to parallelize the model across multiple workers for real-time or distributed inference.
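The aggregation the abstract describes is typically an average of the per-model class probabilities followed by an argmax. A minimal sketch of this (the toy logits below are illustrative, not from the paper's experiments):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_list):
    # Average the per-model class probabilities, then take the argmax.
    mean_probs = np.mean([softmax(l) for l in logits_list], axis=0)
    return mean_probs.argmax(axis=-1)

# Toy example: three "models" (e.g., same architecture, different
# initializations) emit logits for two inputs over three classes.
m1 = np.array([[2.0, 1.0, 0.0], [0.0, 3.0, 1.0]])
m2 = np.array([[1.5, 2.0, 0.0], [0.0, 2.5, 1.0]])
m3 = np.array([[2.5, 0.5, 0.0], [0.5, 2.0, 1.0]])
print(ensemble_predict([m1, m2, m3]))  # -> [0 1]
```

Note the cost accounting implied by the abstract: the ensemble's inference cost is simply the sum of its members' FLOPs, and each member can run on a separate worker with only the final probability average requiring communication, which is what makes the parallelization argument straightforward.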

