Rethinking Pareto Frontier for Performance Evaluation of Deep Neural Networks

02/18/2022
by Vahid Partovi Nia, et al.

Recent efforts in deep learning have shown considerable progress in redesigning deep learning models for low-resource and edge devices. The performance optimization of a deep learning model is conducted either manually, through automatic architecture search, or by a combination of both. The throughput and power consumption of deep learning models strongly depend on the target hardware. We propose a multi-dimensional Pareto frontier that redefines the efficiency measure via multi-objective optimization, where variables such as power consumption, latency, and accuracy play a relative role in defining a dominant model. Furthermore, a random version of the multi-dimensional Pareto frontier is introduced to mitigate the uncertainty of accuracy, latency, and throughput variations of deep learning models across different experimental setups. These two breakthroughs provide an objective benchmarking method for a wide range of deep learning models. We apply our novel multi-dimensional stochastic relative efficiency measure to a wide range of deep image classification models trained on ImageNet data. Thanks to this new approach, competing variables of a stochastic nature are combined simultaneously in a single relative efficiency measure. This allows us to rank deep models that run efficiently on different computing hardware, and to combine inference efficiency with training efficiency objectively.
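To make the dominance idea concrete, here is a minimal sketch, not the paper's implementation, of a three-objective Pareto frontier together with a resampling-based stand-in for its random variant. The model names, measurements, and noise scales below are hypothetical, and the bootstrap-style perturbation merely approximates the role of the paper's stochastic frontier; the exact construction, and the relative efficiency score built on it, are given in the full text.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical models: columns are (error = 1 - accuracy, latency in ms,
# power in W); all three objectives are "lower is better".
models = {
    "model_a": np.array([0.24, 12.0, 3.1]),
    "model_b": np.array([0.21, 18.0, 4.0]),
    "model_c": np.array([0.30,  7.0, 2.2]),
    "model_d": np.array([0.22, 20.0, 4.5]),  # dominated by model_b
}

def dominates(u, v):
    """u dominates v if u is no worse in every objective and better in one."""
    return np.all(u <= v) and np.any(u < v)

def pareto_frontier(points):
    """Return the names of the non-dominated points."""
    return [
        name for name, p in points.items()
        if not any(dominates(q, p) for other, q in points.items() if other != name)
    ]

print("Deterministic frontier:", pareto_frontier(models))

# Stochastic stand-in: perturb each objective with Gaussian noise (a proxy
# for run-to-run variation in accuracy, latency, and power) and record how
# often each model lands on the frontier across draws.
n_draws = 1000
noise_scale = np.array([0.01, 1.0, 0.2])  # hypothetical per-objective noise
counts = {name: 0 for name in models}
for _ in range(n_draws):
    noisy = {name: p + rng.normal(0.0, noise_scale) for name, p in models.items()}
    for name in pareto_frontier(noisy):
        counts[name] += 1

for name, c in counts.items():
    print(f"{name}: on frontier in {c / n_draws:.0%} of draws")

Under noise, a model such as model_d that is narrowly dominated in the deterministic frontier can still appear on the frontier in some fraction of the draws; this run-to-run variation across experimental setups is precisely what the random Pareto frontier is meant to capture.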

Related research

08/21/2021 - DeepEdgeBench: Benchmarking Deep Neural Networks on Edge Devices
EdgeAI (Edge computing based Artificial Intelligence) has been most acti...

11/06/2018 - On the Resource Consumption of M2M Random Access: Efficiency and Pareto Optimality
The advent of Machine-to-Machine communication has sparked a new wave of...

04/25/2023 - Optimizing Deep Learning Models For Raspberry Pi
Deep learning models have become increasingly popular for a wide range o...

11/29/2018 - TEA-DNN: the Quest for Time-Energy-Accuracy Co-optimized Deep Neural Networks
Embedded deep learning platforms have witnessed two simultaneous improve...

11/24/2020 - Benchmarking Inference Performance of Deep Learning Models on Analog Devices
Analog hardware implemented deep learning models are promising for compu...

12/22/2022 - EuclidNets: An Alternative Operation for Efficient Inference of Deep Learning Models
With the advent of deep learning application on edge devices, researcher...

11/20/2022 - MEESO: A Multi-objective End-to-End Self-Optimized Approach for Automatically Building Deep Learning Models
Deep learning has been widely used in various applications from differen...
