Energy Consumption of Neural Networks on NVIDIA Edge Boards: an Empirical Model

10/04/2022
by Seyyidahmed Lahmer, et al.

Recently, there has been a trend of shifting the execution of deep learning inference tasks toward the edge of the network, closer to the user, to reduce latency and preserve data privacy. At the same time, growing interest is being devoted to the energy sustainability of machine learning. At the intersection of these trends lies the energy characterization of machine learning at the edge, which is attracting increasing attention. Unfortunately, calculating the energy consumption of a given neural network during inference is complicated by the heterogeneity of the possible underlying hardware implementations. In this work, we therefore aim to profile the energy consumption of inference tasks on some modern edge nodes and to derive simple but realistic models. To this end, we performed a large number of experiments to collect the energy consumption of convolutional and fully connected layers on two well-known NVIDIA edge boards, namely the Jetson TX2 and the Xavier. From the measurements, we then distilled a simple, practical model that can estimate the energy consumption of a given inference task on the considered boards. We believe that this model can be used in many contexts, for instance to guide the search for efficient architectures in Neural Architecture Search, as a heuristic in neural network pruning, to find energy-efficient offloading strategies in a split computing context, or simply to evaluate the energy performance of deep neural network architectures.
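
The abstract does not reproduce the fitted model itself, so the sketch below is only a rough illustration of how a per-layer energy model of this kind could be applied: it estimates the energy of a small network from its multiply-accumulate (MAC) counts using a linear form with board-specific coefficients. The linear form, the function names, and every numeric value are assumptions for illustration, not coefficients or equations taken from the paper.

```python
# Hypothetical per-layer energy model in the spirit of the paper:
# energy estimated from layer workload (MACs) with board-specific
# coefficients. All values below are illustrative placeholders,
# NOT the coefficients fitted in the paper.

from dataclasses import dataclass


@dataclass
class EnergyCoefficients:
    """Board-specific coefficients (hypothetical placeholders)."""
    energy_per_mac_mj: float   # incremental energy per multiply-accumulate (mJ)
    layer_overhead_mj: float   # fixed per-layer overhead (mJ)


def conv2d_macs(in_ch, out_ch, kernel, out_h, out_w):
    """Multiply-accumulate count of a standard 2D convolution layer."""
    return in_ch * out_ch * kernel * kernel * out_h * out_w


def fc_macs(in_features, out_features):
    """Multiply-accumulate count of a fully connected layer."""
    return in_features * out_features


def layer_energy_mj(macs, coeffs):
    """Estimated energy of one layer under the assumed linear model."""
    return coeffs.energy_per_mac_mj * macs + coeffs.layer_overhead_mj


if __name__ == "__main__":
    # Placeholder numbers for illustration only.
    tx2 = EnergyCoefficients(energy_per_mac_mj=1e-7, layer_overhead_mj=0.5)

    # MAC counts of a toy network: two conv layers and one FC layer.
    network = [
        conv2d_macs(3, 64, kernel=3, out_h=224, out_w=224),
        conv2d_macs(64, 128, kernel=3, out_h=112, out_w=112),
        fc_macs(128 * 56 * 56, 1000),
    ]
    total_mj = sum(layer_energy_mj(m, tx2) for m in network)
    print(f"Estimated inference energy: {total_mj:.1f} mJ (illustrative)")
```

Such a sketch shows how a profiled model could plug into higher-level tools (e.g., as a cost term in Neural Architecture Search or a pruning heuristic); the actual functional form and coefficients would come from the measurements reported in the paper.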

Related research

- Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks (11/04/2016)
  Recently deep neural networks have received considerable attention due t...
- Partition Pruning: Parallelization-Aware Pruning for Deep Neural Networks (01/21/2019)
  Parameters of recent neural networks require a huge amount of memory. Th...
- Optimal Machine Intelligence Near the Edge of Chaos (09/11/2019)
  It has long been suggested that living systems, in particular the brain,...
- Exploring Deep Neural Networks on Edge TPU (10/17/2021)
  This paper explores the performance of Google's Edge TPU on feed forward...
- Energy-efficient and Privacy-aware Social Distance Monitoring with Low-resolution Infrared Sensors and Adaptive Inference (04/22/2022)
  Low-resolution infrared (IR) Sensors combined with machine learning (ML)...
- Multi-user Co-inference with Batch Processing Capable Edge Server (06/03/2022)
  Graphics processing units (GPUs) can improve deep neural network inferen...
- Compute and Energy Consumption Trends in Deep Learning Inference (09/12/2021)
  The progress of some AI paradigms such as deep learning is said to be li...
