Hardware-Aware Machine Learning: Modeling and Optimization

09/14/2018
by Diana Marculescu, et al.

Recent breakthroughs in Deep Learning (DL) applications have made DL models a key component of almost every modern computing system. The increased popularity of DL applications deployed on a wide spectrum of platforms has resulted in a plethora of design challenges related to the constraints introduced by the hardware itself. What is the latency or energy cost of an inference made by a Deep Neural Network (DNN)? Is it possible to predict this latency or energy consumption before a model is even trained? If so, how can machine learners take advantage of such predictions to design hardware-optimal DNNs for deployment? From lengthening the battery life of mobile devices to reducing the runtime requirements of DL models executing in the cloud, the answers to these questions have drawn significant attention. One cannot optimize what is not properly modeled. It is therefore important to understand the hardware efficiency of DL models during serving, i.e., when making an inference, before even training the model. This key observation has motivated the use of predictive models that capture the hardware performance or energy efficiency of DL applications. Furthermore, DL practitioners face the task of designing the DNN model, i.e., of tuning the hyper-parameters of the DNN architecture, while optimizing for both the accuracy of the DL model and its hardware efficiency. Consequently, state-of-the-art methodologies have proposed hardware-aware hyper-parameter optimization techniques. In this paper, we provide a comprehensive assessment of state-of-the-art work and selected results on hardware-aware modeling and optimization for DL applications. We also highlight several open questions that are poised to give rise to novel hardware-aware designs in the next few years, as DL applications continue to significantly impact their associated hardware systems and platforms.
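To make the two ideas in the abstract concrete, the sketch below shows one simple form such a pipeline can take: a linear predictor of inference latency fit on architecture features (FLOP and parameter counts), then used for hardware-aware hyper-parameter selection under a latency budget. This is a minimal illustration, not the paper's method; all network names, feature values, and measured latencies are hypothetical placeholders.

```python
import numpy as np

# Hypothetical profiled networks: (FLOPs in millions, parameters in millions)
# paired with measured inference latency in ms on some target device.
features = np.array([
    [120.0, 2.3],
    [230.0, 4.1],
    [460.0, 8.2],
    [ 75.0, 1.1],
])
latency_ms = np.array([3.1, 5.8, 11.2, 2.0])

# Least-squares linear predictor: latency ~ w0 + w1*FLOPs + w2*params.
X = np.hstack([np.ones((len(features), 1)), features])
w, *_ = np.linalg.lstsq(X, latency_ms, rcond=None)

def predict_latency(flops_m, params_m):
    """Predict inference latency (ms) before the model is ever trained."""
    return float(w[0] + w[1] * flops_m + w[2] * params_m)

# Hardware-aware hyper-parameter selection: among candidate architectures,
# keep those whose predicted latency fits the deployment budget, then take
# the one with the best estimated accuracy. Values are illustrative.
candidates = [
    # (name, estimated accuracy, FLOPs_M, params_M)
    ("net-small",  0.91, 100.0,  1.5),
    ("net-medium", 0.94, 250.0,  4.5),
    ("net-large",  0.96, 900.0, 16.0),
]
LATENCY_BUDGET_MS = 10.0

feasible = [c for c in candidates
            if predict_latency(c[2], c[3]) <= LATENCY_BUDGET_MS]
best = max(feasible, key=lambda c: c[1])
```

In this toy setting, the largest (and most accurate) candidate is predicted to exceed the budget, so the selection returns the most accurate model that the hardware can actually serve within the constraint.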


