Does Form Follow Function? An Empirical Exploration of the Impact of Deep Neural Network Architecture Design on Hardware-Specific Acceleration

07/08/2021
by Saad Abbasi et al.

The fine-grained relationship between form and function with respect to deep neural network architecture design and hardware-specific acceleration is one area that is not well studied in the research literature, with form often dictated by accuracy rather than hardware function. In this study, a comprehensive empirical exploration is conducted to investigate the impact of deep neural network architecture design on the degree of inference speedup that can be achieved via hardware-specific acceleration. More specifically, we empirically study the impact of a variety of commonly used macro-architecture design patterns across different architectural depths through the lens of OpenVINO microprocessor-specific and GPU-specific acceleration. Experimental results show that while leveraging hardware-specific acceleration achieved an average inference speed-up of 380%, the degree of speed-up varied drastically depending on the macro-architecture design pattern, with the greatest speedup (550%) achieved on the depthwise bottleneck convolution design pattern. We also examine the correlation between FLOPs requirement, level-3 cache efficacy, and network latency with increasing architectural depth and width. Finally, we analyze the inference time reductions obtained with hardware-specific acceleration, compared to native deep learning frameworks, across a wide variety of hand-crafted deep convolutional neural network architecture designs as well as architectures found via neural architecture search strategies. We found the DARTS-derived architecture to benefit most from hardware-specific software acceleration (1200%), while the depthwise bottleneck convolution-based MobileNet-V2 had the lowest overall inference time, at around 2.4 ms.
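The depthwise patterns discussed above trade standard convolutions for cheaper factorized forms. As a rough illustration of the multiply-accumulate accounting behind a FLOPs analysis of this kind, the sketch below compares a standard 3x3 convolution against a depthwise-separable factorization (a depthwise 3x3 followed by a 1x1 pointwise convolution, the building block that bottleneck designs such as MobileNet-V2 extend with 1x1 expansion and projection layers). The layer shape used is illustrative only and is not taken from the paper.

```python
def conv_flops(h, w, c_in, c_out, k=3):
    """Multiply-accumulates for a standard k x k convolution over an h x w feature map."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_flops(h, w, c_in, c_out, k=3):
    """Depthwise k x k (one filter per input channel) followed by a 1x1 pointwise conv."""
    depthwise = h * w * c_in * k * k   # per-channel spatial filtering
    pointwise = h * w * c_in * c_out   # 1x1 cross-channel mixing
    return depthwise + pointwise

# Hypothetical layer shape: 28x28 feature map, 64 -> 64 channels.
std = conv_flops(28, 28, 64, 64)
sep = depthwise_separable_flops(28, 28, 64, 64)
print(f"standard: {std:,}  separable: {sep:,}  reduction: {std / sep:.1f}x")
```

For a 3x3 kernel the factorization cuts multiply-accumulates by close to a factor of k² when the channel count is large, which is one reason FLOPs alone (as the study observes via cache efficacy and latency measurements) does not fully predict realized speedup on accelerated hardware.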

