HUGE2: a Highly Untangled Generative-model Engine for Edge-computing

07/25/2019
by Feng Shi, et al.

Generative models are among the most prominent topics in recent deep learning research. Two branches of deep learning models, generative networks (GANs, VAEs) and semantic segmentation networks, rely heavily on upsampling operations, especially the transposed convolution and the dilated convolution. These two convolutions differ intrinsically from the standard convolution: they insert zeros into the input feature maps or into the kernels, respectively. This distinct nature severely degrades the performance of existing deep learning engines and frameworks, such as Darknet, TensorFlow, and PyTorch, which are developed mainly for the standard convolution. Another trend in the deep learning realm is deploying models onto edge/embedded devices, where memory is scarce. In this work, we propose a Highly Untangled Generative-model Engine for Edge-computing, or HUGE2, which accelerates these two special convolutions on edge-computing platforms by decomposing the kernels and untangling the resulting smaller convolutions into basic matrix multiplications. Our methods use a much smaller memory footprint, and hence far fewer memory accesses, and their data access patterns dramatically increase the reuse of data already fetched into caches, improving cache locality. Our engine achieves a speedup of nearly 5x on embedded CPUs and around 10x on embedded GPUs, and reduces memory accesses by more than 50%.
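The zero-insertion structure the abstract refers to can be made concrete with a small sketch. The NumPy snippet below (an illustrative reconstruction, not the paper's HUGE2 implementation; all function names are our own) checks the two identities the abstract relies on: a strided transposed convolution equals a standard convolution over a zero-stuffed, fully padded input with the 180-degree-rotated kernel, and a dilated convolution equals a standard convolution with a zero-stuffed kernel.

```python
import numpy as np

def conv2d(x, k):
    """Plain "valid" 2-D cross-correlation: no stride, no padding."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def zero_stuff(a, s):
    """Insert s-1 zeros between adjacent elements of a 2-D array."""
    out = np.zeros((s * (a.shape[0] - 1) + 1, s * (a.shape[1] - 1) + 1))
    out[::s, ::s] = a
    return out

def transposed_conv2d(x, k, stride):
    """Reference transposed convolution: scatter-add of scaled kernel copies."""
    kh, kw = k.shape
    out = np.zeros((stride * (x.shape[0] - 1) + kh,
                    stride * (x.shape[1] - 1) + kw))
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * k
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
k = rng.standard_normal((3, 3))

# Transposed conv (stride 2) == standard conv of the zero-stuffed input,
# padded by kh-1 = 2 on every side, with the flipped kernel.
stuffed = np.pad(zero_stuff(x, 2), 2)
assert np.allclose(conv2d(stuffed, k[::-1, ::-1]),
                   transposed_conv2d(x, k, 2))

# Dilated conv (dilation 2) == standard conv with a zero-stuffed kernel.
y = rng.standard_normal((7, 7))
dil = conv2d(y, zero_stuff(k, 2))
direct = sum(y[2 * p:2 * p + 3, 2 * q:2 * q + 3] * k[p, q]
             for p in range(3) for q in range(3))
assert np.allclose(dil, direct)
```

The asserts pass, but note how wasteful the naive route is: most multiply-accumulates in `conv2d(stuffed, ...)` touch inserted zeros. Skipping that wasted work by decomposing the kernel into smaller dense convolutions is exactly the opportunity the engine exploits.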

