High Performance Zero-Memory Overhead Direct Convolutions

09/20/2018
by Jiyuan Zhang, et al.

The computation of convolution layers in deep neural networks typically relies on high performance routines that trade space for time, using additional memory (either for packing purposes or as part of the algorithm itself) to improve performance. The problems with such an approach are two-fold. First, these routines incur additional memory overhead, which reduces the overall size of the network that can fit on embedded devices with limited memory capacity. Second, these high performance routines were not optimized for performing convolution, which means that the performance obtained is usually less than conventionally expected. In this paper, we demonstrate that direct convolution, when implemented correctly, eliminates all memory overhead and yields performance that is between 10% and 400% better than existing high performance implementations of convolution layers on conventional and embedded CPU architectures. We also show that a high performance direct convolution exhibits better scaling behavior, i.e., it suffers a smaller performance drop as the number of threads increases.
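For context, a direct convolution computes every output element straight from the input and weight tensors, with no lowered (im2col) copy or packed workspace. The C sketch below illustrates only that zero-workspace loop structure under simplifying assumptions (unit stride, no padding, hypothetical argument names); it is not the paper's kernel, whose contribution lies in the data layouts, loop ordering, and register/cache blocking that make such a loop nest fast.

```c
/*
 * Minimal sketch of a direct convolution with zero memory overhead:
 * no im2col buffer or packing workspace is ever allocated.
 * Assumptions (not from the paper): unit stride, no padding,
 * dense row-major tensors.
 *
 *   input   : c_in  x h_in  x w_in
 *   weights : c_out x c_in  x k_h x k_w
 *   output  : c_out x h_out x w_out, with h_out = h_in - k_h + 1, etc.
 */
void direct_conv(const float *in, const float *wt, float *out,
                 int c_in, int h_in, int w_in,
                 int c_out, int k_h, int k_w)
{
    int h_out = h_in - k_h + 1;
    int w_out = w_in - k_w + 1;

    for (int co = 0; co < c_out; ++co)
        for (int y = 0; y < h_out; ++y)
            for (int x = 0; x < w_out; ++x) {
                float acc = 0.0f;
                /* Accumulate directly from the input tensor; nothing
                   is copied or repacked before the multiply-adds. */
                for (int ci = 0; ci < c_in; ++ci)
                    for (int ky = 0; ky < k_h; ++ky)
                        for (int kx = 0; kx < k_w; ++kx)
                            acc += in[(ci * h_in + (y + ky)) * w_in + (x + kx)]
                                 * wt[((co * c_in + ci) * k_h + ky) * k_w + kx];
                out[(co * h_out + y) * w_out + x] = acc;
            }
}
```

A GEMM-based implementation would instead materialize an im2col buffer of size (c_in * k_h * k_w) x (h_out * w_out), which is the space-for-time trade-off the abstract refers to; the optimized direct approach avoids that allocation entirely while restructuring the loops for vectorization and parallelism.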
