Im2win: Memory Efficient Convolution On SIMD Architectures

06/25/2023
by Shuai Lu, et al.

Convolution is the most expensive operation in neural networks, so its performance is critical to the overall performance of the network. Commonly used convolution approaches include general matrix multiplication (GEMM)-based convolution, which relies on the im2col data transformation, and direct convolution, which uses no data transformation at all. However, the im2col transformation inflates the memory footprint by at least 2× relative to the original input, limiting the size of neural network models that can run on memory-limited systems. Direct convolution, while consuming less memory, usually performs poorly because of nonconsecutive memory access. To solve these problems, we propose a new memory-efficient data transformation algorithm called im2win. The algorithm refactors a row of square or rectangular dot-product windows of the input image and flattens the unique elements within these windows into a row of the output tensor, which enables consecutive memory access and data reuse and thus greatly reduces the memory overhead. Furthermore, we propose a high-performance im2win-based convolution algorithm with various optimizations, including vectorization and loop reordering. Our experimental results show that our algorithm reduces the memory overhead by 41.6% on average compared to PyTorch's im2col-based convolution implementation, and achieves average speedups of 3.6× and 5.3× over im2col-based convolution and direct convolution without data transformation, respectively.
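To make the layout concrete, below is a minimal NumPy sketch of a single-channel im2win-style transform and the convolution loop that consumes it. The function names (im2win_single_channel, conv_from_win) and the column-major flattening order within each window row are illustrative assumptions, not the paper's implementation, which is a vectorized SIMD kernel with further loop optimizations; the sketch only shows why each dot-product window becomes one consecutive read.

```python
import numpy as np

def im2win_single_channel(x, kh, stride=1):
    """Sketch of an im2win-style transform for one input channel.

    For each row of output windows, the unique input elements covered
    by that window row (a kh x W slab) are flattened column-major into
    one row of the output tensor. Stored this way, every kh x kw window
    occupies a contiguous segment of length kh*kw, so the dot product
    with the filter reads consecutive memory, and horizontally
    overlapping windows reuse the shared elements.
    """
    H, W = x.shape
    h_out = (H - kh) // stride + 1
    out = np.empty((h_out, W * kh), dtype=x.dtype)
    for i in range(h_out):
        slab = x[i * stride : i * stride + kh, :]  # kh x W unique elements
        out[i] = slab.T.ravel()                    # column-major flatten
    return out

def conv_from_win(win, w, W, kh, kw, stride=1):
    """Convolve using the im2win layout: each window is contiguous."""
    h_out = win.shape[0]
    w_out = (W - kw) // stride + 1
    f = w.T.ravel()                                # match column-major layout
    y = np.empty((h_out, w_out), dtype=win.dtype)
    for i in range(h_out):
        for j in range(w_out):
            s = j * stride * kh                    # window start in the row
            y[i, j] = win[i, s : s + kh * kw] @ f  # one consecutive read
    return y

# Example: transform a 6x6 input and convolve with a 3x3 filter.
x = np.arange(36, dtype=np.float32).reshape(6, 6)
w = np.ones((3, 3), dtype=np.float32)
win = im2win_single_channel(x, kh=3)
y = conv_from_win(win, w, W=6, kh=3, kw=3)
```

In this sketch, the im2win tensor holds h_out × (kh × W) elements for stride 1, versus h_out × w_out × (kh × kw) for im2col, because windows that overlap horizontally within a row share their unique elements rather than duplicating them; that sharing is the source of the memory savings claimed above.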
