SIMCNN – Exploiting Computational Similarity to Accelerate CNN Training in Hardware

10/28/2021
by Vahid Janfaza, et al.

Convolutional neural networks (CNNs) are computation-intensive to train. Training consists of a substantial number of multidimensional dot products between many kernels and inputs. We observe that there are notable similarities among the vectors extracted from inputs (i.e., input vectors). If one input vector is similar to another, its computations with the kernels are also similar to those of the other and can therefore be skipped by reusing the already-computed results. Based on this insight, we propose a novel scheme based on locality-sensitive hashing (LSH) to exploit the similarity of computations during CNN training in a hardware accelerator. The proposed scheme, called SIMCNN, uses a cache (SIMCACHE) to store LSH signatures of recent input vectors along with their computed results. If the LSH signature of a new input vector matches that of a vector already in the SIMCACHE, the already-computed result is reused for the new vector. SIMCNN is the first work that exploits computational similarity for accelerating CNN training in hardware. The paper presents a detailed design, workflow, and implementation of SIMCNN. Our experimental evaluation with four different deep learning models shows that SIMCNN saves a significant number of computations and therefore improves training time by up to 43%.
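To make the lookup-or-compute mechanism in the abstract concrete, here is a minimal software sketch of the idea. It is only an illustration, not the paper's hardware design: the random-hyperplane (SimHash) signature, the 16-bit signature width, the unbounded dictionary cache, and the names SimCache and dot_with_kernels are all assumptions introduced for this example.

```python
import numpy as np

class SimCache:
    """Toy model of a signature cache: LSH signature -> computed results."""

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        # Random hyperplanes give a sign-based LSH (SimHash) signature:
        # similar input vectors are likely to share the same bit pattern.
        self.planes = rng.standard_normal((n_bits, dim))
        self.cache = {}

    def signature(self, vec):
        bits = (self.planes @ vec) >= 0
        return bits.tobytes()  # hashable cache key

    def dot_with_kernels(self, vec, kernels):
        sig = self.signature(vec)
        if sig in self.cache:             # hit: reuse, skip the dot products
            return self.cache[sig]
        result = kernels @ vec            # miss: compute and remember
        self.cache[sig] = result
        return result

# Usage: two nearly identical input vectors share one computation.
dim, n_kernels = 27, 64                   # e.g., a 3x3x3 receptive field
rng = np.random.default_rng(1)
kernels = rng.standard_normal((n_kernels, dim))
cache = SimCache(dim)
v1 = rng.standard_normal(dim)
v2 = v1 + 1e-6                            # a very similar input vector
r1 = cache.dot_with_kernels(v1, kernels)  # computed and cached
r2 = cache.dot_with_kernels(v2, kernels)  # almost certainly a cache hit
```

The sketch mirrors the flow the abstract describes: on a signature match the dot products with the kernels are skipped entirely, which is where the computation savings come from.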

Related research

05/02/2022 – VSCNN: Convolution Neural Network Accelerator With Vector Sparsity
Hardware accelerator for convolution neural network (CNNs) enables real ...

04/18/2018 – UCNN: Exploiting Computational Reuse in Deep Neural Networks via Weight Repetition
Convolutional Neural Networks (CNNs) have begun to permeate all corners ...

10/12/2016 – Fast Training of Convolutional Neural Networks via Kernel Rescaling
Training deep Convolutional Neural Networks (CNN) is a time consuming ta...

12/22/2022 – Accelerating CNN inference on long vector architectures via co-design
CPU-based inference can be an alternative to off-chip accelerators, and ...

08/18/2020 – One-pixel Signature: Characterizing CNN Models for Backdoor Detection
We tackle the convolution neural networks (CNNs) backdoor detection prob...

05/02/2022 – Efficient Accelerator for Dilated and Transposed Convolution with Decomposition
Hardware acceleration for dilated and transposed convolution enables rea...

05/24/2018 – Backpropagation with N-D Vector-Valued Neurons Using Arbitrary Bilinear Products
Vector-valued neural learning has emerged as a promising direction in de...
