RingCNN: Exploiting Algebraically-Sparse Ring Tensors for Energy-Efficient CNN-Based Computational Imaging

04/19/2021
by Chao-Tsung Huang, et al.

In the era of artificial intelligence, convolutional neural networks (CNNs) are emerging as a powerful technique for computational imaging. They have shown superior quality for reconstructing fine textures from badly-distorted images and have the potential to bring next-generation cameras and displays to our daily life. However, CNNs demand intensive computing power for generating high-resolution videos, and they defy conventional sparsity techniques when rendering dense details. Therefore, finding new possibilities in regular sparsity is crucial for enabling large-scale deployment of CNN-based computational imaging. In this paper, we consider a fundamental but not yet well-explored approach – algebraic sparsity – for energy-efficient CNN acceleration. We propose to build CNN models based on ring algebra, which properly defines multiplication, addition, and non-linearity for n-tuples. The essential sparsity then follows immediately, e.g. an n-fold reduction in the number of real-valued weights. We define and unify several variants of ring algebras into a modeling framework, RingCNN, and compare them in terms of image quality and hardware complexity. On top of that, we further devise a novel ring algebra which minimizes complexity with a component-wise product and achieves the best quality using a directional ReLU. Finally, we implement an accelerator, eRingCNN, in two settings, n=2 and n=4 (50% and 75% sparsity, respectively), to support advanced denoising and super-resolution at up to 4K UHD 30 fps. Layout results show that the two designs can deliver an equivalent 41 TOPS using 3.76 W and 2.22 W, respectively. Compared to the real-valued counterpart, our ring convolution engines for n=2 achieve 2.00x energy efficiency and 2.08x area efficiency with similar or even better image quality. With n=4, the efficiency gains in energy and area further increase to 3.84x and 3.77x with only a 0.11 dB drop in PSNR.
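
To make the n-fold weight-reduction argument concrete, below is a minimal sketch (not the authors' code) of a ring convolution whose ring product is component-wise, followed by a tuple-wise "directional ReLU". The PyTorch layer names, the component-major channel layout, and the mean-direction gating in DirectionalReLU are illustrative assumptions; the paper's exact D-ReLU definition may differ.

```python
# Sketch of a component-wise-product ring convolution, assuming PyTorch and a
# component-major channel layout; gating direction in DirectionalReLU is an
# assumption for illustration only.
import torch
import torch.nn as nn


class RingConv2d(nn.Module):
    """Convolution over n-tuple (ring) channels with a component-wise product.

    Because the ring product is component-wise, the layer factors into n
    independent real convolutions (one per tuple component), so it needs
    n-times fewer real-valued weights than a dense real convolution over the
    same n*in_ring -> n*out_ring channel counts.
    """

    def __init__(self, in_ring, out_ring, n=4, kernel_size=3, padding=1):
        super().__init__()
        self.n = n
        # groups=n keeps the components independent: component i of every
        # output ring channel only sees component i of the input tuples.
        self.conv = nn.Conv2d(n * in_ring, n * out_ring, kernel_size,
                              padding=padding, groups=n, bias=False)

    def forward(self, x):  # x: (B, n*in_ring, H, W), component-major layout
        return self.conv(x)


class DirectionalReLU(nn.Module):
    """Hypothetical directional ReLU: keep or zero each n-tuple as a whole."""

    def __init__(self, n=4):
        super().__init__()
        self.n = n

    def forward(self, x):
        b, c, h, w = x.shape
        t = x.view(b, self.n, c // self.n, h, w)           # split into n-tuples
        gate = (t.mean(dim=1, keepdim=True) >= 0).to(x.dtype)  # assumed direction
        return (t * gate).view(b, c, h, w)


if __name__ == "__main__":
    n, in_ring, out_ring = 4, 16, 16
    layer = nn.Sequential(RingConv2d(in_ring, out_ring, n=n),
                          DirectionalReLU(n=n))
    y = layer(torch.randn(1, n * in_ring, 64, 64))
    dense = nn.Conv2d(n * in_ring, n * out_ring, 3, padding=1, bias=False)
    ring_w = sum(p.numel() for p in layer.parameters())
    print(y.shape, dense.weight.numel() / ring_w)          # ~n-times fewer weights
```

The grouped convolution is only a convenient way to express the component-wise ring product in standard frameworks; a dedicated accelerator such as eRingCNN would instead exploit the shared n-tuple structure directly in its convolution engines.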

research
07/16/2021

S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration

Exploiting sparsity is a key technique in accelerating quantized convolu...
research
01/18/2018

ECA: Energy-Efficient FPGA-based Convolutional Neural Networks Architecture for Single Image Super-Resolution

Convolutional neural networks (CNN) show the excellent performance compa...
research
10/13/2019

eCNN: A Block-Based and Highly-Parallel CNN Accelerator for Edge Inference

Convolutional neural networks (CNNs) have recently demonstrated superior...
research
01/18/2018

An Energy-Efficient FPGA-based Deconvolutional Neural Networks Accelerator for Single Image Super-Resolution

Convolutional neural networks (CNNs) demonstrate excellent performance a...
research
10/13/2019

ERNet Family: Hardware-Oriented CNN Models for Computational Imaging Using Block-Based Inference

Convolutional neural networks (CNNs) demand huge DRAM bandwidth for comp...
research
02/24/2022

Highly-Efficient Binary Neural Networks for Visual Place Recognition

VPR is a fundamental task for autonomous navigation as it enables a robo...
research
06/12/2020

AlgebraNets

Neural networks have historically been built layerwise from the set of f...
