Hardware-efficient Residual Networks for FPGAs

02/02/2021
by Olivia Weng, et al.

Residual networks (ResNets) employ skip connections, which reuse activations from earlier layers, to improve training convergence, but these skip connections create challenges for hardware implementations of ResNets. The hardware must either stall, waiting for a skip connection's activations to be consumed before processing more incoming data, or buffer those activations elsewhere. Without skip connections, ResNets would be more hardware-efficient. Thus, we present a teacher-student learning method that gradually prunes away all of a ResNet's skip connections, constructing a network we call NonResNet. We show that when implemented for FPGAs, NonResNet decreases ResNet's BRAM utilization by 9% and LUT utilization by 3%.
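To make the idea concrete, here is a minimal sketch of how skip connections could be gradually pruned under teacher-student learning: each residual block scales its skip path by a factor alpha that is annealed from 1 to 0, while a distillation loss keeps the skip-free student close to the original ResNet teacher. This is not the paper's actual code; the block structure, the linear alpha schedule, the KL distillation loss, and the names PrunableResidualBlock and distillation_step are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrunableResidualBlock(nn.Module):
    """Residual block whose skip connection is scaled by alpha in [0, 1]."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.alpha = 1.0  # 1.0 = full skip connection, 0.0 = skip fully pruned

    def forward(self, x):
        out = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
        # As alpha -> 0, the block no longer depends on its input x,
        # so the hardware no longer has to buffer it for the skip path.
        return F.relu(out + self.alpha * x)

def distillation_step(student, teacher, x, optimizer, epoch, total_epochs, T=4.0):
    """One teacher-student step while annealing alpha toward 0 (assumed schedule)."""
    alpha = max(0.0, 1.0 - epoch / (0.8 * total_epochs))  # linear decay, an assumption
    for m in student.modules():
        if isinstance(m, PrunableResidualBlock):
            m.alpha = alpha
    with torch.no_grad():
        t_logits = teacher(x)  # frozen teacher: the original ResNet
    s_logits = student(x)
    # Standard KL distillation loss: match the softened teacher distribution.
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), alpha

Once alpha reaches zero, the skip path is dead code: the FPGA datapath no longer needs to stall on or buffer each block's input activations, which is where the reported BRAM and LUT savings come from.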
