Tailor: Altering Skip Connections for Resource-Efficient Inference

01/18/2023
by Olivia Weng, et al.

Deep neural networks use skip connections to improve training convergence. However, these skip connections are costly in hardware, requiring extra buffers and increasing on- and off-chip memory utilization and bandwidth requirements. In this paper, we show that skip connections can be optimized for hardware when tackled with a hardware-software codesign approach. We argue that while a network's skip connections are needed for the network to learn, they can later be removed or shortened to provide a more hardware-efficient implementation with minimal to no accuracy loss. We introduce Tailor, a codesign tool whose hardware-aware training algorithm gradually removes or shortens a fully trained network's skip connections to lower their hardware cost. The optimized hardware designs improve resource utilization by up to 34% for BRAMs and 16% for LUTs.
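To illustrate the structural difference Tailor exploits, here is a minimal pure-Python sketch of a residual block whose identity skip connection can be disabled. The function names, toy weights, and the `skip` flag are illustrative assumptions for this sketch, not Tailor's actual API or training procedure; the point is only that with the skip present, the input `x` must stay buffered until the end of the block, which is what costs extra memory in hardware.

```python
def relu(v):
    """Element-wise ReLU on a plain Python list."""
    return [max(0.0, a) for a in v]

def matvec(W, v):
    """Dense matrix-vector product: one toy 'layer'."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def residual_block(x, W1, W2, skip=True):
    """A toy two-layer residual block.

    With skip=True, the input x is added back to the output, so a
    hardware implementation must buffer x for the whole block. With
    skip=False (the removed-skip variant), no such buffer is needed.
    """
    h = relu(matvec(W1, x))
    out = matvec(W2, h)
    if skip:
        out = [o + xi for o, xi in zip(out, x)]  # identity skip path
    return relu(out)

# Identity weights keep the arithmetic easy to check by hand.
I2 = [[1.0, 0.0], [0.0, 1.0]]
print(residual_block([1.0, 2.0], I2, I2, skip=True))   # prints [2.0, 4.0]
print(residual_block([1.0, 2.0], I2, I2, skip=False))  # prints [1.0, 2.0]
```

Tailor's contribution is not the flag itself but the hardware-aware training schedule that gradually removes or shortens the skips of an already-trained network so that the `skip=False`-style implementation retains accuracy.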


Related research

- 02/02/2021, Hardware-efficient Residual Networks for FPGAs: Residual networks (ResNets) employ skip connections in their networks – ...
- 11/06/2022, A Framework for Designing Efficient Deep Learning-Based Genomic Basecallers: Nanopore sequencing generates noisy electrical signals that need to be c...
- 06/11/2023, Efficient Skip Connections Realization for Secure Inference on Encrypted Data: Homomorphic Encryption (HE) is a cryptographic tool that allows performi...
- 10/11/2016, An Empirical Exploration of Skip Connections for Sequential Tagging: In this paper, we empirically explore the effects of various kinds of sk...
- 01/31/2017, Skip Connections Eliminate Singularities: Skip connections made the training of very deep networks possible and ha...
- 05/28/2019, Towards Efficient Neural Networks On-a-chip: Joint Hardware-Algorithm Approaches: Machine learning algorithms have made significant advances in many appli...
- 10/23/2018, Analysis of Atomistic Representations Using Weighted Skip-Connections: In this work, we extend the SchNet architecture by using weighted skip c...
