Deformable Butterfly: A Highly Structured and Sparse Linear Transform

03/25/2022
by   Rui Lin, et al.

We introduce a new kind of linear transform named Deformable Butterfly (DeBut) that generalizes the conventional butterfly matrices and can be adapted to various input-output dimensions. It inherits the fine-to-coarse-grained learnable hierarchy of traditional butterflies, and when deployed in neural networks, the prominent structure and sparsity of a DeBut layer constitute a new way to compress the network. We apply DeBut as a drop-in replacement for standard fully connected and convolutional layers, and demonstrate its superiority in homogenizing a neural network while conferring favorable properties such as light weight and low inference complexity, without compromising accuracy. The natural complexity-accuracy tradeoff arising from the myriad deformations of a DeBut layer also opens new avenues for analytical and practical research. The code and appendix are publicly available at: https://github.com/ruilin0212/DeBut.
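To make the structure and sparsity concrete: the conventional butterfly factorization that DeBut generalizes writes a dense N×N transform (N a power of two) as a product of log2(N) sparse factors, each with exactly two nonzeros per row, so the parameter count drops from N² to 2N·log2(N). Below is a minimal NumPy sketch of this conventional (non-deformable) case; the function names and random initialization are illustrative assumptions, not the paper's DeBut layers.

```python
import numpy as np

def butterfly_factor(n, block_size, rng):
    """One butterfly factor: block-diagonal with `block_size` blocks,
    each block mixing coordinate i with i + block_size // 2.
    Every row has exactly 2 nonzeros. (Illustrative, randomly initialized.)"""
    B = np.zeros((n, n))
    half = block_size // 2
    for start in range(0, n, block_size):
        for i in range(half):
            a, b = start + i, start + i + half
            w = rng.standard_normal((2, 2))  # learnable 2x2 mixing block
            B[a, a], B[a, b] = w[0]
            B[b, a], B[b, b] = w[1]
    return B

def butterfly_matrix(n, rng):
    """Product of log2(n) sparse factors -> dense n x n transform
    with only 2 * n * log2(n) parameters instead of n**2."""
    M = np.eye(n)
    size = n
    while size >= 2:  # coarse-to-fine hierarchy of block sizes n, n/2, ..., 2
        M = M @ butterfly_factor(n, size, rng)
        size //= 2
    return M
```

For n = 1024 this stores 2·1024·10 = 20,480 parameters instead of 1,048,576; the "deformable" generalization in the paper further relaxes the factor shapes so the product can map between unequal input and output dimensions.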


