Derivation and Analysis of Fast Bilinear Algorithms for Convolution

10/29/2019
by Caleb Ju, et al.

The prevalence of convolution in applications within signal processing, deep neural networks, and numerical solvers has motivated the development of numerous fast convolution algorithms. In many of these problems, convolution is performed on terabytes or petabytes of data, so even constant factors of improvement can significantly reduce the computation time. We leverage the formalism of bilinear algorithms to describe and analyze all of the most popular approaches. This unified lens permits us to study the relationship between different variants of convolution as well as to derive error bounds and analyze the cost of the various algorithms. We provide new derivations, which predominantly leverage matrix and tensor algebra, to describe the Winograd family of convolution algorithms as well as reductions between 1D and multidimensional convolution. We provide cost and error bounds as well as experimental numerical studies. Our experiments for two of these algorithms, the overlap-add approach and the Winograd convolution algorithm with polynomials of degree greater than one, show that fast convolution algorithms can rival the accuracy of the fast Fourier transform (FFT) without using complex arithmetic. These algorithms can be used for convolution problems with multidimensional inputs or for filters larger than size four, extending the state of the art in Winograd-based convolution algorithms.
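The overlap-add approach mentioned in the abstract splits a long signal into short blocks, convolves each block with the filter independently, and sums the overlapping tails. A minimal sketch of this idea using NumPy's real FFT (the function name and block size are illustrative, not from the paper):

```python
import numpy as np

def overlap_add_convolve(signal, filt, block_size=64):
    """Linear convolution via overlap-add: split `signal` into blocks of
    length `block_size`, convolve each block with `filt` by pointwise
    multiplication in the frequency domain, and accumulate the
    overlapping tails of length len(filt) - 1 into the output."""
    n, m = len(signal), len(filt)
    fft_len = block_size + m - 1          # linear (not circular) convolution length per block
    out = np.zeros(n + m - 1)
    filt_f = np.fft.rfft(filt, fft_len)   # transform the filter once, reuse for every block
    for start in range(0, n, block_size):
        block = signal[start:start + block_size]
        seg = np.fft.irfft(np.fft.rfft(block, fft_len) * filt_f, fft_len)
        out[start:start + fft_len] += seg[:min(fft_len, len(out) - start)]
    return out
```

The result matches a direct linear convolution (e.g. `np.convolve(signal, filt)`), while each FFT operates only on short, cache-friendly blocks.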


