Low-Cost Parameterizations of Deep Convolutional Neural Networks

05/20/2018
by   Eran Treister, et al.
Convolutional Neural Networks (CNNs) filter the input data using a series of spatial convolution operators with compactly supported stencils and point-wise nonlinearities. Commonly, the convolution operators couple features from all channels, which, for wide networks, leads to an immense computational cost in both training and prediction. In this paper, we present novel ways to parameterize the convolution more efficiently, aiming to decrease the number of parameters in CNNs and their computational complexity. We propose new architectures that use a sparser coupling between the channels and thereby reduce both the number of trainable weights and the computational cost of the CNN. Our architectures arise as new types of residual neural networks (ResNets) that can be seen as discretizations of partial differential equations (PDEs) and thus have predictable theoretical properties. Our first architecture involves a convolution operator with a special sparsity structure and is applicable to a large class of CNNs. Next, we present an architecture that can be seen as a discretization of a diffusion-reaction PDE, and we use it with three different convolution operators. Our experiments show that the proposed architectures, while considerably reducing the number of trainable weights, yield accuracy comparable to existing CNNs that are fully coupled in the channel dimension.
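To make the cost argument concrete, the sketch below compares the weight count of a fully coupled convolution layer with a sparsely coupled alternative. This is a minimal illustration, not the paper's exact parameterization: it uses a depthwise-separable factorization (a per-channel spatial stencil followed by a 1x1 channel-mixing step) as one example of decoupling the spatial and channel dimensions; the function names and the 256-channel example are hypothetical.

```python
def conv_params(c_in, c_out, k):
    # Fully coupled convolution: every output channel mixes
    # all input channels with its own k x k stencil.
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    # Sparser coupling: one k x k depthwise stencil per input channel,
    # followed by a pointwise (1x1) convolution that mixes channels.
    return c_in * k * k + c_in * c_out

# Example: a wide layer with 256 input and 256 output channels, 3x3 stencils.
full = conv_params(256, 256, 3)        # 589,824 weights
lean = separable_params(256, 256, 3)   # 67,840 weights
print(full, lean, round(full / lean, 1))
```

For wide layers the fully coupled count grows with the product of the channel dimensions, while the decoupled form grows only with their sum plus the stencil cost, so the savings increase with network width.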
