Depthwise Separable Convolutions Allow for Fast and Memory-Efficient Spectral Normalization

02/12/2021
by Christina Runkel, et al.

An increasing number of models require control of the spectral norm of the convolutional layers of a neural network. While there is an abundance of methods for estimating and enforcing upper bounds on these norms during training, they are typically costly in either memory or time. In this work, we introduce a very simple method for spectral normalization of depthwise separable convolutions that introduces negligible computational and memory overhead. We demonstrate the effectiveness of our method on image classification tasks using standard architectures such as MobileNetV2.
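To illustrate why the depthwise separable structure makes this cheap: such a layer factors into a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution, and each factor admits an inexpensive spectral norm computation. The sketch below is a minimal illustration under the assumption of circular padding; the function name and the normalization-by-product bound are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def depthwise_separable_spectral_bound(depthwise_kernel, pointwise_kernel, input_size):
    """Illustrative sketch (not the paper's code): upper-bound the spectral
    norm of a depthwise separable convolution by the product of the spectral
    norms of its two factors.

    depthwise_kernel: (C, 1, k, k), one k x k filter per input channel
    pointwise_kernel: (C_out, C, 1, 1), the channel-mixing 1x1 convolution
    input_size: (H, W) spatial size; circular padding is assumed so that
                each channel's convolution is exactly circulant
    """
    C, _, k, _ = depthwise_kernel.shape
    H, W = input_size

    # Depthwise factor: each channel is an independent circular convolution,
    # whose singular values are the magnitudes of the 2D DFT of its kernel
    # (zero-padded to the input size). The layer's spectral norm is the
    # maximum magnitude over all channels and frequencies.
    padded = torch.zeros(C, H, W, dtype=depthwise_kernel.dtype)
    padded[:, :k, :k] = depthwise_kernel[:, 0]
    depthwise_norm = torch.fft.fft2(padded).abs().amax()

    # Pointwise factor: a 1x1 convolution applies the same C_out x C matrix
    # at every pixel, so its spectral norm is that matrix's largest
    # singular value.
    M = pointwise_kernel.reshape(pointwise_kernel.shape[0], C)
    pointwise_norm = torch.linalg.svdvals(M)[0]

    # Submultiplicativity: the norm of the composition is at most the
    # product of the factors' norms.
    return depthwise_norm * pointwise_norm

# Example with hypothetical MobileNet-style shapes (32 -> 64 channels, 3x3 depthwise):
dw = torch.randn(32, 1, 3, 3)
pw = torch.randn(64, 32, 1, 1)
bound = depthwise_separable_spectral_bound(dw, pw, (32, 32))
# Dividing each factor's weights by its own spectral norm would enforce bound <= 1.
```

Because both factor norms come from a single FFT over the kernels and one small SVD, the computation adds negligible cost compared with power-iteration-style estimates over the full convolution.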

Related research:

08/16/2018 · Network Decoupling: From Regular to Depthwise Separable Convolutions
Depthwise separable convolution has shown great efficiency in network de...

03/31/2021 · Compressing 1D Time-Channel Separable Convolutions using Sparse Random Ternary Matrices
We demonstrate that 1x1-convolutions in 1D time-channel separable convol...

08/27/2018 · Smoothed Dilated Convolutions for Improved Dense Prediction
Dilated convolutions, also known as atrous convolutions, have been widel...

11/19/2021 · A 3D 2D Convolutional Neural Network Model for Hyperspectral Image Classification
In the proposed SEHybridSN model, a dense block was used to reuse shallo...

01/09/2017 · QuickNet: Maximizing Efficiency and Efficacy in Deep Architectures
We present QuickNet, a fast and accurate network architecture that is bo...

07/14/2021 · Memory-Aware Fusing and Tiling of Neural Networks for Accelerated Edge Inference
A rising research challenge is running costly machine learning (ML) netw...

03/30/2020 · Rethinking Depthwise Separable Convolutions: How Intra-Kernel Correlations Lead to Improved MobileNets
We introduce blueprint separable convolutions (BSConv) as highly efficie...
