Fast Training of Convolutional Neural Networks via Kernel Rescaling

Training deep Convolutional Neural Networks (CNNs) is a time-consuming task that may take weeks to complete. In this article we propose a novel, theoretically founded method for reducing CNN training time without incurring any loss in accuracy. The basic idea is to pre-train the network using lower-resolution kernels and input images, and then refine the results at full resolution by exploiting the spatial scaling property of convolutions. We apply our method to the ImageNet winner OverFeat and to the more recent ResNet architecture and show a reduction in training time of nearly 20%.
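The transfer step can be illustrated with a minimal sketch. The PyTorch code below (hypothetical helper; bilinear interpolation of the kernels is an assumption here, not necessarily the paper's exact rescaling scheme) shows how weights learned at low resolution could be carried into a full-resolution copy of the network: because convolution commutes with spatial scaling, up to resampling error, kernels trained on downsampled inputs are a reasonable initialization for their full-resolution counterparts.

```python
# Sketch of kernel-rescaling transfer, assuming two architecturally
# aligned networks (same layer sequence, different kernel sizes).
# Hypothetical helper name; the paper's exact scheme may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

def transfer_rescaled(src: nn.Module, dst: nn.Module) -> None:
    """Copy weights from a low-resolution pre-trained model into a
    full-resolution one, spatially interpolating each conv kernel."""
    for s, d in zip(src.modules(), dst.modules()):
        if isinstance(s, nn.Conv2d) and isinstance(d, nn.Conv2d):
            # Conv weights have shape (out_ch, in_ch, kH, kW); resize
            # the spatial dims to the destination kernel size.
            w = F.interpolate(s.weight.data, size=d.weight.shape[2:],
                              mode="bilinear", align_corners=False)
            d.weight.data.copy_(w)
            if s.bias is not None and d.bias is not None:
                d.bias.data.copy_(s.bias.data)

# Usage: pre-train `small` on downsampled images, transfer the weights,
# then fine-tune `full` at the original resolution.
small = nn.Sequential(nn.Conv2d(3, 16, kernel_size=5, padding=2), nn.ReLU())
full = nn.Sequential(nn.Conv2d(3, 16, kernel_size=11, padding=5), nn.ReLU())
transfer_rescaled(small, full)  # 5x5 kernels -> 11x11 via interpolation
```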


Related research:

- Irregular Convolutional Neural Networks (06/24/2017): "Convolutional kernels are basic and vital components of deep Convolution..."
- Two Novel Performance Improvements for Evolving CNN Topologies (02/10/2021): "Convolutional Neural Networks (CNNs) are the state-of-the-art algorithms..."
- Distributed Deep Learning for Precipitation Nowcasting (08/28/2019): "Effective training of Deep Neural Networks requires massive amounts of d..."
- Comparison of Neuronal Attention Models (12/07/2019): "Recent models for image processing are using the Convolutional neural ne..."
- Incremental Training of Deep Convolutional Neural Networks (03/27/2018): "We propose an incremental training method that partitions the original n..."
- Implementation of Training Convolutional Neural Networks (06/03/2015): "Deep learning refers to the shining branch of machine learning that is b..."
- SIMCNN – Exploiting Computational Similarity to Accelerate CNN Training in Hardware (10/28/2021): "Convolution neural networks (CNN) are computation intensive to train. It..."
