
Fast Training of Convolutional Neural Networks via Kernel Rescaling

by Pedro Porto Buarque de Gusmão, et al.
Politecnico di Torino
Telecom Italia SpA

Training deep Convolutional Neural Networks (CNNs) is a time-consuming task that may take weeks to complete. In this article we propose a novel, theoretically founded method for reducing CNN training time without incurring any loss in accuracy. The basic idea is to begin training with lower-resolution kernels and input images, and then refine the results at full resolution by exploiting the spatial scaling property of convolutions. We apply our method to the ImageNet winner OverFeat and to the more recent ResNet architecture, and show a reduction in training time of nearly 20%.
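The refinement step relies on resampling the low-resolution kernels learned during pre-training up to the target kernel size before full-resolution fine-tuning. Below is a minimal NumPy sketch of that kernel-upscaling idea using bilinear interpolation; the function name and sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rescale_kernel(kernel, new_size):
    """Upscale a square 2D kernel to new_size x new_size via bilinear
    interpolation (illustrative sketch, not the paper's exact procedure)."""
    old = kernel.shape[0]
    coords = np.linspace(0.0, old - 1, new_size)   # sample positions in old grid
    lo = np.floor(coords).astype(int)              # lower neighbor index
    hi = np.minimum(lo + 1, old - 1)               # upper neighbor index (clamped)
    frac = coords - lo                             # interpolation weight
    # interpolate along rows, then along columns
    rows = kernel[lo, :] * (1 - frac)[:, None] + kernel[hi, :] * frac[:, None]
    return rows[:, lo] * (1 - frac)[None, :] + rows[:, hi] * frac[None, :]

small = np.arange(9, dtype=float).reshape(3, 3)  # a 3x3 "pre-trained" kernel
big = rescale_kernel(small, 7)                   # refined to 7x7 for fine-tuning
```

After rescaling, the network would be fine-tuned at full input resolution, so only the final epochs pay the full convolution cost.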



