Committees of deep feedforward networks trained with few data

06/23/2014
by Bogdan Miclut et al.

Deep convolutional neural networks are known to give good results on image classification tasks. In this paper we present a method that improves classification performance by combining multiple such networks into a committee. We evaluate on the STL-10 dataset, which provides very few labeled training examples, and show that our method surpasses the state of the art. The networks are trained layer-wise, without backpropagation. We also explore the effect of dataset augmentation by mirroring, rotation, and scaling.
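The two core ideas of the abstract, committee voting and mirror augmentation, can be illustrated with a minimal sketch. The helper names below are hypothetical and the paper's exact combination rule may differ; this assumes the simplest variant, averaging per-class probabilities over committee members and taking the argmax:

```python
import numpy as np

def committee_predict(member_probs):
    """Combine several networks' outputs by averaging class probabilities.

    member_probs: array of shape (n_members, n_samples, n_classes),
    where each slice holds one network's predicted class probabilities.
    Returns the committee's predicted class index per sample.
    """
    avg = member_probs.mean(axis=0)   # unweighted average over members
    return avg.argmax(axis=1)         # committee decision per sample

def augment_mirror(images):
    """Horizontal mirroring: one of the augmentations mentioned above.

    images: array of shape (n, height, width). Returns the original
    images concatenated with their left-right mirrored copies,
    doubling the (small) training set.
    """
    return np.concatenate([images, images[:, :, ::-1]], axis=0)

# Toy example: 3 committee members, 4 samples, 10 classes.
rng = np.random.default_rng(0)
probs = rng.random((3, 4, 10))
probs /= probs.sum(axis=2, keepdims=True)  # normalize rows to probabilities
print(committee_predict(probs).shape)      # one label per sample: (4,)

imgs = rng.random((5, 32, 32))             # 5 hypothetical 32x32 images
print(augment_mirror(imgs).shape)          # dataset doubled: (10, 32, 32)
```

Averaging probabilities (rather than majority-voting hard labels) lets a confident member outweigh uncertain ones, which is why it is a common default for committees of classifiers.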


