Large Neural Networks Learning from Scratch with Very Few Data and without Regularization

05/18/2022
by   Christoph Linse, et al.

Recent findings have shown that Neural Networks also generalize in over-parametrized regimes with zero training error. This is surprising, since it runs completely against traditional machine-learning wisdom. In our empirical study we corroborate these findings in the domain of fine-grained image classification. We show that very large Convolutional Neural Networks with millions of weights do learn with only a handful of training samples and without image augmentation, explicit regularization or pretraining. We train the architectures ResNet018, ResNet101 and VGG19 on subsets of the difficult benchmark datasets Caltech101, CUB_200_2011, FGVCAircraft, Flowers102 and StanfordCars with 100 classes and more, perform a comprehensive comparative study and draw implications for the practical application of CNNs. Finally, we show that VGG19 with 140 million weights learns to distinguish airplanes from motorbikes with up to 95% accuracy.

