Convolutional Analysis Operator Learning: Dependence on Training Data

02/21/2019
by   Il Yong Chun, et al.

Convolutional analysis operator learning (CAOL) enables unsupervised training of (hierarchical) convolutional sparsifying operators or autoencoders from large datasets. One can use many training images for CAOL, but a precise understanding of the impact of doing so has remained an open question. This paper presents a series of results that lend insight into how dataset size affects the filter update in CAOL. The first result is a general deterministic bound on the error in the estimated filters; it leads to two specific bounds under particular random models. The first of these bounds shows that the expected filter estimation error decreases as the number of training samples increases, and the second provides high-probability analogues. The bounds depend on properties of the training data, and we investigate their empirical values with real data. Taken together, these results provide evidence for the potential benefit of using more training data in CAOL.
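For illustration, below is a minimal NumPy sketch of the alternating CAOL updates whose filter step the bounds concern, assuming the tight-frame formulation with as many filters as filter coefficients (K = R): sparse codes come from hard-thresholding filter responses, and the constrained filter update is solved as a scaled orthogonal Procrustes problem over a patch matrix. The function names, the patch-matrix convolution model, and the toy data are illustrative assumptions, not necessarily the paper's exact algorithm.

```python
import numpy as np

def caol_sparse_code(patches, D, alpha):
    """Sparse-code update: hard-threshold the filter responses.

    Solves, element-wise, min_z 0.5*(z - y)^2 + alpha*|z|_0, whose solution
    keeps y when |y| >= sqrt(2*alpha) and zeros it otherwise.
    patches: (N, R) vectorized patches; D: (R, K) filters as columns.
    """
    Y = patches @ D                              # filter responses, (N, K)
    return Y * (np.abs(Y) >= np.sqrt(2.0 * alpha))

def caol_filter_update(patches, codes, R):
    """Filter update under the tight-frame constraint D D^T = (1/R) I.

    With K == R, the constrained least-squares problem reduces to a scaled
    orthogonal Procrustes problem, solved via an SVD of patches^T @ codes.
    """
    U, _, Vt = np.linalg.svd(patches.T @ codes, full_matrices=False)
    return (U @ Vt) / np.sqrt(R)                 # (R, K), satisfies constraint

# Toy usage with random patches standing in for training-image patches
# (hypothetical data; real CAOL would extract patches from training images).
rng = np.random.default_rng(0)
R = 49                                           # e.g. 7x7 filters, K = R
patches = rng.standard_normal((10000, R))
D = np.linalg.qr(rng.standard_normal((R, R)))[0] / np.sqrt(R)  # feasible init
for _ in range(20):
    Z = caol_sparse_code(patches, D, alpha=1e-3)
    D = caol_filter_update(patches, Z, R)
```

The number of rows in `patches` plays the role of the training-sample count analyzed in the paper: the bounds quantify how the error in the filters returned by the update step shrinks as that count grows.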
