Deep Learning: Generalization Requires Deep Compositional Feature Space Design

06/06/2017
by Mrinal Haloi, et al.

Generalization error characterizes both the discriminability and the representation power of a deep model. In this work, we claim that feature space design using deep compositional functions plays a significant role in generalization, alongside explicit and implicit regularization. We establish these claims with several image classification experiments. We show that the information loss due to convolution and max pooling can be marginalized by compositional design, improving generalization performance. We also show that learning rate decay acts as an implicit regularizer in deep model training.
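The abstract does not include code; as a minimal sketch of the learning-rate-decay claim, the following snippet applies a step decay schedule during PyTorch training. The model, synthetic data, and schedule parameters (step_size, gamma) are illustrative assumptions, not the authors' experimental setup.

# Minimal sketch (assumption): step learning-rate decay during training,
# illustrating the schedule the abstract describes as an implicit regularizer.
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR

# Hypothetical small classifier and synthetic data stand in for the
# paper's image classification experiments.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = StepLR(optimizer, step_size=10, gamma=0.1)  # lr x0.1 every 10 epochs
criterion = nn.CrossEntropyLoss()

for epoch in range(30):
    inputs = torch.randn(64, 3, 32, 32)        # synthetic image batch
    targets = torch.randint(0, 10, (64,))      # synthetic labels
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()  # shrinking lr constrains late-training parameter updates

The intuition, as the abstract frames it: the decaying step size limits how far the weights can move late in training, which acts like a regularizer even though no explicit penalty term is added to the loss.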


Related research

07/10/2023  Compositional Generalization from First Principles
Leveraging the compositional nature of our world to expedite learning an...

04/28/2022  Toward Compositional Generalization in Object-Oriented World Modeling
Compositional generalization is a critical ability in learning and decis...

02/08/2021  Concepts, Properties and an Approach for Compositional Generalization
Compositional generalization is the capacity to recognize and imagine a ...

08/03/2021  Generalization in Multimodal Language Learning from Simulation
Neural networks can be powerful function approximators, which are able t...

07/05/2021  Generalization by design: Shortcuts to Generalization in Deep Learning
We take a geometrical viewpoint and present a unifying view on supervise...

01/26/2023  Feature space exploration as an alternative for design space exploration beyond the parametric space
This paper compares the parametric design space with a feature space gen...

10/09/2017  Function space analysis of deep learning representation layers
In this paper we propose a function space approach to Representation Lea...
