On the Transferability of Representations in Neural Networks Between Datasets and Tasks

11/29/2018
by Haytham M. Fayek, et al.

Deep networks, composed of multiple layers of hierarchical distributed representations, tend to learn low-level features in initial layers and transition to high-level features towards final layers. Paradigms such as transfer learning, multi-task learning, and continual learning leverage this notion of generic hierarchical distributed representations to share knowledge across datasets and tasks. Herein, we study the layer-wise transferability of representations in deep networks across a few datasets and tasks and note some interesting empirical observations.
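
To make the idea concrete, below is a minimal sketch (not the paper's code) of layer-wise transfer in PyTorch: the early, generic layers of a network pretrained on a source dataset are frozen, and only the later, task-specific layers are fine-tuned on the target task. The model (torchvision ResNet-18), the cut-off point (the last residual stage and the classifier head), and the 10-class target task are assumptions made here for illustration.

```python
# A minimal sketch (not the authors' code) of layer-wise transfer:
# reuse the early layers of a network trained on one dataset and
# fine-tune only the later layers on a new task.
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on a source dataset (here: ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the early, generic layers; their representations are assumed
# to transfer across datasets and tasks.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

# Replace the task-specific head for the target task (assumed 10 classes).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the unfrozen, high-level parameters are updated during fine-tuning.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
```

Which layers are frozen versus fine-tuned is exactly the layer-wise transferability question the paper studies empirically; the cut-off chosen above is only one of many possible choices.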


Related research

- Mimicking Ensemble Learning with Deep Branched Networks (02/21/2017)
- The Tunnel Effect: Building Data Representations in Deep Neural Networks (05/31/2023)
- High-level Features for Resource Economy and Fast Learning in Skill Transfer (06/18/2021)
- Hangul Fonts Dataset: a Hierarchical and Compositional Dataset for Interrogating Learned Representations (05/23/2019)
- Hierarchical nucleation in deep neural networks (07/07/2020)
- On The Transferability of Deep-Q Networks (10/06/2021)
- How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model (07/05/2023)
