The Multiverse Loss for Robust Transfer Learning

11/29/2015
by Etai Littwin, et al.

Deep learning techniques are renowned for supporting effective transfer learning. However, as we demonstrate, the transferred representations support only a few modes of separation, and much of their dimensionality is unutilized. In this work, we suggest learning multiple orthogonal classifiers in the source domain. We prove that this leads to a reduced-rank representation, which nevertheless supports more discriminative directions. Interestingly, the softmax probabilities produced by the multiple classifiers are likely to be identical. Experimental results on CIFAR-100 and LFW further demonstrate the effectiveness of our method.
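The core idea of training multiple orthogonal classifiers on a shared representation can be sketched as follows. This is a minimal numpy illustration, not the paper's exact formulation: the per-head cross-entropy sum, the pairwise Frobenius-norm orthogonality penalty, and the weight `lam` are assumptions made for the sketch.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multiverse_loss(features, labels, heads, lam=0.1):
    """Sum of per-head cross-entropy losses plus an orthogonality
    penalty between every pair of classifier weight matrices.

    features : (n, d) array of shared representations
    labels   : (n,) integer class labels
    heads    : list of (d, k) classifier weight matrices
    lam      : weight of the orthogonality penalty (illustrative)
    """
    ce = 0.0
    for W in heads:
        p = softmax(features @ W)
        ce += -np.log(p[np.arange(len(labels)), labels]).mean()

    # Penalize overlap between heads: the Frobenius norm of
    # W_i^T W_j vanishes exactly when the heads are orthogonal.
    ortho = 0.0
    for i in range(len(heads)):
        for j in range(i + 1, len(heads)):
            ortho += np.sum((heads[i].T @ heads[j]) ** 2)

    return ce + lam * ortho
```

When the heads become mutually orthogonal the penalty term vanishes, so minimizing this loss pushes the classifiers toward distinct discriminative directions while each head still fits the source labels.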

