Blissful Ignorance: Anti-Transfer Learning for Task Invariance

06/11/2020
by Eric Guizzo et al.

We introduce the novel concept of anti-transfer learning for neural networks. While standard transfer learning assumes that the representations learned in one task will be useful for another, anti-transfer learning actively avoids representations that were learned for a different task when that task is irrelevant to, and potentially misleading for, the new one and should therefore be ignored. Examples of such task pairs are style vs. content recognition, or pitch vs. timbre recognition in audio. By penalizing the similarity between the features of the second network and the previously learned features, coincidental correlations between the target task and the unrelated task can be avoided, yielding more reliable representations and better performance on the target task. We implement anti-transfer learning with different similarity metrics and aggregation functions. We evaluate the approach in the audio domain across different tasks and setups, using four datasets in total. The results show that anti-transfer learning consistently improves accuracy in all test cases, demonstrating that it can push the network to learn features that are more representative of the task at hand.
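To make the idea concrete, the penalty can be computed between activations at a matching layer of the network being trained and a frozen copy of the network pretrained on the unrelated task. Below is a minimal PyTorch sketch, not the authors' implementation: it instantiates one plausible choice among the similarity metrics and aggregation functions mentioned above, namely squared cosine similarity averaged over the batch; anti_transfer_loss and beta are hypothetical names introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def anti_transfer_loss(feat_new, feat_frozen, beta=1.0):
    """Penalty that grows with the similarity between the features of the
    network being trained and those of a frozen, pretrained network.

    beta is a hypothetical weighting hyperparameter for the penalty.
    """
    f_new = feat_new.flatten(start_dim=1)
    f_old = feat_frozen.flatten(start_dim=1).detach()  # no gradient into the frozen net
    sim = F.cosine_similarity(f_new, f_old, dim=1)     # per-example similarity
    return beta * sim.pow(2).mean()                    # square, then mean-aggregate over the batch

# Toy usage: feature maps taken from the same layer of both networks.
h_new = torch.randn(8, 64, 16, 16, requires_grad=True)
h_old = torch.randn(8, 64, 16, 16)
task_loss = torch.tensor(0.0)  # stands in for, e.g., cross-entropy on the target task
total_loss = task_loss + anti_transfer_loss(h_new, h_old)
total_loss.backward()
```

Squaring the cosine similarity keeps the penalty non-negative and pushes the new features toward orthogonality with the pretrained ones, rather than merely anti-correlating them, which would itself encode the unrelated task.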

Related research

06/21/2021 · Do sound event representations generalize to other audio tasks? A case study in audio transfer learning
Transfer learning is critical for efficient information transfer across ...

08/20/2019 · P2L: Predicting Transfer Learning for Images and Semantic Relations
Transfer learning enhances learning across tasks, by leveraging previous...

01/31/2022 · Deconfounded Representation Similarity for Comparison of Neural Networks
Similarity metrics such as representational similarity analysis (RSA) an...

06/07/2022 · Transfer learning to decode brain states reflecting the relationship between cognitive tasks
Transfer learning improves the performance of the target task by leverag...

04/21/2023 · How good are variational autoencoders at transfer learning?
Variational autoencoders (VAEs) are used for transfer learning across va...

10/23/2017 · Listening to the World Improves Speech Command Recognition
We study transfer learning in convolutional network architectures applie...

06/24/2019 · Neural Transfer Learning for Cry-based Diagnosis of Perinatal Asphyxia
Despite continuing medical advances, the rate of newborn morbidity and m...
