Reducing Overlearning through Disentangled Representations by Suppressing Unknown Tasks

05/20/2020
by   Naveen Panwar, et al.

Existing deep learning approaches for learning visual features tend to overlearn, extracting more information than the task at hand requires. From a privacy-preservation perspective, the input visual information is not protected from the model, which can thus become more capable than it was trained to be. Current approaches for suppressing additional task learning assume that ground-truth labels for the tasks to be suppressed are available at training time. In this research, we propose a three-fold novel contribution: (i) a model-agnostic solution for reducing model overlearning by suppressing all unknown tasks, (ii) a novel metric to measure the trust score of a trained deep learning model, and (iii) a simulated benchmark dataset, PreserveTask, comprising five fundamental image classification tasks for studying the generalization behavior of models. In the first set of experiments, we learn disentangled representations and suppress overlearning in five popular deep learning models, VGG16, VGG19, Inception-v1, MobileNet, and DenseNet, on the PreserveTask dataset. Additionally, we show results of our framework on the color-MNIST dataset and practical applications of face attribute preservation on the Diversity in Faces (DiF) and IMDB-Wiki datasets.
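The abstract does not define the trust-score metric of contribution (ii). As a minimal illustrative sketch only, one plausible (hypothetical, not the paper's) formulation scores a model by how little its frozen features allow probe classifiers to recover suppressed tasks above chance:

```python
def trust_score(probe_accuracies, chance_levels):
    """Hypothetical trust score for a trained model.

    probe_accuracies: accuracy of a probe classifier trained on the
        model's frozen features, one value per suppressed (unknown) task.
    chance_levels: random-guessing accuracy for each of those tasks.

    Returns 1.0 when no suppressed task can be predicted better than
    chance, and approaches 0.0 as the features leak enough information
    to solve every suppressed task perfectly.
    """
    leaks = []
    for acc, chance in zip(probe_accuracies, chance_levels):
        # Normalised leakage per task: 0 at or below chance, 1 at perfect accuracy.
        leaks.append(max(0.0, acc - chance) / (1.0 - chance))
    return 1.0 - sum(leaks) / len(leaks)

# Example: a model whose features barely leak two suppressed binary tasks.
score = trust_score([0.52, 0.55], [0.5, 0.5])
print(score)  # close to 1.0, i.e. high trust
```

Under this sketch, a model that overlearns (high probe accuracy on tasks it was never trained for) receives a low trust score, matching the abstract's intent of penalising unintended capability.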


Related research

- 05/29/2019: A Heuristic for Unsupervised Model Selection for Variational Disentangled Representation Learning
  Disentangled representations have recently been shown to improve data ef...
- 11/20/2018: Adversarial Removal of Gender from Deep Image Representations
  In this work we analyze visual recognition tasks such as object and acti...
- 08/21/2019: Representation Disentanglement for Multi-task Learning with application to Fetal Ultrasound
  One of the biggest challenges for deep learning algorithms in medical im...
- 10/31/2021: Learning Debiased and Disentangled Representations for Semantic Segmentation
  Deep neural networks are susceptible to learn biased models with entangl...
- 04/28/2019: Domain Agnostic Learning with Disentangled Representations
  Unsupervised model transfer has the potential to greatly improve the gen...
- 11/26/2022: Synergies Between Disentanglement and Sparsity: a Multi-Task Learning Perspective
  Although disentangled representations are often said to be beneficial fo...
- 03/13/2020: The TrojAI Software Framework: An OpenSource tool for Embedding Trojans into Deep Learning Models
  In this paper, we introduce the TrojAI software framework, an open sourc...
