Adapting Models to Signal Degradation using Distillation

04/01/2016
by Jong-Chyi Su, et al.

Model compression and knowledge distillation have been successfully applied for cross-architecture and cross-domain transfer learning. However, a key requirement is that training examples are in correspondence across the domains. We show that in many scenarios of practical importance, such aligned data can be synthetically generated using computer graphics pipelines, allowing domain adaptation through distillation. We apply this technique to learn models for recognizing low-resolution images using labeled high-resolution images, non-localized objects using labeled localized objects, line drawings using labeled color images, etc. Experiments on various fine-grained recognition datasets demonstrate that the technique improves recognition performance on the low-quality data and beats strong baselines for domain adaptation. Finally, we present insights into the workings of the technique through visualizations and by relating it to existing literature.
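The recipe the abstract describes can be made concrete: synthetically degrade each labeled high-quality image to obtain an aligned low-quality copy, then train a student on the degraded copies to match both the ground-truth labels and the soft predictions of a teacher that sees the originals. Below is a minimal PyTorch sketch of that idea; the ResNet-50 backbones, bilinear downsampling as the degradation, the temperature, and the loss weight are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of distillation across a synthetic quality gap.
# Architectures and hyperparameters here are assumptions for illustration.
import torch
import torch.nn.functional as F
import torchvision.models as models

teacher = models.resnet50(weights=None)  # assume: already trained on high-res labeled data
student = models.resnet50(weights=None)  # to be trained on degraded inputs
teacher.eval()

optimizer = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)
T = 4.0       # softmax temperature (illustrative)
alpha = 0.5   # weight between distillation loss and label loss (illustrative)

def degrade(x, factor=4):
    """Synthetically generate the low-quality domain by down- then
    up-sampling, keeping each example aligned with its high-res original."""
    small = F.interpolate(x, scale_factor=1 / factor, mode="bilinear",
                          align_corners=False)
    return F.interpolate(small, size=x.shape[-2:], mode="bilinear",
                         align_corners=False)

def distillation_step(images, labels):
    with torch.no_grad():
        teacher_logits = teacher(images)        # teacher sees high-quality input
    student_logits = student(degrade(images))   # student sees the aligned degraded copy

    # Temperature-scaled soft-target loss (Hinton-style) plus hard-label loss.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)

    loss = alpha * soft + (1 - alpha) * hard
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same loop accommodates the paper's other scenarios by swapping `degrade` for a different synthetic transformation (e.g., cropping away localization cues or converting color images to line drawings), since all the method needs is a per-example correspondence between the two domains.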
