DANICE: Domain adaptation without forgetting in neural image compression

04/19/2021
by   Sudeep Katakol, et al.

Neural image compression (NIC) is a new coding paradigm in which coding capabilities are captured by deep models learned from data. This data-driven nature enables new potential functionalities. In this paper, we study the adaptability of codecs to custom domains of interest. We show that NIC codecs are transferable and can be adapted with relatively few target-domain images. However, naive adaptation interferes with the solution optimized for the original source domain, resulting in forgetting of the original coding capabilities in that domain, and may even break compatibility with previously encoded bitstreams. To address these problems, we propose Codec Adaptation without Forgetting (CAwF), a framework that adds a small number of custom parameters while the source codec remains embedded and unchanged during the adaptation process. Experiments demonstrate its effectiveness and provide useful insights into the characteristics of catastrophic interference in NIC.
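The core idea, adapting with a small set of custom parameters while the frozen source codec stays embedded, can be illustrated with a minimal sketch. This toy example uses a scalar "codec" f(x) = w·x with a residual adapter; it is an assumption for illustration only, not the actual CAwF architecture from the paper (the class, method names, and training loop here are all hypothetical):

```python
# Toy sketch of adaptation without forgetting: the source parameter `w`
# is frozen, and only a small custom parameter `delta` is trained on
# target-domain data. The source coding path is never modified.

class ToyCodec:
    def __init__(self, w):
        self.w = w          # frozen source parameter
        self.delta = 0.0    # small custom (adapter) parameter

    def encode(self, x, use_adapter=False):
        # Source path is untouched; the adapter adds a residual correction.
        out = self.w * x
        if use_adapter:
            out += self.delta * x
        return out

    def adapt(self, xs, ys, lr=0.01, steps=200):
        # Fit only `delta` to target-domain pairs (xs, ys) by gradient
        # descent on squared error; `w` is never updated.
        for _ in range(steps):
            grad = 0.0
            for x, y in zip(xs, ys):
                pred = (self.w + self.delta) * x
                grad += 2 * (pred - y) * x
            self.delta -= lr * grad / len(xs)

codec = ToyCodec(w=1.0)
source_out = codec.encode(3.0)             # source behaviour before adaptation
codec.adapt(xs=[1.0, 2.0], ys=[1.5, 3.0])  # target domain prefers w ~ 1.5
assert codec.encode(3.0) == source_out     # source path unchanged: no forgetting
```

Because the source parameters are frozen rather than fine-tuned, bitstreams encoded before adaptation remain decodable, while the adapter path captures the target domain.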

