Adversarial Learning Networks: Source-free Unsupervised Domain Incremental Learning

01/28/2023
by Abhinit Kumar Ambastha, et al.

This work presents an approach for incrementally updating deep neural network (DNN) models in a non-stationary environment. DNN models are sensitive to changes in input data distribution, which limits their application to problem settings with stationary input datasets; in a non-stationary environment, updating a DNN model requires parameter re-training or model fine-tuning. We propose an unsupervised, source-free method to update DNN classification models. The contributions of this work are two-fold. First, we use trainable Gaussian prototypes to generate representative samples for future iterations; second, using unsupervised domain adaptation, we incrementally adapt the existing model with unlabelled data. Unlike existing methods, our approach can update a DNN model incrementally for non-stationary source and target tasks without storing past training data. We evaluated our work on incremental sentiment prediction and incremental disease prediction applications, comparing our approach to state-of-the-art continual learning, domain adaptation, and ensemble learning methods. Our results show improved performance over existing incremental learning methods, with minimal forgetting of past knowledge over many iterations, which can help us develop unsupervised self-learning systems.
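The abstract does not spell out how the trainable Gaussian prototypes are built, but the core idea, fitting a per-class Gaussian in feature space so that representative pseudo-samples can be replayed later without storing past training data, can be sketched as follows. The class name, API, and synthetic features below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

class GaussianPrototype:
    """Illustrative per-class Gaussian prototype: stores a mean and
    diagonal variance of encoder features, so representative samples
    can be drawn in later iterations without keeping the raw data."""

    def __init__(self, features):
        # features: (n_samples, feature_dim) array for one class
        self.mean = features.mean(axis=0)
        self.var = features.var(axis=0) + 1e-6  # guard against zero variance

    def sample(self, n, rng):
        # Reparameterised draw: mean + sqrt(var) * eps, eps ~ N(0, I)
        eps = rng.standard_normal((n, self.mean.shape[0]))
        return self.mean + np.sqrt(self.var) * eps

rng = np.random.default_rng(0)
# Hypothetical class-conditional features from a frozen encoder
feats = rng.normal(loc=2.0, scale=0.5, size=(500, 16))
proto = GaussianPrototype(feats)
replay = proto.sample(1000, rng)   # pseudo-samples for rehearsal
print(replay.shape)  # (1000, 16)
```

In a continual-learning loop, such replayed feature vectors would be mixed with the new unlabelled target data during adaptation, which is one common way source-free methods mitigate forgetting without a stored exemplar buffer.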


Related research:

- 01/28/2023: TIDo: Source-free Task Incremental Learning in Non-stationary Environments. "This work presents an incremental learning approach for autonomous agent..."
- 08/03/2023: Efficient Model Adaptation for Continual Learning at the Edge. "Most machine learning (ML) systems assume stationary and matching data d..."
- 06/05/2023: Continual Learning with Pretrained Backbones by Tuning in the Input Space. "The intrinsic difficulty in adapting deep learning models to non-station..."
- 02/26/2020: PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models. "The ubiquitous use of machine learning algorithms brings new challenges ..."
- 08/10/2022: Continual Machine Reading Comprehension via Uncertainty-aware Fixed Memory and Adversarial Domain Adaptation. "Continual Machine Reading Comprehension aims to incrementally learn from..."
- 08/25/2023: GRASP: A Rehearsal Policy for Efficient Online Continual Learning. "Continual learning (CL) in deep neural networks (DNNs) involves incremen..."
- 06/30/2020: Incremental Calibration of Architectural Performance Models with Parametric Dependencies. "Architecture-based Performance Prediction (AbPP) allows evaluation of th..."
