Lifelong Learning Without a Task Oracle

11/09/2020
by Amanda Rios, et al.

Supervised deep neural networks are known to undergo a sharp decline in the accuracy of older tasks when new tasks are learned, a phenomenon termed "catastrophic forgetting". Many state-of-the-art solutions to continual learning rely on biasing and/or partitioning a model to accommodate successive tasks incrementally. However, these methods largely depend on the availability of a task oracle to assign a task identity to each test sample, without which the models cannot perform at all. To address this shortcoming, we propose and compare several candidate task-assigning mappers that require very little memory overhead: (1) incremental unsupervised prototype assignment using nearest-means, Gaussian Mixture Model, or fuzzy ART backbones; (2) supervised incremental prototype assignment with fast fuzzy ARTMAP; and (3) a shallow perceptron trained via a dynamic coreset. Our proposed model variants are trained either on pre-trained feature extractors or on task-dependent feature embeddings of the main classifier network. We apply these pipeline variants to continual learning benchmarks consisting of either sequences of several datasets or splits within a single dataset. Overall, despite their simplicity and compactness, these methods perform very close to a ground-truth oracle, especially in experiments of inter-dataset task assignment. Moreover, the best-performing variants impose only an average cost of a 1.7% increase in parameter memory.
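To make the task-mapper idea concrete, here is a minimal sketch (not the authors' code) of the simplest variant described above: an incremental nearest-means task mapper that keeps one running-mean prototype per task over feature embeddings and routes each test sample to the nearest prototype. The class and method names (`NearestMeanTaskMapper`, `update`, `assign`) and the NumPy-only setup are illustrative assumptions.

```python
# Sketch of incremental nearest-means task assignment (illustrative only).
import numpy as np

class NearestMeanTaskMapper:
    def __init__(self, feature_dim):
        self.prototypes = {}   # task_id -> running mean embedding (D,)
        self.counts = {}       # task_id -> number of samples folded in
        self.feature_dim = feature_dim

    def update(self, task_id, features):
        """Incrementally fold a batch of embeddings (N x D) into the
        running-mean prototype for `task_id`."""
        features = np.atleast_2d(features)
        if task_id not in self.prototypes:
            self.prototypes[task_id] = np.zeros(self.feature_dim)
            self.counts[task_id] = 0
        for x in features:
            self.counts[task_id] += 1
            # Incremental mean update: mu += (x - mu) / n
            self.prototypes[task_id] += (x - self.prototypes[task_id]) / self.counts[task_id]

    def assign(self, features):
        """Return the predicted task id for each embedding (N x D)."""
        features = np.atleast_2d(features)
        task_ids = list(self.prototypes)
        protos = np.stack([self.prototypes[t] for t in task_ids])   # (T, D)
        # Euclidean distance from every sample to every task prototype
        dists = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=-1)
        return [task_ids[i] for i in dists.argmin(axis=1)]
```

In a pipeline like the one described, the predicted task id would then select the corresponding task-specific head or partition of the main classifier network, standing in for the ground-truth task oracle; the memory overhead is just one D-dimensional prototype and a counter per task.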

