Representative Task Self-selection for Flexible Clustered Lifelong Learning

03/06/2019
by   Gan Sun, et al.

Consider the lifelong learning paradigm, whose objective is to learn a sequence of tasks by drawing on previous experience, e.g., a knowledge library or deep network weights. However, the knowledge libraries or deep networks in most recent lifelong learning models have a prescribed size, which can degrade performance on both learned and incoming tasks when a new task environment (cluster) is encountered. To address this challenge, we propose a novel incremental clustered lifelong learning framework with two knowledge libraries, a feature learning library and a model knowledge library, called Flexible Clustered Lifelong Learning (FCL3). Specifically, the feature learning library, modeled by an autoencoder architecture, maintains a set of representations common to all observed tasks, while the model knowledge library is self-selected by identifying and adding new representative models (clusters). When a new task arrives, our proposed FCL3 model first transfers knowledge from these libraries to encode the new task, i.e., it effectively and selectively soft-assigns the new task to multiple representative models over the feature learning library. Then, 1) a new task with a high outlier probability is judged to be a new representative, and is used to redefine both the feature learning library and the representative models over time; or 2) a new task with a low outlier probability only refines the feature learning library. For model optimization, we cast this lifelong learning problem as an alternating direction minimization problem solved as each new task arrives. Finally, we evaluate the proposed framework on several multi-task datasets, and the experimental results demonstrate that our FCL3 model achieves better performance than most lifelong learning frameworks and even batch clustered multi-task learning models.
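The abstract describes a per-task update loop: encode the new task over the feature library, soft-assign it to representative models, score its outlier probability, and either add a representative or only refine the library. Below is a minimal NumPy sketch of that flow, as a rough illustration only: the least-squares fits, the softmax soft-assignment, the distance-based outlier score, and every constant (d, k, tau) are our assumptions, not the paper's actual formulation, which uses an autoencoder architecture and alternating direction minimization.

import numpy as np

# Hypothetical constants: input dimension d, library size k, outlier threshold tau.
rng = np.random.default_rng(0)
d, k, tau = 10, 5, 0.5
L = rng.standard_normal((d, k)) * 0.1  # feature learning library (shared basis)
reps = []                              # model knowledge library (representative models)

def encode(X):
    # Encode raw inputs through the shared feature library.
    return X @ L

def soft_assign(w, reps):
    # Softmax weights over representative models by (negative) distance.
    sims = np.array([-np.linalg.norm(w - r) for r in reps])
    e = np.exp(sims - sims.max())
    return e / e.sum()

def new_task(X, y):
    global L
    Z = encode(X)
    # Per-task model fitted in the encoded space (illustrative least squares).
    w_t, *_ = np.linalg.lstsq(Z, y, rcond=None)
    if reps:
        a = soft_assign(w_t, reps)
        recon = sum(ai * ri for ai, ri in zip(a, reps))
        # Crude outlier probability: how poorly the representatives explain w_t.
        outlier = np.linalg.norm(w_t - recon) / (np.linalg.norm(w_t) + 1e-12)
    else:
        outlier = 1.0  # the first task always founds a representative
    if outlier > tau:
        reps.append(w_t)  # high outlier probability: new representative (cluster)
    # Either way, refine the feature library by re-fitting the reconstruction
    # X ~ Z @ L.T with the encoding Z held fixed (a stand-in for one ADM step).
    L = np.linalg.lstsq(Z, X, rcond=None)[0].T

# Usage: stream tasks from two synthetic clusters and watch representatives grow.
for t in range(6):
    X = rng.standard_normal((50, d))
    w_true = np.ones(d) if t % 2 == 0 else -np.ones(d)
    y = X @ w_true + 0.01 * rng.standard_normal(50)
    new_task(X, y)
print("representative models discovered:", len(reps))

The sketch keeps the library size k fixed while letting the number of representative models grow with the task stream, which is the flexibility the abstract contrasts with prescribed-size libraries.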



