Zero-shot Task Preference Addressing Enabled by Imprecise Bayesian Continual Learning

05/24/2023
by Pengyuan Lu, et al.

Like generic multi-task learning, continual learning is inherently a multi-objective optimization problem and therefore faces trade-offs between the performance of different tasks. That is, optimizing for the current task distribution may require sacrificing performance on some tasks to improve on others. Consequently, there exist multiple models that are each optimal at different times, each addressing a distinct task-performance trade-off. Researchers have discussed how to train particular models to address specific preferences over these trade-offs. However, existing algorithms require additional sample overheads, a large burden when there are multiple, possibly infinitely many, preferences. As a response, we propose Imprecise Bayesian Continual Learning (IBCL). Upon a new task, IBCL (1) updates a knowledge base in the form of a convex hull of model parameter distributions and (2) obtains preference-addressing models zero-shot. That is, IBCL requires no additional training overhead to construct preference-addressing models from its knowledge base. We show that models obtained by IBCL come with guarantees on identifying the preferred parameters. Moreover, experiments show that IBCL is able to locate the Pareto set of parameters given a preference, maintain performance comparable to or better than baseline methods, and significantly reduce training overhead via zero-shot preference addressing.
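To make the two-step procedure concrete, below is a minimal, hedged sketch of how a knowledge base of per-task parameter posteriors could support zero-shot preference addressing. It assumes diagonal-Gaussian posteriors and mixes their moments by a convex combination inside the hull; the class name, method names, and the moment-mixing rule are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of zero-shot preference addressing in the spirit of IBCL.
# Assumptions (not from the abstract): each task's knowledge is a diagonal
# Gaussian posterior over model parameters (mean, var); the knowledge base is
# the convex hull of these posteriors; a preference vector w picks a point in
# that hull by convex combination of the per-task moments (a simplification).

import numpy as np


class KnowledgeBase:
    """Stores one parameter posterior (mean, var) per task; the convex hull
    of these posteriors plays the role of the knowledge base."""

    def __init__(self):
        self.means = []  # one np.ndarray per task
        self.vars = []   # one np.ndarray per task

    def update(self, mean, var):
        """Step 1: add the posterior learned on the newest task."""
        self.means.append(np.asarray(mean, dtype=float))
        self.vars.append(np.asarray(var, dtype=float))

    def address_preference(self, w):
        """Step 2 (zero-shot): mix the per-task posteriors with preference
        weights w (w >= 0, sum(w) = 1). No gradient steps or extra samples."""
        w = np.asarray(w, dtype=float)
        assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
        mean = sum(wi * m for wi, m in zip(w, self.means))
        var = sum(wi * v for wi, v in zip(w, self.vars))
        return mean, var  # parameters of the preference-addressing model


# Usage: two tasks, with 70% preference on task 1 and 30% on task 2.
kb = KnowledgeBase()
kb.update(mean=[0.0, 1.0], var=[0.1, 0.1])  # posterior after task 1
kb.update(mean=[1.0, 0.0], var=[0.2, 0.2])  # posterior after task 2
mean, var = kb.address_preference([0.7, 0.3])
print(mean, var)
```

Because addressing a preference reduces to a weighted combination of stored posteriors, any number of preferences can be served from the same knowledge base without retraining, which is the source of the claimed zero-shot property.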


research · 06/26/2020
Bookworm continual learning: beyond zero-shot learning and continual learning
We propose bookworm continual learning (BCL), a flexible setting where un...

research · 11/06/2022
Momentum-based Weight Interpolation of Strong Zero-Shot Models for Continual Learning
Large pre-trained, zero-shot capable models have shown considerable succ...

research · 10/26/2022
Is Multi-Task Learning an Upper Bound for Continual Learning?
Continual and multi-task learning are common machine learning approaches...

research · 11/15/2021
CoLLIE: Continual Learning of Language Grounding from Language-Image Embeddings
This paper presents CoLLIE: a simple, yet effective model for continual ...

research · 07/04/2023
Continual Learning in Open-vocabulary Classification with Complementary Memory Systems
We introduce a method for flexible continual learning in open-vocabulary...

research · 04/12/2022
Dynamic Dialogue Policy Transformer for Continual Reinforcement Learning
Continual learning is one of the key components of human learning and a ...

research · 06/22/2023
Optimal Cost-Preference Trade-off Planning with Multiple Temporal Tasks
Autonomous robots are increasingly utilized in realistic scenarios with ...
