An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems

05/25/2022
by   Andrea Gesmundo, et al.

Multitask learning assumes that models capable of learning from multiple tasks can achieve better quality and efficiency via knowledge transfer, a key feature of human learning. However, state-of-the-art ML models rely on high customization for each task and leverage model size and data scale rather than scaling the number of tasks. Furthermore, continual learning, which adds a temporal dimension to multitask learning, often focuses on common pitfalls such as catastrophic forgetting rather than being studied at large scale as a critical component for building the next generation of artificial intelligence. We propose an evolutionary method that can generate a large-scale multitask model and support the dynamic and continuous addition of new tasks. The generated multitask model is sparsely activated and integrates task-based routing that guarantees a bounded compute cost and fewer added parameters per task as the model expands. The proposed method relies on a knowledge compartmentalization technique to achieve immunity against catastrophic forgetting and other common pitfalls such as gradient interference and negative transfer. We empirically show that the proposed method can jointly solve and achieve competitive results on 69 image classification tasks, for example achieving the best test accuracy reported for a model trained only on public data for competitive tasks such as cifar10: 99.43%.
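The routing and compartmentalization ideas in the abstract can be illustrated with a minimal sketch. It is not the paper's actual implementation; the class, method names, and the per-layer module registry are illustrative assumptions. Each task owns a "path" (one module per layer), modules inherited from a parent task are frozen so training a new task cannot overwrite them (immunity to catastrophic forgetting by construction), and exactly one module per layer is active for any task, which bounds compute cost regardless of how large the overall model grows.

```python
import random


class ModularMultitaskModel:
    """Sketch of task-based routing with knowledge compartmentalization.

    Hypothetical structure, not the paper's code: modules[layer] maps a
    module id to its state, and each task routes through one module per
    layer. Modules reused from a parent task are frozen, so only the
    task's own fresh modules receive updates.
    """

    def __init__(self, num_layers):
        self.num_layers = num_layers
        self.modules = [{} for _ in range(num_layers)]  # id -> {"frozen": bool}
        self.paths = {}  # task name -> list of module ids, one per layer

    def add_task(self, task, parent=None, mutation_rate=0.5, rng=random):
        """Add a task by reusing (and freezing) part of a parent's path."""
        path = []
        for layer in range(self.num_layers):
            if parent is not None and rng.random() > mutation_rate:
                # Reuse the parent's module; freeze it to protect old tasks.
                mid = self.paths[parent][layer]
                self.modules[layer][mid]["frozen"] = True
            else:
                # Mutation: create a fresh trainable module for this task.
                mid = len(self.modules[layer])
                self.modules[layer][mid] = {"frozen": False}
            path.append(mid)
        self.paths[task] = path
        return path

    def trainable_modules(self, task):
        """Only the task's unfrozen (newly added) modules would be trained."""
        return [
            (layer, mid)
            for layer, mid in enumerate(self.paths[task])
            if not self.modules[layer][mid]["frozen"]
        ]
```

Because every task activates exactly `num_layers` modules, the per-example compute is constant as tasks accumulate, and the parameter count grows only by the mutated (non-reused) modules each new task introduces.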

Related research

- 09/15/2022 — A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems
  The traditional Machine Learning (ML) methodology requires to fragment t...

- 05/22/2022 — muNet: Evolving Pretrained Deep Neural Networks into Scalable Auto-tuning Multitask Systems
  Most uses of machine learning today involve training a model from scratc...

- 04/05/2019 — Reducing catastrophic forgetting when evolving neural networks
  A key stepping stone in the development of an artificial general intelli...

- 02/06/2023 — Multipath agents for modular multitask ML systems
  A standard ML model is commonly generated by a single method that specif...

- 03/24/2021 — Active Multitask Learning with Committees
  The cost of annotating training data has traditionally been a bottleneck...

- 06/10/2022 — Nominal Metaphor Generation with Multitask Learning
  Nominal metaphors are frequently used in human language and have been sh...

- 06/06/2023 — Exploring the effects of robotic design on learning and neural control
  The ongoing deep learning revolution has allowed computers to outclass h...
