Learnware: Small Models Do Big

10/07/2022
by Zhi-Hua Zhou, et al.

Current machine learning techniques draw complaints: they require huge amounts of training data and proficient training skills, make continual learning difficult, risk catastrophic forgetting, and may leak private or proprietary data. Most research efforts address these issues separately, paying less attention to the fact that in practice they are entangled. The prevailing big-model paradigm, which has achieved impressive results in natural language processing and computer vision applications, has not resolved these issues either, while becoming a serious source of carbon emissions. This article offers an overview of the learnware paradigm, which aims to spare users from building machine learning models from scratch, with the hope of reusing small models to do things even beyond their original purposes. The key ingredient is the specification, which enables a trained model to be adequately identified for reuse according to the requirements of future users who know nothing about the model in advance.
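As a rough illustration of how a specification might let a future user identify a helpful model without knowing anything about it in advance, here is a minimal sketch in Python. It assumes (following the reduced kernel mean embedding line of work listed below) that each model's specification is a small "reduced set" of points summarizing its training distribution, and that matching is done by comparing distributions with a kernel-based discrepancy. All function names (`identify_model`, `mmd_sq`) and the toy two-model market are hypothetical, not part of any actual learnware system.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of X and Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd_sq(X, Y, gamma=1.0):
    # Squared maximum mean discrepancy between two samples;
    # small values suggest the samples come from similar distributions.
    return (rbf_kernel(X, X, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

def identify_model(user_sample, specifications, gamma=1.0):
    # Return the id of the model whose specification (a reduced set
    # summarizing its training data) best matches the user's task data.
    scores = {mid: mmd_sq(user_sample, spec, gamma)
              for mid, spec in specifications.items()}
    return min(scores, key=scores.get)

# Toy market: two models whose specifications summarize training
# distributions centered at different means.
rng = np.random.default_rng(0)
specs = {
    "model_a": rng.normal(0.0, 1.0, size=(20, 2)),
    "model_b": rng.normal(5.0, 1.0, size=(20, 2)),
}
user_data = rng.normal(5.1, 1.0, size=(30, 2))
print(identify_model(user_data, specs))  # prints "model_b"
```

The point of the sketch is only the workflow: the user never inspects any model, only submits a small sample of task data, and the market matches it against pre-computed specifications.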

Related research

- Quantum Continual Learning Overcoming Catastrophic Forgetting (08/05/2021)
- Continual learning on 3D point clouds with random compressed rehearsal (05/16/2022)
- Preventing Catastrophic Forgetting in Continual Learning of New Natural Language Tasks (02/22/2023)
- Continual Learning with Transformers for Image Classification (06/28/2022)
- Model Reuse with Reduced Kernel Mean Embedding Specification (01/20/2020)
- Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models (12/09/2022)
- Human-like machine learning: limitations and suggestions (11/14/2018)
