Tiny Transfer Learning: Towards Memory-Efficient On-Device Learning

07/22/2020
by   Han Cai, et al.

We present Tiny-Transfer-Learning (TinyTL), a memory-efficient on-device learning method for adapting pre-trained models to newly collected data on edge devices. Unlike conventional transfer learning methods that fine-tune the full network or only the last layer, TinyTL freezes the weights of the feature extractor and learns only the biases, and therefore does not need to store the intermediate activations, which are the major memory bottleneck of on-device learning. To maintain adaptation capacity without updating the weights, TinyTL introduces memory-efficient lite residual modules that refine the feature extractor by learning small residual feature maps in the middle of the network. In addition, instead of using the same feature extractor for every task, TinyTL adapts the architecture of the feature extractor to each target dataset while keeping the weights fixed: it pre-trains a large super-net containing many weight-shared sub-nets that can operate independently, and each target dataset selects the sub-net that best matches it. This backpropagation-free, discrete sub-net selection incurs no memory overhead. Extensive experiments show that TinyTL reduces training memory cost by an order of magnitude (up to 13.3x) without sacrificing accuracy compared to fine-tuning the full network.
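The memory argument above can be illustrated with the backward pass of a single linear layer y = Wx + b: the weight gradient requires the layer's input activation x, but the bias gradient needs only the upstream gradient. The sketch below (not the authors' code; a minimal NumPy illustration with made-up shapes) shows why freezing W and updating only b lets the activation be discarded after the forward pass:

```python
import numpy as np

# For a linear layer y = W @ x + b, backpropagation gives:
#   dL/dW = outer(dL/dy, x)  -> needs the stored input activation x
#   dL/db = dL/dy            -> needs only the upstream gradient
rng = np.random.default_rng(0)
x = rng.standard_normal(4)           # layer input (intermediate activation)
W = rng.standard_normal((3, 4))      # frozen weights
b = np.zeros(3)                      # trainable biases
dL_dy = rng.standard_normal(3)       # upstream gradient from later layers

grad_W = np.outer(dL_dy, x)          # full fine-tuning: x must be kept in memory
grad_b = dL_dy.copy()                # bias-only update: x is never needed

# With W frozen (TinyTL's setting), x can be freed right after the
# forward pass; only b is updated.
lr = 0.1
b -= lr * grad_b
```

This is why bias-only updates shrink the activation memory footprint rather than just the optimizer state: the dominant storage cost in training is the per-layer activations kept alive for weight gradients.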

