On-Device Training Under 256KB Memory

06/30/2022
by Ji Lin et al.

On-device training enables a model to adapt to new data collected from the sensors by fine-tuning a pre-trained model. However, the training memory consumption is prohibitive for IoT devices, which have tiny memory resources. We propose an algorithm-system co-design framework that makes on-device training possible with only 256KB of memory. On-device training faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to mixed bit-precision and the lack of normalization; (2) the limited hardware resources (memory and computation) do not allow full backward computation. To cope with the optimization difficulty, we propose Quantization-Aware Scaling to calibrate the gradient scales and stabilize quantized training. To reduce the memory footprint, we propose Sparse Update to skip the gradient computation of less important layers and sub-tensors. These algorithmic innovations are implemented by a lightweight training system, Tiny Training Engine, which prunes the backward computation graph to support sparse updates and offloads runtime auto-differentiation to compile time. Our framework is the first practical solution for on-device transfer learning of visual recognition on tiny IoT devices (e.g., a microcontroller with only 256KB SRAM), using less than 1/100 of the memory of existing frameworks while matching the accuracy of cloud training + edge deployment on the tinyML Visual Wake Words (VWW) application. Our study enables IoT devices not only to perform inference but also to continuously adapt to new data for on-device lifelong learning.
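
To make the two algorithmic ideas concrete, below is a minimal NumPy sketch of Quantization-Aware Scaling (QAS) and a masked, sparse SGD step. The function names (qas_rescale, sparse_update), the toy two-layer model, and the per-tensor scale values are illustrative assumptions, not the paper's Tiny Training Engine code; the s**-2 rescaling follows the abstract's idea of calibrating gradient magnitudes on the quantized graph.

```python
import numpy as np

def qas_rescale(grad_qw: np.ndarray, s_w: float) -> np.ndarray:
    """Quantization-Aware Scaling (sketch).

    With a symmetric linear quantizer, W ~= s_w * W_q, so the gradient
    w.r.t. the quantized weight W_q is roughly s_w times the fp32
    gradient, while W_q itself is 1/s_w the magnitude of W. Multiplying
    the gradient by s_w**-2 restores the fp32 weight-to-gradient ratio,
    which is what stabilizes SGD on the quantized graph.
    """
    return grad_qw * (s_w ** -2)

def sparse_update(params, grads, scales, update_mask, lr=0.01):
    """Apply SGD only to layers flagged in `update_mask` (sketch).

    Skipped layers need no weight gradients or saved activations,
    which is where the backward-pass memory saving comes from.
    """
    for name, p in params.items():
        if not update_mask.get(name, False):
            continue  # frozen layer: its backward computation is pruned
        p -= lr * qas_rescale(grads[name], scales[name])
    return params

# Toy usage (hypothetical shapes): update only the final layer.
rng = np.random.default_rng(0)
params = {"conv1": rng.normal(size=(8, 3)), "fc": rng.normal(size=(4, 8))}
grads  = {"conv1": rng.normal(size=(8, 3)), "fc": rng.normal(size=(4, 8))}
scales = {"conv1": 0.05, "fc": 0.02}
params = sparse_update(params, grads, scales, {"fc": True})
```

Freezing conv1 here mirrors the abstract's Sparse Update: skipping the gradient computation of less important layers (and, in the paper, sub-tensors) trades a small amount of adaptability for a large reduction in training memory.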

Related research

TinyTrain: Deep Neural Network Training at the Extreme Edge (07/19/2023)
On-device training is essential for user personalisation and privacy. Wi...

FlexMoE: Scaling Large-scale Sparse Pre-trained Model Training via Dynamic Device Placement (04/08/2023)
With the increasing data volume, there is a trend of using large-scale p...

MobileTL: On-device Transfer Learning with Inverted Residual Blocks (12/05/2022)
Transfer learning on edge is challenging due to on-device limited resour...

p-Meta: Towards On-device Deep Model Adaptation (06/25/2022)
Data collected by IoT devices are often private and have a large diversi...

TinyOL: TinyML with Online-Learning on Microcontrollers (03/15/2021)
Tiny machine learning (TinyML) is a fast-growing research area committed...

EF-Train: Enable Efficient On-device CNN Training on FPGA Through Data Reshaping for Online Adaptation or Personalization (02/18/2022)
Conventionally, DNN models are trained once in the cloud and deployed in...

Efficient On-device Training via Gradient Filtering (01/01/2023)
Despite its importance for federated learning, continuous learning and m...
