EF-Train: Enable Efficient On-device CNN Training on FPGA Through Data Reshaping for Online Adaptation or Personalization

02/18/2022
by   Yue Tang, et al.

Conventionally, DNN models are trained once in the cloud and deployed on edge devices such as cars, robots, or unmanned aerial vehicles (UAVs) for real-time inference. However, many applications require the models to adapt to new environments, domains, or users. To realize such domain adaptation or personalization, the models need to be continuously trained on the device. In this work, we design EF-Train, an efficient DNN training accelerator with a unified channel-level-parallelism-based convolution kernel that achieves end-to-end training on resource-limited, low-power edge-level FPGAs. Implementing on-device training on such FPGAs is challenging because forward propagation, backward propagation, and weight update have different memory access patterns, which leads to low efficiency. We therefore develop a data reshaping approach with intra-tile continuous memory allocation and weight reuse. An analytical model is established to automatically schedule computation and memory resources for high energy efficiency on edge FPGAs. Experimental results show that our design achieves 46.99 GFLOPS throughput and 6.09 GFLOPS/W energy efficiency.
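The core idea behind intra-tile continuous memory allocation is that the convolution engine processes feature maps tile by tile, so DRAM traffic is most efficient when each tile occupies one contiguous block of memory. As a minimal sketch (not the paper's implementation; the tile sizes `Tc`, `Th`, `Tw` and the exact-tiling assumption are illustrative), the reordering of a CHW feature map into a tile-contiguous layout can be expressed with NumPy:

```python
import numpy as np

def tile_reshape(fmap, Tc, Th, Tw):
    """Reorder a CHW feature map so each (Tc x Th x Tw) tile is one
    contiguous memory block (a sketch of intra-tile continuous
    memory allocation; assumes the tile sizes divide each dimension)."""
    C, H, W = fmap.shape
    assert C % Tc == 0 and H % Th == 0 and W % Tw == 0
    # Split each axis into (num_tiles, tile_size), then move all the
    # tile-index axes to the front so every tile flattens into one
    # contiguous run in memory.
    t = fmap.reshape(C // Tc, Tc, H // Th, Th, W // Tw, Tw)
    t = t.transpose(0, 2, 4, 1, 3, 5)  # (ct, ht, wt, Tc, Th, Tw)
    return np.ascontiguousarray(t)

fm = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
tiled = tile_reshape(fm, Tc=2, Th=2, Tw=2)
print(tiled.shape)  # (1, 2, 2, 2, 2, 2)
```

After this reordering, `tiled[ct, ht, wt]` is a single contiguous block, so an accelerator can fetch one whole tile with one burst access instead of many strided reads; the same layout can serve forward propagation, backward propagation, and weight update, whose access patterns otherwise differ.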


