RedMulE: A Compact FP16 Matrix-Multiplication Accelerator for Adaptive Deep Learning on RISC-V-Based Ultra-Low-Power SoCs

04/24/2022
by   Yvan Tortorella, et al.

The fast proliferation of extreme-edge applications based on Deep Learning (DL) algorithms requires dedicated hardware to satisfy their latency, throughput, and precision requirements. While inference is achievable in practical cases, online fine-tuning and adaptation of general DL models remain highly challenging. One of the key stumbling blocks is the need for parallel floating-point operations, which are considered unaffordable on sub-100 mW extreme-edge SoCs. We tackle this problem with RedMulE (Reduced-precision matrix Multiplication Engine), a parametric low-power hardware accelerator for FP16 matrix multiplications - the main kernel of DL training and inference - conceived for tight integration within a cluster of tiny RISC-V cores based on the PULP (Parallel Ultra-Low-Power) architecture. In 22 nm technology, a 32-FMA RedMulE instance occupies just 0.07 mm^2 (14% of an 8-core RISC-V cluster) and reaches a maximum operating frequency of 666 MHz, for a throughput of 31.6 MAC/cycle (98.8% utilization) at a cluster-level power consumption of 43.5 mW and a full-cluster energy efficiency of 688 16-bit GFLOPS/W. Overall, RedMulE achieves up to 4.65x higher energy efficiency and a 22x speedup over SW execution on 8 RISC-V cores.
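For context on the kernel the accelerator offloads, the following is a minimal C sketch of an FP16 matrix multiplication of the kind described in the abstract, i.e., what the 8 RISC-V cores would otherwise execute in software. The function name, signature, and the use of _Float16 are illustrative assumptions and do not reflect RedMulE's actual programming interface.

#include <stddef.h>

/* FP16 type; assumes a compiler/target with _Float16 support
 * (e.g. recent GCC/Clang; on RISC-V this maps to the Zfh extension).
 * Substitute 'float' to test the reference on a host machine. */
typedef _Float16 fp16;

/* Reference FP16 matrix multiplication, C = A * B,
 * with A of size M x N, B of size N x K, and C of size M x K.
 * Plain triple loop, purely for illustration of the workload. */
void matmul_fp16(const fp16 *A, const fp16 *B, fp16 *C,
                 size_t M, size_t N, size_t K)
{
    for (size_t m = 0; m < M; m++) {
        for (size_t k = 0; k < K; k++) {
            fp16 acc = (fp16)0;
            for (size_t n = 0; n < N; n++) {
                /* one multiply-accumulate per inner iteration; an FMA-based
                 * accelerator performs many of these per cycle in parallel */
                acc += A[m * N + n] * B[n * K + k];
            }
            C[m * K + k] = acc;
        }
    }
}

A software loop like this, parallelized over the cluster's cores, is the baseline against which the abstract's 22x speedup and 4.65x energy-efficiency gains are reported.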

