Boosting Mobile CNN Inference through Semantic Memory

12/05/2021
by Yun Li, et al.

Human brains are known to speed up visual recognition of repeatedly presented objects through faster memory encoding and access procedures on activated neurons. For the first time, we borrow and distill such a capability into a semantic memory design, namely SMTM, to improve on-device CNN inference. SMTM employs a hierarchical memory architecture to leverage the long-tail distribution of objects of interest, and further incorporates several novel techniques to put it into effect: (1) it encodes high-dimensional feature maps into low-dimensional semantic vectors for low-cost yet accurate caching and lookup; (2) it uses a novel metric to determine exit timing, accounting for different layers' inherent characteristics; (3) it adaptively adjusts the cache size and semantic vectors to fit scene dynamics. SMTM is prototyped on a commodity CNN engine and runs on both mobile CPUs and GPUs. Extensive experiments on large-scale datasets and models show that SMTM can significantly speed up model inference over the standard approach (up to 2X) and prior cache designs (up to 1.5X), with acceptable accuracy loss.
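To make the cache-and-lookup idea in (1) concrete, here is a minimal Python sketch, not the paper's implementation: it assumes a hypothetical encode step that global-average-pools a layer's feature map into a low-dimensional semantic vector, a cosine-similarity lookup that exits early when the best cached match clears a threshold (a stand-in for the per-layer exit metric in (2)), and simple FIFO eviction in place of SMTM's hierarchical, adaptively sized cache.

```python
import numpy as np


class SemanticCache:
    """Minimal sketch of a semantic-vector cache for early-exit CNN inference.

    Hypothetical illustration of the cache-and-lookup idea: feature maps are
    encoded into low-dimensional vectors, and a lookup whose similarity clears
    the threshold lets inference exit early with the cached prediction.
    """

    def __init__(self, capacity=64, threshold=0.9):
        self.capacity = capacity    # adaptively sized in the paper
        self.threshold = threshold  # stands in for the per-layer exit metric
        self.keys = []              # cached semantic vectors
        self.values = []            # cached predictions

    @staticmethod
    def encode(feature_map):
        # Encode a (C, H, W) feature map into a C-dim semantic vector via
        # global average pooling, then L2-normalize for cosine similarity.
        v = feature_map.mean(axis=(1, 2))
        return v / (np.linalg.norm(v) + 1e-8)

    def lookup(self, feature_map):
        # Return (cached prediction, similarity) on a hit, else (None, best).
        if not self.keys:
            return None, 0.0
        v = self.encode(feature_map)
        sims = np.stack(self.keys) @ v  # cosine similarity of unit vectors
        best = int(np.argmax(sims))
        if sims[best] >= self.threshold:
            return self.values[best], float(sims[best])
        return None, float(sims[best])

    def insert(self, feature_map, prediction):
        # FIFO eviction keeps the sketch simple; SMTM's hierarchical,
        # long-tail-aware replacement policy is more involved.
        if len(self.keys) >= self.capacity:
            self.keys.pop(0)
            self.values.pop(0)
        self.keys.append(self.encode(feature_map))
        self.values.append(prediction)


if __name__ == "__main__":
    cache = SemanticCache(capacity=8, threshold=0.9)
    fmap = np.random.rand(64, 14, 14).astype(np.float32)  # dummy layer output
    pred, sim = cache.lookup(fmap)
    if pred is None:
        # Cache miss: run the remaining layers, then cache the result.
        cache.insert(fmap, prediction="label_from_full_inference")
    pred, sim = cache.lookup(fmap)
    print(pred, round(sim, 3))  # hit on the identical map, similarity ~1.0
```

On a hit, the remaining layers are skipped entirely, which is where the reported speedup would come from; the threshold and capacity would need per-layer and per-scene tuning, as the paper's adaptive mechanisms suggest.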


Related research

05/10/2022 · Training Personalized Recommendation Systems from (GPU) Scratch: Look Forward not Backwards
Personalized recommendation models (RecSys) are one of the most popular ...

05/09/2018 · Neural Cache: Bit-Serial In-Cache Acceleration of Deep Neural Networks
This paper presents the Neural Cache architecture, which re-purposes cac...

05/11/2021 · EL-Attention: Memory Efficient Lossless Attention for Generation
Transformer model with multi-head attention requires caching intermediat...

08/08/2019 · Energy and Performance Analysis of STTRAM Caches for Mobile Applications
Spin-Transfer Torque RAMs (STTRAMs) have been shown to offer much promis...

10/20/2021 · Fast Bitmap Fit: A CPU Cache Line friendly memory allocator for single object allocations
Applications making excessive use of single-object based data structures...

12/22/2022 · Accelerating CNN inference on long vector architectures via co-design
CPU-based inference can be an alternative to off-chip accelerators, and ...

10/31/2018 · Low-Dimensional Bottleneck Features for On-Device Continuous Speech Recognition
Low power digital signal processors (DSPs) typically have a very limited...
