Proximu$: Efficiently Scaling DNN Inference in Multi-core CPUs through Near-Cache Compute

11/23/2020
by Anant V. Nori et al.

Deep Neural Network (DNN) inference is emerging as the fundamental bedrock for a multitude of utilities and services. CPUs continue to scale up their raw compute capabilities for DNN inference, along with mature high-performance libraries to extract optimal performance. While general-purpose CPUs offer uniquely attractive advantages for DNN inference at both the datacenter and the edge, they have primarily evolved to optimize single-thread performance. For highly parallel, throughput-oriented DNN inference, this results in inefficiencies in both power and performance, impacting both raw performance scaling and overall performance/watt. We present Proximu$, which systematically tackles the root inefficiencies in power and performance scaling for CPU DNN inference. Performance scales efficiently by distributing lightweight tensor compute near all caches in a multi-level cache hierarchy. This maximizes the cumulative utilization of the existing bandwidth resources in the system and minimizes movement of data. Power is drastically reduced through simple ISA extensions that encode the structured, loopy workload behavior. This enables a bulk offload of pre-decoded work, with loop unrolling in the lightweight near-cache units, effectively bypassing the power-hungry stages of the wide out-of-order (OOO) CPU pipeline. Across a range of DNN models, Proximu$ achieves a 2.3x increase in convolution performance/watt with a 2x to 3.94x scaling in raw performance. Similarly, it achieves a 1.8x increase in inner-product performance/watt with 2.8x scaling in performance. With no changes to the programming model, no increase in cache capacity or bandwidth, and minimal additional hardware, Proximu$ enables unprecedented CPU efficiency gains while achieving performance similar to state-of-the-art domain-specific accelerators (DSAs) for DNN inference.
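As a quick sanity check on the headline numbers: performance/watt is performance divided by power, so the implied power scaling follows directly from the two reported factors. The short Python sketch below (the helper name is ours, for illustration only) derives it for the convolution and inner-product figures quoted in the abstract:

```python
# Implied power scaling from the abstract's reported numbers.
# Since performance/watt = performance / power, it follows that
# power scaling = (performance scaling) / (performance-per-watt scaling).

def implied_power_scaling(perf_scaling: float, perf_per_watt_scaling: float) -> float:
    """Factor by which total power changes relative to the baseline CPU."""
    return perf_scaling / perf_per_watt_scaling

# Convolution: 2.3x performance/watt at 2x to 3.94x raw performance.
for perf in (2.0, 3.94):
    print(f"conv: {perf:.2f}x perf -> {implied_power_scaling(perf, 2.3):.2f}x power")

# Inner product: 1.8x performance/watt at 2.8x raw performance.
print(f"ip:   2.80x perf -> {implied_power_scaling(2.8, 1.8):.2f}x power")
```

At the low end of the convolution range this works out to roughly 0.87x power (more performance at lower power), and to about 1.71x and 1.56x power at the top convolution and inner-product points, i.e. performance grows considerably faster than power.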


