Design Considerations for Efficient Deep Neural Networks on Processing-in-Memory Accelerators

12/18/2019
by Tien-Ju Yang, et al.

This paper describes various design considerations for deep neural networks that enable them to operate efficiently and accurately on processing-in-memory accelerators. We highlight important properties of these accelerators and the resulting design considerations using experiments conducted on various state-of-the-art deep neural networks with the large-scale ImageNet dataset.

Related research

Co-Design of Deep Neural Nets and Neural Net Accelerators for Embedded Vision Applications (04/20/2018)
Deep Learning is arguably the most rapidly evolving research area in rec...

Deep neural networks from the perspective of ergodic theory (08/04/2023)
The design of deep neural networks remains somewhat of an art rather tha...

On the Reliability of Computing-in-Memory Accelerators for Deep Neural Networks (05/25/2022)
Computing-in-memory with emerging non-volatile memory (nvCiM) is shown t...

Rethinking Arithmetic for Deep Neural Networks (05/07/2019)
We consider efficiency in deep neural networks. Hardware accelerators ar...

GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism (11/16/2018)
GPipe is a scalable pipeline parallelism library that enables learning o...

GX-Plug: a Middleware for Plugging Accelerators to Distributed Graph Processing (03/24/2022)
Recently, research communities highlight the necessity of formulating a ...

Pathways: Asynchronous Distributed Dataflow for ML (03/23/2022)
We present the design of a new large scale orchestration layer for accel...
