Device-Circuit-Architecture Co-Exploration for Computing-in-Memory Neural Accelerators

10/31/2019
by Weiwen Jiang, et al.

Co-exploration of neural architectures and hardware designs is a promising approach to simultaneously optimizing network accuracy and hardware efficiency. However, state-of-the-art neural architecture search algorithms for such co-exploration are dedicated to the conventional von Neumann computing architecture, whose performance is heavily limited by the well-known memory wall. In this paper, we are the first to bring the computing-in-memory architecture, which can easily transcend the memory wall, into interplay with neural architecture search, aiming to find the most efficient neural architectures with both high network accuracy and maximized hardware efficiency. Such a novel combination creates opportunities to boost performance, but it also introduces several challenges: the design space spans multiple layers, from device type and circuit topology to neural architecture, and performance may degrade in the presence of device variation. To address these challenges, we propose a cross-layer exploration framework, namely NACIM, which jointly explores the device, circuit, and architecture design spaces and takes device variation into consideration to find the most robust neural architectures. Experimental results demonstrate that NACIM can find a robust neural network with only 0.45% accuracy loss in the presence of device variation, compared with a 76.44% loss for state-of-the-art NAS without consideration of variation; in addition, NACIM achieves an energy efficiency of up to 16.3 TOPs/W, 3.17X higher than that of the state-of-the-art NAS.
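To make the idea of variation-aware co-exploration concrete, below is a minimal Python sketch of the kind of loop such a framework might run: sample a point from a joint device/circuit/architecture space, then score it on accuracy under injected device variation together with a hardware-efficiency estimate. This is not NACIM's implementation; the design-space entries, the multiplicative Gaussian noise model, and the helper names (sample_candidate, inject_device_variation, estimate_tops_per_watt, and the user-supplied build_model/test_fn) are all assumptions made for illustration.

```python
# Illustrative sketch only -- not NACIM's actual code or API.
import copy
import random

import torch


# Cross-layer design space: device type, circuit-level choices, and
# architecture/quantization hyperparameters (all values are hypothetical).
DESIGN_SPACE = {
    "device":      ["RRAM", "FeFET", "STT-MRAM"],  # CiM device options
    "adc_bits":    [4, 6, 8],                      # circuit-level ADC precision
    "channels":    [16, 32, 64],                   # architecture hyperparameter
    "kernel_size": [3, 5],
    "weight_bits": [2, 4, 6],                      # quantization for crossbar mapping
}


def sample_candidate():
    """Randomly sample one point from the joint design space."""
    return {k: random.choice(v) for k, v in DESIGN_SPACE.items()}


def inject_device_variation(model, sigma):
    """Perturb weights with multiplicative Gaussian noise to mimic device variation."""
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.mul_(1.0 + sigma * torch.randn_like(p))
    return noisy


def estimate_tops_per_watt(candidate):
    """Placeholder hardware cost model; a real framework would query a
    device/circuit simulator or analytical estimator instead."""
    base = {"RRAM": 20.0, "FeFET": 25.0, "STT-MRAM": 15.0}[candidate["device"]]
    # Crude assumption: lower ADC/weight precision -> higher energy efficiency.
    return base * 8.0 / (candidate["adc_bits"] * candidate["weight_bits"])


def evaluate(candidate, build_model, test_fn, sigma=0.1, trials=5):
    """Variation-aware fitness: average accuracy over several noisy weight
    samples, paired with the hardware-efficiency estimate."""
    model = build_model(candidate)  # train or load weights for this architecture
    accs = [test_fn(inject_device_variation(model, sigma)) for _ in range(trials)]
    robust_acc = sum(accs) / len(accs)
    return robust_acc, estimate_tops_per_watt(candidate)
```

In a full co-exploration framework, this evaluation would typically sit inside a reinforcement-learning or evolutionary search loop, with the accuracy and efficiency terms combined into a single reward so that the search favors architectures that stay accurate under device variation while remaining hardware-efficient.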

