Dopant Network Processing Units: Towards Efficient Neural-network Emulators with High-capacity Nanoelectronic Nodes

The rapidly growing computational demands of deep neural networks require novel hardware designs. Recently, tunable nanoelectronic devices were developed based on electron hopping through a network of dopant atoms in silicon. These "Dopant Network Processing Units" (DNPUs) are highly energy-efficient and have potentially very high throughput. By adapting the control voltages applied to its terminals, a single DNPU can solve a variety of linearly non-separable classification problems. However, using a single device has limitations due to the implicit single-node architecture. This paper presents a promising novel approach to neural information processing by introducing DNPUs as high-capacity neurons and moving from a single-neuron to a multi-neuron framework. By implementing and testing a small multi-DNPU classifier in hardware, we show that feed-forward DNPU networks improve the performance of a single DNPU from 77% to 94% test accuracy on a binary classification task with concentric classes on a plane. Furthermore, motivated by the integration of DNPUs with memristor arrays, we study the potential of using DNPUs in combination with linear layers. We show by simulation that a single-layer MNIST classifier with only 10 DNPU nodes achieves over 96% test accuracy. Our results pave the way towards hardware neural-network emulators that offer atomic-scale information processing with low latency and energy consumption.
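
To make the architecture concrete, below is a minimal simulation sketch in PyTorch. It is an illustration, not the authors' implementation: the frozen random MLP standing in for a device's nonlinear input-output surface, the class names (SurrogateDNPU, DNPULayer), the number of data and control terminals, and all layer sizes are assumptions. It mirrors the second setup described in the abstract: ten DNPU-like nodes behind a linear input layer (the role a memristor array could play) and before a linear readout, with only the control voltages and the linear layers being trained.

```python
import torch
import torch.nn as nn

class SurrogateDNPU(nn.Module):
    """Hypothetical stand-in for one dopant-network node.

    A small frozen MLP plays the role of the device's fixed,
    nonlinear input-output surface; the learnable `control`
    voltages select which function on that surface the node computes.
    """
    def __init__(self, n_inputs: int, n_controls: int = 5):
        super().__init__()
        self.surface = nn.Sequential(            # frozen nonlinear surface
            nn.Linear(n_inputs + n_controls, 32),
            nn.Tanh(),
            nn.Linear(32, 1),
        )
        for p in self.surface.parameters():      # device physics is fixed
            p.requires_grad_(False)
        # only the control voltages of this node are trained
        self.control = nn.Parameter(torch.zeros(n_controls))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        c = self.control.expand(x.shape[0], -1)  # broadcast controls over batch
        return self.surface(torch.cat([x, c], dim=1))

class DNPULayer(nn.Module):
    """A bank of DNPU-like nodes in parallel, plus a linear readout."""
    def __init__(self, n_nodes: int = 10, in_features: int = 784, n_classes: int = 10):
        super().__init__()
        # assumed design: each node receives a learned 3-dimensional linear
        # projection of the image (the part a memristor array would implement)
        self.project = nn.Linear(in_features, n_nodes * 3)
        self.nodes = nn.ModuleList(SurrogateDNPU(3) for _ in range(n_nodes))
        self.readout = nn.Linear(n_nodes, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.project(x.flatten(1)).view(x.shape[0], len(self.nodes), 3)
        outs = [node(z[:, i]) for i, node in enumerate(self.nodes)]
        return self.readout(torch.cat(outs, dim=1))

model = DNPULayer()
logits = model(torch.randn(4, 1, 28, 28))  # e.g. a batch of 4 MNIST-sized images
print(logits.shape)                        # torch.Size([4, 10])
```

In a hardware setting along the lines the paper describes, the frozen random surface would be replaced by a differentiable surrogate model trained on device measurements, and the learned control voltages would then be transferred to the physical terminals.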
