Impact of L1 Batch Normalization on Analog Noise Resistant Property of Deep Learning Models

05/07/2022
by Omobayode Fagbohungbe, et al.

Analog hardware has recently become a popular choice for machine learning on resource-constrained devices due to its fast execution and energy efficiency. However, the noise inherent in analog hardware and its negative impact on deployed deep neural network (DNN) models limit their usage. This performance degradation calls for the design of novel DNN models with an excellent noise-resistant property, leveraging the properties of the fundamental building blocks of DNN models. In this work, the use of the L1 or TopK BatchNorm type, a fundamental DNN building block, to design DNN models with an excellent noise-resistant property is proposed. Specifically, a systematic study is carried out by training DNN models with the L1/TopK BatchNorm type and comparing their performance with that of DNN models with the L2 BatchNorm type. The noise-resistant property of the resulting models is tested by injecting additive noise into the model weights and evaluating the inference accuracy of the noisy models. The results show that the L1 and TopK BatchNorm types have an excellent noise-resistant property, and that no performance is sacrificed by changing the BatchNorm type from L2 to L1/TopK.
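As a concrete illustration of the two ingredients described above, the minimal PyTorch-style sketch below implements (i) a batch-normalization layer whose spread statistic is the L1 mean absolute deviation rather than the L2 standard deviation, and (ii) a weight-noise evaluation loop that injects additive Gaussian noise into the model weights before measuring inference accuracy. The class and function names (L1BatchNorm1d, accuracy_under_weight_noise), the sqrt(pi/2) rescaling factor, and the scaling of the noise by each weight tensor's own standard deviation are illustrative assumptions, not the authors' released code.

```python
import copy
import math

import torch
import torch.nn as nn


class L1BatchNorm1d(nn.Module):
    """BatchNorm variant that normalizes by the L1 statistic (mean absolute
    deviation) instead of the L2 standard deviation.

    Assumed formulation: the sqrt(pi/2) factor rescales the mean absolute
    deviation so it approximates the standard deviation for roughly
    Gaussian activations. Running statistics are omitted for brevity.
    """

    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):                        # x: (batch, num_features)
        mu = x.mean(dim=0, keepdim=True)         # per-feature batch mean
        # L1 spread: mean absolute deviation, rescaled toward the L2 std.
        s = math.sqrt(math.pi / 2) * (x - mu).abs().mean(dim=0, keepdim=True)
        x_hat = (x - mu) / (s + self.eps)
        return self.gamma * x_hat + self.beta


def accuracy_under_weight_noise(model, loader, noise_std, device="cpu"):
    """Inject additive Gaussian noise into every weight of a copy of the
    model, then report its classification accuracy on the given loader.
    Scaling the noise by each tensor's own std is an assumed noise model."""
    noisy = copy.deepcopy(model).to(device).eval()
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(torch.randn_like(p) * noise_std * p.std(unbiased=False))
        correct = total = 0
        for xb, yb in loader:
            pred = noisy(xb.to(device)).argmax(dim=1)
            correct += (pred == yb.to(device)).sum().item()
            total += yb.numel()
    return correct / total
```

Sweeping noise_std and comparing the resulting accuracy curves for models built with L1BatchNorm1d against the standard nn.BatchNorm1d reproduces, at a sketch level, the comparison described in the abstract; a TopK variant would average only the k largest absolute deviations per feature rather than all of them.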

