Benchmarking Inference Performance of Deep Learning Models on Analog Devices
Deep learning models implemented on analog hardware are promising for computation- and energy-constrained systems such as edge computing devices. However, the analog nature of these devices and their many associated noise sources alter the values of the weights in trained deep learning models deployed on them. In this study, a systematic evaluation of the inference performance of popular trained deep learning models for image classification deployed on analog devices is carried out, with additive white Gaussian noise added to the weights of the trained models during inference. It is observed that deeper models, and models with more redundancy in their design such as VGG, are generally more robust to the noise. However, performance is also affected by the design philosophy of the model, its detailed structure, the exact machine learning task, and the dataset.
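The abstract does not give implementation details, but the evaluation protocol it describes can be sketched simply: perturb every weight of a trained model with zero-mean Gaussian noise and measure inference accuracy. Below is a minimal sketch in PyTorch; the helper name `add_weight_noise`, the noise scale `sigma`, and the choice of VGG-16 as the example model are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of weight-noise injection for inference benchmarking,
# assuming PyTorch/torchvision. A single global `sigma` is used here;
# the paper's exact noise parameterization is not specified in the abstract.
import copy
import torch
from torchvision import models

def add_weight_noise(model, sigma):
    """Return a copy of `model` whose parameters are perturbed by
    additive white Gaussian noise with standard deviation `sigma`,
    emulating weight errors on an analog accelerator."""
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for param in noisy.parameters():
            param.add_(torch.randn_like(param) * sigma)
    return noisy

# Example: perturb a pretrained VGG-16 and run inference on a dummy batch.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
noisy_model = add_weight_noise(model, sigma=0.01)
with torch.no_grad():
    logits = noisy_model(torch.randn(1, 3, 224, 224))
print(logits.argmax(dim=1))
```

In practice such benchmarks often scale the noise per layer (e.g., relative to each layer's weight standard deviation) rather than using one global `sigma`, and average accuracy over several noise draws; either variant fits the protocol the abstract describes.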