Randomness In Neural Network Training: Characterizing The Impact of Tooling

06/22/2021
by   Donglin Zhuang, et al.

The quest for determinism in machine learning has disproportionately focused on characterizing the impact of noise introduced by algorithmic design choices. In this work, we address a less well understood and studied question: how does the choice of tooling introduce randomness into deep neural network training? We conduct large-scale experiments across different types of hardware, accelerators, state-of-the-art networks, and open-source datasets to characterize how tooling choices contribute to the level of non-determinism in a system, the impact of that non-determinism, and the cost of eliminating different sources of noise. Our findings are surprising and suggest that the impact of non-determinism is nuanced. While top-line metrics such as top-1 accuracy are not noticeably affected, model performance on certain parts of the data distribution is far more sensitive to the introduction of randomness. Our results suggest that deterministic tooling is critical for AI safety. However, we also find that the cost of ensuring determinism varies dramatically between neural network architectures and hardware types, with overhead of up to 746%, 241%, and 196% on a spectrum of widely used GPU accelerator architectures, relative to non-deterministic training. The source code used in this paper is available at https://github.com/usyd-fsalab/NeuralNetworkRandomness.
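One way to see the distinction the abstract draws — algorithmic randomness, which seeding can control, versus tooling-level noise, which it cannot — is a run-to-run comparison. The toy SGD loop below is a minimal illustrative sketch (the `train_linear` function, its model, and its hyperparameters are our own invention, not taken from the paper): it seeds every source of algorithmic randomness and trains twice. On deterministic tooling, the two runs match bitwise; non-deterministic kernels (e.g., atomics-based GPU reductions) would break the equality even with identical seeds.

```python
import random

def train_linear(seed, steps=200, lr=0.05):
    """Tiny SGD fit of y = 2x + 1 from noisy samples; all randomness is seeded."""
    rng = random.Random(seed)          # single, explicitly seeded RNG
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)  # random initialization
    for _ in range(steps):
        x = rng.uniform(-1, 1)                     # random data sampling
        y = 2.0 * x + 1.0 + rng.gauss(0, 0.01)     # label noise
        err = (w * x + b) - y
        w -= lr * err * x                          # gradient step for w
        b -= lr * err                              # gradient step for b
    return w, b

# Two runs with the same seed: bitwise-identical on deterministic tooling.
run_a = train_linear(seed=42)
run_b = train_linear(seed=42)
print(run_a == run_b)  # True — pure-Python float ops are deterministic
```

With real accelerator stacks, seeding alone is not enough: frameworks typically also require opting into deterministic kernels (at the cost overheads the paper quantifies), which is precisely the tooling dimension this work measures.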

