Numerical Stability of DeepGOPlus Inference

12/13/2022
by Inés Gonzalez Pepe, et al.

Convolutional neural networks (CNNs) are among the most widely used neural networks and achieve state-of-the-art performance on many problems. Although originally developed for computer vision, CNNs work well with any data that has a spatial structure, not just images, and have been applied across many fields. However, recent work has highlighted that CNNs, like other deep learning models, are sensitive to noise injection, which can jeopardise their performance. This paper quantifies the numerical uncertainty arising from floating-point arithmetic inaccuracies in the inference stage of DeepGOPlus, a CNN that predicts protein function, in order to assess its numerical stability. In addition, it investigates the possibility of using reduced-precision floating-point formats for DeepGOPlus inference to reduce memory consumption and latency. This is achieved with Monte Carlo Arithmetic, a technique that experimentally quantifies floating-point operation errors, and VPREC, a tool that emulates computations with customizable floating-point precision formats. The focus is on the inference stage because it is the main deliverable of the DeepGOPlus model: it will be deployed across diverse environments and is therefore the most likely to be exposed to noise. Furthermore, studies have shown that the inference stage is the part of the model most amenable to reduced precision. Overall, we find that the numerical uncertainty of the DeepGOPlus CNN is very low at its current precision, but that the model cannot currently be reduced to a lower precision that would make it more lightweight.
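The two techniques can be illustrated with a small, self-contained sketch. The snippet below is not the instrumentation used in the study; it is a simplified NumPy approximation in which `mca_perturb`, `vprec_round`, and `significant_digits` are hypothetical helpers: one injects a random relative perturbation at a chosen virtual precision (the spirit of Monte Carlo Arithmetic), one rounds mantissas to emulate a reduced-precision format (the spirit of VPREC), and one estimates the number of significant digits across repeated stochastic runs.

```python
import numpy as np

def mca_perturb(x, virtual_precision=53, rng=None):
    """Inject a random relative perturbation at 2^(1 - virtual_precision),
    roughly mimicking Monte Carlo Arithmetic's random rounding."""
    rng = rng or np.random.default_rng()
    eps = 2.0 ** (1 - virtual_precision)
    noise = rng.uniform(-0.5, 0.5, size=np.shape(x))
    return x * (1.0 + eps * noise)

def vprec_round(x, mantissa_bits=10):
    """Emulate a reduced-precision format by rounding the mantissa to
    `mantissa_bits` bits (a rough stand-in for VPREC-style emulation)."""
    mantissa, exponent = np.frexp(x)
    scale = 2.0 ** mantissa_bits
    return np.ldexp(np.round(mantissa * scale) / scale, exponent)

def significant_digits(samples):
    """Estimate significant decimal digits across stochastic runs:
    s = -log10(sigma / |mu|)."""
    samples = np.asarray(samples, dtype=float)
    mu, sigma = samples.mean(), samples.std(ddof=1)
    return np.inf if sigma == 0.0 else -np.log10(sigma / abs(mu))

# Toy stand-in for an inference step: a dot product evaluated repeatedly
# under MCA-style noise, then once under emulated reduced precision.
rng = np.random.default_rng(0)
w = rng.normal(size=1024)   # "weights"
a = rng.normal(size=1024)   # "activations"

mca_runs = [np.sum(mca_perturb(w * a, virtual_precision=53, rng=rng))
            for _ in range(30)]
print(f"double precision: ~{significant_digits(mca_runs):.1f} significant digits")

low_precision = np.sum(vprec_round(w, 10) * vprec_round(a, 10))
print(f"emulated 10-bit mantissa result: {low_precision:.6f} "
      f"(exact: {np.dot(w, a):.6f})")
```

In the study itself, such perturbations are applied to the floating-point operations of the compiled inference code rather than in Python, but the significant-digit estimate above follows the same sigma-over-mu logic commonly paired with Monte Carlo Arithmetic.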

