Error Analysis and Improving the Accuracy of Winograd Convolution for Deep Neural Networks

03/29/2018
by Barbara Barabasz, et al.

Modern deep neural networks (DNNs) spend a large fraction of their execution time computing convolutions. Winograd's minimal algorithm for small convolutions can greatly reduce the number of arithmetic operations. However, the large reduction in floating point (FP) operations can come at the cost of poor numerical accuracy. In this paper we analyse the FP error of these algorithms and prove bounds on it. We show that the "modified" algorithm yields significantly better accuracy, and we propose several methods for further reducing FP error. Minimal convolution algorithms depend on the selection of several numeric points that have a large impact on the accuracy of the result. We propose a canonical evaluation ordering, based on Huffman coding, that reduces both the FP error and the size of the search space. We study point selection experimentally, find empirically good points, and identify the main factors associated with point sets that yield low error. In addition, we explore other methods of reducing FP error, including mixed-precision convolution and pairwise addition across DNN channels. Together, these methods significantly reduce FP error for a given block size, which in turn permits larger block sizes and less computation.
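
For concreteness, here is a minimal sketch (not from the paper; the function name and check harness are illustrative) of the classic 1D Winograd algorithm F(2,3), a standard small instance of the family the paper analyses. It produces two convolution outputs from a 3-tap filter using 4 multiplications instead of the 6 required by the direct method; the divisions by 2 in the transformed filter values are one source of the FP error the paper studies.

import numpy as np

def winograd_f23(d, g):
    # F(2,3): two outputs of a 1D convolution (correlation) of a
    # 4-element input tile d with a 3-tap filter g, using 4 multiplies.
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # The four Winograd multiplications: each combines a transformed
    # input value with a transformed filter value.
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    # Output transform: recombine the products into the two outputs.
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

# Check against the direct method: results agree mathematically,
# but are generally not bitwise equal in floating point.
d = np.random.randn(4).astype(np.float32)
g = np.random.randn(3).astype(np.float32)
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), direct)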

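The pairwise-addition idea can be illustrated in isolation. The paper applies it across the channel dimension of the convolution sum; the standalone sketch below (names and sizes are assumptions, not from the paper) contrasts sequential accumulation, whose worst-case rounding error grows linearly in the number of summands, with pairwise accumulation, whose error grows only logarithmically.

import numpy as np

def pairwise_sum(xs):
    # Recursive pairwise summation: split, sum halves, add the results.
    # Rounding error grows O(log n) rather than the O(n) of a
    # left-to-right sequential sum.
    n = len(xs)
    if n == 1:
        return xs[0]
    mid = n // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])

# Hypothetical per-channel partial results for one output element,
# accumulated in float32; float64 serves as the accuracy reference.
per_channel = np.random.randn(1024).astype(np.float32)
seq = np.float32(0.0)
for v in per_channel:
    seq += v                      # sequential accumulation
pair = pairwise_sum(list(per_channel))
ref = np.sum(per_channel.astype(np.float64))
print(abs(np.float64(seq) - ref), abs(np.float64(pair) - ref))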