Improving accuracy of Winograd convolution for DNNs

03/29/2018
by Barbara Barabasz, et al.

Modern deep neural networks (DNNs) spend a large amount of their execution time computing convolutions. Winograd's minimal algorithm for small convolutions can greatly reduce the number of arithmetic operations. However, the large reduction in floating point (FP) operations in these algorithms can result in significantly reduced FP accuracy of the result. In this paper we propose several methods for reducing the FP error of these algorithms. Minimal convolution algorithms depend on the selection of several numeric points that have a large impact on the accuracy of the result. Some points are known to be better than others, but there is no systematic method for selecting points for small convolutions. We show that there is a relatively small number of important cases for DNN convolution, which can be searched empirically. We compare both standard and modified versions of the Winograd algorithm. Further, we demonstrate that both the ordering and the values of the points are important, and we propose a canonical evaluation ordering, based on Huffman coding, that reduces both FP error and the size of the search space. We find that good point selections depend on the values of the points themselves and on symmetries between different points. We show that sets of points with symmetric groups give better results. In addition, we explore other methods to reduce FP error, including mixed-precision convolution and pairwise addition across DNN channels. Using our methods we can significantly reduce FP error for a given Winograd convolution block size, which allows larger block sizes and reduced computation.
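To make the role of the numeric points concrete, the following is a minimal sketch (not the authors' implementation) of 1D Winograd F(2,3) convolution using the standard point set {0, 1, -1, ∞}; the transform matrices BT, G, and AT are the well-known ones for this point set, and the `pairwise_channel_sum` helper is a hypothetical illustration of pairwise addition across channels. The choice of points determines the constants in these matrices, and hence the FP rounding behaviour the abstract refers to.

```python
import numpy as np

# Standard F(2,3) transforms derived from the points 0, 1, -1, infinity.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float32)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]], dtype=np.float32)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float32)

def winograd_f23(d, g):
    """Two outputs of a 3-tap convolution over a 4-element input tile."""
    U = G @ g      # transformed filter (precomputed once in practice)
    V = BT @ d     # transformed input tile
    M = U * V      # 4 elementwise multiplies instead of 6
    return AT @ M  # inverse transform back to 2 outputs

def pairwise_channel_sum(x):
    """Illustrative pairwise summation over the channel axis: rounding error
    grows roughly with log(C) rather than C for a sequential running sum."""
    if x.shape[0] == 1:
        return x[0]
    mid = x.shape[0] // 2
    return pairwise_channel_sum(x[:mid]) + pairwise_channel_sum(x[mid:])

d = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
g = np.array([0.5, 1.0, -0.25], dtype=np.float32)
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), direct)  # results agree up to FP rounding
```

For larger tiles (e.g. F(4,3) or F(6,3)) more points are needed, the transform constants become less benign, and the FP error grows; this is the regime where point selection, evaluation ordering, mixed precision, and pairwise channel summation matter.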
