Improving accuracy of Winograd convolution for DNNs
Modern deep neural networks (DNNs) spend a large amount of their execution time computing convolutions. Winograd's minimal algorithm for small convolutions can greatly reduce the number of arithmetic operations. However, a large reduction in floating point (FP) operations in these algorithms can result in significantly reduced FP accuracy of the result. In this paper we propose several methods for reducing the FP error of these algorithms. Minimal convolution algorithms depend on the selection of several numeric points that have a large impact on the accuracy of the result. Some points are known to be better than others, but there is no systematic method for selecting points for small convolutions. We show that there are a relatively small number of important cases for DNN convolution, which can be searched empirically. We compare both standard and modified versions of the Winograd algorithm. Further, we demonstrate that both the ordering and the values of the points are important, and we propose a canonical evaluation ordering, based on Huffman coding, that reduces both the FP error and the size of the search space. We find that good point selections depend on the values of the points themselves and on symmetries between different points, and we show that sets of points with symmetric groups give better results. In addition, we explore other methods to reduce FP error, including mixed-precision convolution and pairwise addition across DNN channels. Using our methods we can significantly reduce FP error for a given Winograd convolution block size, which allows larger block sizes and reduced computation.
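As an illustrative sketch (not the paper's implementation), the following NumPy snippet shows the classic one-dimensional Winograd F(2,3) convolution. It uses a common point selection, {0, 1, -1} plus the point at infinity; BT, G, and AT are the standard transform matrices for that choice. Different point selections and evaluation orderings change these matrices and, as the abstract argues, the FP error of the result.

```python
import numpy as np

# Transform matrices for F(2,3), derived from the interpolation
# points {0, 1, -1} plus the point at infinity (a common choice).
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float32)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]], dtype=np.float32)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float32)

def winograd_f23(d, g):
    """Two outputs of a 3-tap correlation via Winograd F(2,3):
    4 multiplications instead of the 6 needed by direct evaluation."""
    return AT @ ((G @ g) * (BT @ d))

rng = np.random.default_rng(0)
d = rng.standard_normal(4).astype(np.float32)   # input tile
g = rng.standard_normal(3).astype(np.float32)   # filter
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), direct)  # agree up to FP rounding
```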
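The pairwise addition mentioned above can be sketched generically as follows; this illustrates the technique rather than the paper's code. Sequentially accumulating n terms lets rounding error grow roughly linearly in n, whereas a balanced pairwise (tree) reduction grows only logarithmically, which is why applying it across DNN channels reduces FP error.

```python
import numpy as np

def pairwise_sum(x):
    """Tree-shaped reduction: rounding error grows ~O(log n) vs ~O(n)
    for a sequential left-to-right sum."""
    n = len(x)
    if n <= 2:
        return x.sum()
    return pairwise_sum(x[:n // 2]) + pairwise_sum(x[n // 2:])

vals = np.random.default_rng(1).standard_normal(1 << 14).astype(np.float32)
seq = np.float32(0)
for v in vals:              # sequential float32 accumulation
    seq += v
tree = pairwise_sum(vals)   # pairwise float32 accumulation
exact = vals.astype(np.float64).sum()  # high-precision reference
print(abs(np.float64(seq) - exact), abs(np.float64(tree) - exact))
```

A mixed-precision variant of the same idea would accumulate the channel sums in float64 and round the final result back to float32, trading some memory bandwidth for accuracy.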