
Towards Design Methodology of Efficient Fast Algorithms for Accelerating Generative Adversarial Networks on FPGAs

11/15/2019
by Jung-Woo Chang, et al.

Generative adversarial networks (GANs) have shown excellent performance in image and speech applications. GANs create impressive data primarily through a new type of operator called deconvolution (DeConv), also known as transposed convolution. To implement the DeConv layer in hardware, the state-of-the-art accelerator reduces its high computational complexity via a DeConv-to-convolution (Conv) conversion that yields identical results. However, this conversion increases the number of filters. Recently, Winograd minimal filtering has been recognized as an effective way to reduce the arithmetic complexity and improve the resource efficiency of Conv layers. In this paper, we propose an efficient Winograd DeConv accelerator that combines these two orthogonal approaches on FPGAs. First, we introduce a new class of fast algorithms for DeConv layers based on Winograd minimal filtering. Since the Winograd-domain filters exhibit regular sparse patterns, we further reduce the computational complexity by skipping zero weights. Second, we propose a new dataflow that prevents resource underutilization by reorganizing the filter layout in the Winograd domain. Finally, we propose an efficient architecture for Winograd DeConv by designing the line buffer and exploring the design space. Experimental results on various GANs show that our accelerator achieves a 1.78x to 8.38x speedup over state-of-the-art DeConv accelerators.
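To make the Winograd building block concrete, here is a minimal NumPy sketch (not from the paper) of the standard Winograd F(2,3) minimal filtering algorithm that such accelerators use to tile Conv layers: it produces 2 outputs of a 3-tap 1D convolution using 4 multiplications instead of 6. The transform matrices BT, G, and AT are the standard ones from Lavin and Gray; the function name winograd_f23 is illustrative, and this is the generic Conv kernel rather than the paper's DeConv-specific variant.

    import numpy as np

    # Standard Winograd F(2,3) transform matrices (Lavin & Gray, 2016).
    BT = np.array([[1,  0, -1,  0],
                   [0,  1,  1,  0],
                   [0, -1,  1,  0],
                   [0,  1,  0, -1]], dtype=float)   # input transform
    G  = np.array([[1.0,  0.0, 0.0],
                   [0.5,  0.5, 0.5],
                   [0.5, -0.5, 0.5],
                   [0.0,  0.0, 1.0]])               # filter transform
    AT = np.array([[1, 1,  1,  0],
                   [0, 1, -1, -1]], dtype=float)    # output transform

    def winograd_f23(d, g):
        # d: 4-element input tile, g: 3-tap filter.
        # Returns 2 outputs of the sliding 3-tap filter
        # (deep-learning-style convolution, i.e., no kernel flip).
        U = G @ g            # filter in the Winograd domain (4 values)
        V = BT @ d           # input tile in the Winograd domain (4 values)
        return AT @ (U * V)  # 4 elementwise multiplies, then inverse transform

    # Sanity check against the direct sliding-window computation.
    d = np.random.randn(4)
    g = np.random.randn(3)
    direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                       d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
    assert np.allclose(winograd_f23(d, g), direct)

For the DeConv side, the DeConv-to-Conv conversion mentioned above decomposes a stride-s DeConv into s*s smaller Conv layers whose interleaved outputs reproduce the DeConv result; each of those Convs can then be tiled and evaluated with a transform like the one above.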
