Towards Design Methodology of Efficient Fast Algorithms for Accelerating Generative Adversarial Networks on FPGAs

11/15/2019
by Jung-Woo Chang, et al.

Generative adversarial networks (GANs) have shown excellent performance in image and speech applications. GANs create impressive data primarily through a new type of operator called deconvolution (DeConv), also known as transposed convolution. To implement the DeConv layer in hardware, the state-of-the-art accelerator reduces its high computational complexity via a DeConv-to-Conv conversion that produces identical results. However, this conversion increases the number of filters. Recently, Winograd minimal filtering has been recognized as an effective way to reduce the arithmetic complexity and improve the resource efficiency of the Conv layer. In this paper, we propose an efficient Winograd DeConv accelerator that combines these two orthogonal approaches on FPGAs. Firstly, we introduce a new class of fast algorithms for DeConv layers based on Winograd minimal filtering. Since Winograd-domain filters exhibit regular sparse patterns, we further reduce the computational complexity by skipping zero weights. Secondly, we propose a new dataflow that prevents resource underutilization by reorganizing the filter layout in the Winograd domain. Finally, we propose an efficient architecture for implementing Winograd DeConv by designing the line buffer and exploring the design space. Experimental results on various GANs show that our accelerator achieves up to 1.78x to 8.38x speedup over the state-of-the-art DeConv accelerators.
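To make the abstract concrete, here is a minimal sketch of the standard 1-D Winograd F(2,3) minimal-filtering identity that this line of work builds on. This is not the paper's accelerator design; the input and filter values are hypothetical, and the transform matrices B^T, G, A^T are the textbook F(2,3) choices.

```python
# Sketch of Winograd F(2,3): compute 2 outputs of a 3-tap filter
# from a 4-element input tile with 4 multiplies instead of 6.
# Not the paper's hardware design; values below are hypothetical.
import numpy as np

# Standard F(2,3) transform matrices (Winograd / Lavin-Gray form).
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])                # filter transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)    # output transform

def winograd_f23(d, g):
    """Y = A^T [(G g) * (B^T d)], elementwise product in the Winograd domain."""
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])  # input tile
g = np.array([1.0, 1.0, 1.0])       # 3-tap filter

# Reference: direct sliding correlation over the same tile.
direct = np.array([g @ d[i:i + 3] for i in range(2)])
assert np.allclose(winograd_f23(d, g), direct)  # both give [6., 9.]
```

Note that G maps the 3-tap filter into the 4-element Winograd domain; the zeros in G, B^T, and A^T are the kind of regular sparsity in the Winograd-domain filters that the paper exploits by skipping zero weights.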


Related research

- Accelerating Deconvolution on Unmodified CNN Accelerators for Generative Adversarial Networks -- A Software Approach (07/03/2019)
- GANAX: A Unified MIMD-SIMD Acceleration for Generative Adversarial Networks (05/10/2018)
- RED: A ReRAM-based Deconvolution Accelerator (07/05/2019)
- Sparse Winograd Convolutional neural networks on small-scale systolic arrays (10/03/2018)
- Efficient Subsampling for Generating High-Quality Images from Conditional Generative Adversarial Networks (03/20/2021)
- Efficient and Accurate Gradients for Neural SDEs (05/27/2021)
- GANchors: Realistic Image Perturbation Distributions for Anchors Using Generative Models (06/01/2019)
