I Introduction
Generative adversarial networks (GANs) and fully convolutional networks (FCNs) have been widely explored for their superior performance on complicated image tasks. For example, GANs are used to reconstruct 3D models from 2D images [1] and to recover corrupted images [2]. FCNs are applied in semantic segmentation [3] and object detection [4]. Deconvolution layers are important for these networks to carry out the upsampling from low-resolution to high-resolution images. As most current platforms are mainly optimized for regular convolution, deconvolutional computation suffers from low efficiency due to the additional operations involved. Designing an efficient accelerator catering to deconvolutional computation is therefore of considerable significance and is taken as the focus of this work.
Among the existing neural network accelerators, processing-in-memory (PIM), which moves the computation close to or even within the memory elements, demonstrates great potential. Resistive RAM (ReRAM) has been taken as a competitive technology for PIM implementation due to the low-energy and high-efficiency vector-matrix multiplications performed on its crossbar structure. Various ReRAM-based accelerators [5, 6, 7, 8, 9, 10] have been presented for fast and efficient convolution, showing great advantages of ReRAM over CMOS-based counterparts. Unfortunately, the unique computation patterns of deconvolution make its implementation on existing ReRAM-based accelerators very challenging. For example, the common zero-padding in deconvolution inserts plenty of zeros into the input feature maps before convolution, resulting in massive redundant operations. The padding-free deconvolution excludes the zero-inserting but involves extra operations, i.e., addition and cropping after convolution. Though padding-free is more friendly to CMOS-based accelerators [11], the add-on operations lead to modified circuits on ReRAM-based accelerators.
This work aims to develop an efficient ReRAM-based deconvolution accelerator. We first analyze the efficiency of the zero-padding and padding-free deconvolution algorithms on existing ReRAM-based platforms. Considering the inefficiency in performing zero-padding and the high overhead induced by padding-free, we propose RED, a ReRAM-based accelerator tailored for deconvolutional computation. Our approach integrates optimizations on data mapping and data flow. More specifically, the pixel-wise mapping dramatically reduces the redundancy caused by zero-inserting operations, and the zero-skipping data flow further elevates the computation parallelism without add-on periphery circuits.
We evaluated the power, latency, and area of RED when performing the deconvolutional layers in GANs and FCNs, and compared it with state-of-the-art ReRAM-based accelerators of the zero-padding design and the padding-free design [12]. Experimental results show that RED achieves 3.69×–31.15× speedup and 8%–88.36% energy consumption reduction, with a 22.14% increase in design area.
This paper is organized as follows. Section II introduces the background knowledge, including ReRAM-based accelerators and the deconvolutional computation. Section III elaborates the principle and implementation of RED. In Section IV, we evaluate RED in terms of power, latency, and area, and compare it with state-of-the-art ReRAM-based counterparts. Finally, we conclude the paper in Section V.
II Preliminary
II-A ReRAM-based CNN Accelerator
As an emerging memory technology, ReRAM in the crossbar structure can effectively execute vector-matrix multiplication operations and has gained significant attention. As illustrated in Fig. 1(a), the elements of a weight matrix are represented as the conductance of the ReRAM cells located at the cross-points of the wordlines and bitlines. During operation, an input vector in the form of voltage spikes enters the crossbar along the wordlines (a.k.a. rows), and the currents flowing out of the bitlines (a.k.a. columns) denote the computed output vector of the vector-matrix multiplication. The integrate-and-fire circuit converts the output currents into digital output data, which are then summed up by the shift adder.
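The crossbar's analog operation reduces to a vector-matrix multiply: the current on each bitline is the sum, over all wordlines, of input voltage times cell conductance. A functional sketch only (illustrative conductance values; no ADC or integrate-and-fire behavior is modeled):

```python
# 3 wordlines x 2 bitlines of illustrative cell conductances
G = [[0.1, 0.2],
     [0.3, 0.4],
     [0.5, 0.6]]
V = [1.0, 0.5, 0.25]        # input voltages applied on the wordlines

# bitline currents: I[c] = sum over rows r of V[r] * G[r][c]
I_out = [sum(V[r] * G[r][c] for r in range(len(V)))
         for c in range(len(G[0]))]
```

Each column of `G` plays the role of one stored weight vector, so one crossbar read computes all dot products in parallel.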
By leveraging the ReRAM structure, various CNN accelerators have been proposed for inference or training [5, 8]. The kernel in a convolution layer is a four-dimensional tensor, and its mapping onto the crossbar requires a complicated design [8, 9]. Fig. 1(b) illustrates an example of the kernel mapping design in a ReRAM-based accelerator: each filter is unrolled across its channels into a one-dimensional vector and stored in one column.
II-B Deconvolutional Computation
Fig. 2 illustrates two kinds of deconvolution algorithms: zero-padding and padding-free. Similar to convolutional computation, the stride s and padding are two hyperparameters of the deconvolutional computation. Suppose that the input data consists of a series of feature maps and is an H_in x W_in x C tensor. Here, H_in and W_in are respectively the height and width of each input feature map, and C is the number of channels. The convolution kernel contains a set of filters and is represented by a K x K x C x M tensor, where M is the number of filters, which is equivalent to the number of output feature maps. Like the input data, the output is composed of M feature maps, each of which is H_out x W_out. In this work, each element in the input and output is referred to as a pixel. Deconvolution is an upsampling operation, and therefore H_out > H_in and W_out > W_in.
The zero-padding deconvolution (Algorithm 1) includes two steps: a) Padding: insert s - 1 zeros between adjacent pixels in the input feature maps (the padded input is denoted as I_pad); b) Convolution: perform regular convolution on I_pad with the kernels. As can be seen, zero-padding incurs massive redundant multiplications due to the zero-valued input operands in the padded input feature maps.
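A minimal single-channel sketch of the zero-padding algorithm, assuming stride s and the usual K - 1 border padding before the convolution; all names are illustrative:

```python
def zero_pad(inp, s):
    """Step a): insert s-1 zeros between adjacent input pixels."""
    h, w = len(inp), len(inp[0])
    ph, pw = (h - 1) * s + 1, (w - 1) * s + 1
    padded = [[0] * pw for _ in range(ph)]
    for i in range(h):
        for j in range(w):
            padded[i * s][j * s] = inp[i][j]
    return padded

def conv2d(inp, ker, pad):
    """Step b): plain stride-1 convolution with `pad` zeros on the border."""
    k = len(ker)
    h, w = len(inp), len(inp[0])
    g = [[0] * (w + 2 * pad) for _ in range(h + 2 * pad)]
    for i in range(h):
        for j in range(w):
            g[i + pad][j + pad] = inp[i][j]
    oh, ow = h + 2 * pad - k + 1, w + 2 * pad - k + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(g[i + a][j + b] * ker[a][b]
                            for a in range(k) for b in range(k))
    return out

def deconv_zero_padding(inp, ker, s):
    return conv2d(zero_pad(inp, s), ker, pad=len(ker) - 1)

x = [[1, 2], [3, 4]]
k = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
y = deconv_zero_padding(x, k, s=2)
```

For a 2 x 2 input, s = 2, and a 3 x 3 kernel, the output is 5 x 5, i.e., (H_in - 1)s + K per dimension.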
Algorithm 2 describes the padding-free algorithm with the following four major steps: a) Rotation: rotate the weight kernel by 180 degrees; b) Convolution: compute the intermediate results by multiplying-and-accumulating (MAC) each input pixel with the corresponding kernel in the channel direction; c) Addition: add the overlapped pixels obtained in step b) together; and d) Cropping: crop the data at the edges of the output matrices to fit the size of the final output. Compared with zero-padding, the padding-free algorithm avoids inserting zeros into the input. However, it introduces two additional operations: addition and cropping. Previously, Xu et al. [11] successfully utilized the padding-free algorithm to adapt CMOS-based hardware for efficient deconvolutional computation. As we shall show in Section III-A, the existing ReRAM-based accelerators need substantial effort to realize these operations, incurring a large overhead.
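The four steps can be folded into one scatter-add loop in a single-channel sketch: the rotated kernel is scaled by each input pixel and the overlapping patches are accumulated. Cropping is a no-op here because the uncropped output already has the target size; names are illustrative:

```python
def rotate180(ker):
    """Step a): rotate the kernel by 180 degrees."""
    return [row[::-1] for row in ker[::-1]]

def deconv_padding_free(inp, ker, s):
    h, w, k = len(inp), len(inp[0]), len(ker)
    oh, ow = (h - 1) * s + k, (w - 1) * s + k
    out = [[0] * ow for _ in range(oh)]
    krot = rotate180(ker)
    for i in range(h):
        for j in range(w):
            # steps b) and c): one input pixel contributes a k x k patch,
            # and overlapping patches are added together
            for a in range(k):
                for b in range(k):
                    out[i * s + a][j * s + b] += inp[i][j] * krot[a][b]
    return out  # step d) cropping would trim the border if needed

x = [[1, 2], [3, 4]]
k = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
out = deconv_padding_free(x, k, s=2)
```

For the same 2 x 2 input, this produces the same 5 x 5 result as the zero-padding formulation, with no zero-valued multiplicands.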
III ReRAM-based Deconvolution Accelerator
In this section, we first analyze the computation inefficiency when mapping the two popular deconvolution algorithms onto the existing ReRAM-based accelerators. Then we elaborate the proposed RED, a ReRAM-based deconvolution accelerator design which exploits pixel-wise mapping and a zero-skipping data flow to perform highly efficient deconvolution computation. We also analyze the trade-off in RED between area overhead and parallelism.
III-A Analysis & Observations
Fig. 3(a) illustrates the zero-padding deconvolution implementation. The kernel mapping of the zero-padding deconvolution is the same as that of the standard convolutional computation described in Section II: the weights in a deconvolutional layer are mapped onto the columns of a ReRAM crossbar. In each cycle, one input vector is fed into the crossbar for computation, and each element in the produced M-element output vector corresponds to one pixel of an output feature map. As such, it takes H_out x W_out cycles to obtain the complete output feature maps in the shape of H_out x W_out x M. After the padding step (Section II-B), the input vector contains a large number of inserted zeros and becomes very sparse, inducing redundant computations on the zero pixels. Fig. 4 presents the zero redundancy ratio (i.e., the ratio of the redundant computation induced by zero-padding over the total computation) when varying the stride. Typically, the deconvolution layers in GANs (e.g., SNGAN [13]) set the stride to 2, while FCNs [3] usually prefer larger strides in deconvolution layers, such as 8, 16, or 32. As shown in Fig. 4, the zero redundancy ratio is already about 75% when s = 2 (roughly 1 - 1/s^2 of the padded pixels are zeros) and grows to nearly 100% when s = 32. The high zero redundancy ratios indicate that there are a large number of redundant operations when a ReRAM-based accelerator performs deconvolution. Note that ReGAN [12] adopted the zero-padding deconvolution but neglected the redundant operations.
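The redundancy trend in Fig. 4 can be approximated by counting the inserted zeros; a sketch assuming an h x w input with s - 1 zeros inserted between adjacent pixels (border padding ignored for simplicity):

```python
def zero_redundancy_ratio(h, w, s):
    """Fraction of padded-feature-map pixels that are inserted zeros."""
    ph, pw = (h - 1) * s + 1, (w - 1) * s + 1  # padded feature-map size
    return 1.0 - (h * w) / (ph * pw)

# redundancy for a hypothetical 16 x 16 input at typical strides
ratios = {s: zero_redundancy_ratio(16, 16, s) for s in (2, 8, 16, 32)}
```

The ratio already exceeds 70% at s = 2 and rapidly approaches 100% for the large strides used in FCNs, matching the quadratic 1 - 1/s^2 trend.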
Padding-free is an alternative deconvolution algorithm that escapes the zero redundancy. A previous study [11] showed that the padding-free deconvolution achieved up to 44.9x performance improvement on CMOS-based platforms, such as ASICs. However, our analysis shows that directly mapping the padding-free algorithm onto a ReRAM architecture might not be efficient. As depicted in Fig. 3(b), different from the zero-padding deconvolution with a compact output of M columns, the implementation of the padding-free deconvolution on a crossbar requires K^2 x M columns, since each input pixel produces a K x K output patch per filter. As the wordline/bitline driving power increases quadratically with the column number, the padding-free deconvolution incurs a much higher power consumption than the zero-padding deconvolution. What's more, the output from the crossbar is not the final result but requires further processing (addition and cropping), which leads to dedicated circuit support and extra area cost.
III-B RED Architecture
To overcome the aforementioned problems, we propose RED, a new ReRAM-based deconvolution accelerator. The design combines two orthogonal approaches: one minimizes the redundant operations induced by the padded zeros, and the other enhances the execution parallelism without additional operations. For ease of explanation, we take a deconvolutional computation with stride s = 2 and a kernel filter size of 3 x 3 as the example in the following description.
The overall RED architecture is presented in Fig. 5(a). Here, the computation of the deconvolution is executed by nine sub-crossbars (denoted as "SC"), each of which is of size C x M. For clarity, we depict the padded zeros in Fig. 5. During the deconvolutional computation, RED takes only the non-zero pixels (in purple) to form the input vectors. The partial results from the corresponding sub-crossbars are summed up to obtain the output pixels. In each clock cycle, multiple pixels of each output feature map are generated concurrently. In the following, we elaborate the details of the pixel-wise mapping in Fig. 5(b) and the zero-skipping data flow in Fig. 5(c).
III-B1 Pixel-wise Mapping
We propose the pixel-wise mapping to eliminate the high zero redundancy induced by the zero-padding algorithm. Fig. 6 explains the design principle. In the figure, the large grid refers to a padded input feature map, whose non-zero and zero pixels are denoted in purple and white, respectively. The small grid with numbers indicates the kernel with its weight locations, in which only the purple bricks are the valid weights in use. Due to the high redundancy of the padded image, only a small portion of the weights take part in each convolution operation.
Fig. 6(a)-(d) illustrates the four computation modes when sliding the kernel filter within the input feature map. For the given configuration, a kernel filter has nine weights, labeled with the numbers 1-9. Starting from the first convolutional computation in Fig. 6(a), only four weights (1, 3, 7, and 9) contribute to the calculation result. The following convolution, obtained by sliding the kernel filter horizontally one step, involves only two weights, 2 and 8, as shown in Fig. 6(b). Similarly, the computation modes in Fig. 6(c) & (d) occur when moving the kernel window down one grid from the positions in Fig. 6(a) & (b), respectively. We observe that the convolution operations in the deconvolution are repetitions of these four computation modes. Furthermore, the weights of the kernel filter are exclusive among these modes. Thus, we propose the pixel-wise mapping to execute computation modes (a)-(d) in parallel.
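The mode partition in Fig. 6 can be reproduced by a parity argument: with non-zero pixels at even padded coordinates, a weight at offset (a, b) is active only when the window origin parity makes (r + a, c + b) land on an even coordinate. A sketch for s = 2 and a 3 x 3 kernel:

```python
def active_weights(k, s, r, c):
    """Weight positions (a, b) that can meet a non-zero pixel when the
    k x k window origin has row/column parities (r, c) on the padded map."""
    return [(a, b) for a in range(k) for b in range(k)
            if (r + a) % s == 0 and (c + b) % s == 0]

# the four computation modes for s = 2 are the four origin parities
modes = {(r, c): active_weights(3, 2, r, c) for r in (0, 1) for c in (0, 1)}
```

The four modes activate 4, 2, 2, and 1 weights respectively, and together they cover all nine weights exactly once, confirming the exclusivity observation.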
We map a kernel of size K x K x C x M into K^2 sub-crossbars. Each sub-crossbar has C inputs and M outputs, and thus can be expressed as a matrix whose shape is C x M. Combining all the sub-crossbars forms a sub-crossbar tensor (SCT) whose shape is K x K x C x M, as shown in Fig. 5(b); then our pixel-wise mapping approach can be expressed as:
SCT[i][j][c][m] = W[i][j][c][m],  (1)
where i and j indicate the location of the weight in each filter, c denotes the channel of the weight filter, and m refers to the m-th weight filter of the kernel W.
Once a round of computation in the sub-crossbars is completed, we add the outputs from the corresponding SCs to obtain the final deconvolution results. Thanks to the vertical sum-up design in the existing ReRAM-based accelerators [8, 12], no extra circuitry is needed to realize the addition operations in the pixel-wise mapping.
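Eq. (1) amounts to slicing the kernel per weight position, so that the (i, j) slice becomes its own C x M sub-crossbar. A pure-Python sketch with small, hypothetical dimensions (K = 3, C = M = 2) and a made-up kernel:

```python
K, C, M = 3, 2, 2

# hypothetical kernel W[i][j][c][m]; the values are arbitrary
W = [[[[i + j + c + m for m in range(M)] for c in range(C)]
      for j in range(K)] for i in range(K)]

# Eq. (1): SCT[i][j][c][m] = W[i][j][c][m]
SCT = [[[[W[i][j][c][m] for m in range(M)] for c in range(C)]
        for j in range(K)] for i in range(K)]

def sub_crossbar_out(sc, x):
    """One sub-crossbar read: C-element input vector -> M-element output."""
    return [sum(sc[c][m] * x[c] for c in range(C)) for m in range(M)]

y = sub_crossbar_out(SCT[0][0], [1, 1])  # SC for weight position (0, 0)
```

Each SCT[i][j] is exactly one sub-crossbar, and the per-SC outputs are what the vertical sum-up circuitry accumulates into output pixels.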
III-B2 Zero-skipping Data Flow
Based on the pixel-wise mapping scheme, we develop the zero-skipping data flow, which takes only non-zero pixels as the inputs of the sub-crossbars. Fig. 5(c) illustrates the operation for the given example with s = 2 and a kernel size of 3 x 3. Accordingly, there are 9 sub-crossbars. The output vectors from the sub-crossbars in the same row are added up together, and the sub-crossbars along the same column take the same input vectors. For brevity, we use X(x, y) to denote the input vector at position (x, y); it has C pixels, each of which comes from one channel. The proposed data flow bypasses the padded zeros, so the coordinates x and y, which index the padded image, are always even numbers. In the first cycle, X(0, 0) goes to SC1, X(0, 2) is provided to SC2 and SC3, X(2, 0) is taken by SC4 and SC7, and X(2, 2) is applied to SC5, SC6, SC8, and SC9. The 9 sub-crossbars operate simultaneously, and their outputs are put together as explained above to form the final deconvolution results. In the following cycle, RED continues the computation with the next batch of non-zero pixels, as illustrated in the figure. Compared to the zero-padding deconvolution, the zero-skipping data flow increases the computation parallelism of this example by 4x.
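The zero-skipping flow can be modeled functionally: each output pixel gathers only the non-zero input pixels, each weighted by the single kernel weight that meets it (in hardware these per-weight products are the sub-crossbar outputs summed vertically). A single-channel sketch with illustrative names:

```python
def deconv_red(inp, ker, s):
    """Gather-style deconvolution touching only non-zero input pixels."""
    h, w, k = len(inp), len(inp[0]), len(ker)
    oh, ow = (h - 1) * s + k, (w - 1) * s + k
    out = [[0] * ow for _ in range(oh)]
    for p in range(oh):
        for q in range(ow):
            acc = 0
            for i in range(h):              # only original (non-zero) rows
                a = i * s + k - 1 - p       # weight row meeting pixel row i
                if not 0 <= a < k:
                    continue
                for j in range(w):
                    b = j * s + k - 1 - q   # weight column for pixel col j
                    if 0 <= b < k:
                        acc += inp[i][j] * ker[a][b]
            out[p][q] = acc
    return out

x = [[1, 2], [3, 4]]
k = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
y = deconv_red(x, k, s=2)
```

No multiplication ever sees an inserted zero, yet the result is identical to the zero-padding formulation for the same input and kernel.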
TABLE I: The benchmark deconvolutional layers.

Layer Name  Network Model  Dataset  Input Size  Output Size  Kernel Size  Stride
GAN_Deconv1  DCGAN [14]  LSUN  (8, 8, 512)  (16, 16, 256)  (5, 5, 512, 256)  2  
GAN_Deconv2  Improved GAN [15]  Cifar10  (4, 4, 512)  (8, 8, 256)  (5, 5, 512, 256)  2  
GAN_Deconv3  SNGAN [13]  Cifar10  (4, 4, 512)  (8, 8, 256)  (4, 4, 512, 256)  2  
GAN_Deconv4  SNGAN [13]  STL10  (6, 6, 512)  (12, 12, 256)  (4, 4, 512, 256)  2  
FCN_Deconv1  vocfcn8s_2x [3]  PASCAL VOC  (16, 16, 21)  (34, 34, 21)  (4, 4, 21, 21)  2  
FCN_Deconv2  vocfcn8s_8x [3]  PASCAL VOC  (70, 70, 21)  (568, 568, 21)  (16, 16, 21, 21)  8 
III-C Design Trade-off
We use the example with s = 2 to illustrate the RED design. The deconvolution with s = 2 can be decomposed into four computation modes and therefore achieves 4x speedup with RED. In general, the number of computation modes is s^2, indicating that the speedup brought by RED increases quadratically with the stride.
The kernel size usually grows with the stride. For the FCN [3] with s = 8, the kernel filter size is 16 x 16. Accordingly, 256 sub-crossbars are needed to execute all the computation modes simultaneously. More sub-crossbars cause an increase in area due to the extra wordline/bitline drivers, column muxes, shift adders, etc.
There exists a trade-off between the area and the execution speedup in RED. When the kernel filter size is too large, RED can take more time for computation in exchange for area efficiency. We can reduce the number of sub-crossbars to half of the original number by padding zeros into the input vector. Suppose that the size of the sub-crossbar tensor SCT is K x K x C x M. In the area-efficient design, the shape of SCT becomes K x (K/2) x 2C x M, i.e., pairs of sub-crossbars are stacked vertically. The data flow changes as below:
X'_n = [X_{2n}; 0] in the first cycle, and X'_n = [0; X_{2n+1}] in the second cycle,  (2)
where X'_n denotes the input vector of the n-th modified sub-crossbar (SC'_n), X_k is the original input vector of the k-th sub-crossbar in the pixel-wise mapping method, and 0 is a length-C zero vector. In this way, we employ 128 sub-crossbars to complete the 64 computation modes in two cycles when s = 8 and the kernel filter size is 16 x 16.
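The folding of Eq. (2) can be sketched by stacking two sub-crossbars and zero-masking the input over two cycles; plain dot products stand in for the crossbar, and all values are illustrative:

```python
def crossbar(matrix, vec):
    """Vector-matrix multiply standing in for one analog crossbar read."""
    cols = len(matrix[0])
    return [sum(matrix[r][c] * vec[r] for r in range(len(vec)))
            for c in range(cols)]

C = 3
sc_a = [[1, 2], [3, 4], [5, 6]]        # first sub-crossbar (C x M)
sc_b = [[7, 8], [9, 10], [11, 12]]     # second sub-crossbar (C x M)
stacked = sc_a + sc_b                  # merged 2C x M crossbar
x = [1, 1, 1]                          # illustrative C-element input vector

out_a = crossbar(stacked, x + [0] * C)  # cycle 1: top half selected
out_b = crossbar(stacked, [0] * C + x)  # cycle 2: bottom half selected
```

The zero half of each input vector deselects the other sub-crossbar's rows, so two cycles through one 2C-row crossbar reproduce the outputs of two C-row crossbars, halving the crossbar count at the cost of a 2x cycle count.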
TABLE II: The breakdown components of the array and the periphery.

Array: Computation; Wordline Driving; Bitline Driving
Periphery: Multiplexer; Decoder; Read Circuit / Integrate-and-Fire Circuit; Shift Adder
IV Experiments
This section evaluates RED in terms of performance, energy consumption, and area overhead. We compare RED with the conventional zero-padding and padding-free designs using deconvolutional layers from representative neural networks.
IV-A Experimental Setup
We modified NeuroSim+ [16] to implement the conventional zero-padding design, the padding-free design, and our proposed RED design. The simulated system ran at a fixed clock frequency and employed the 1T1R ReRAM cell structure and a 65nm technology node. The benchmark includes several deconvolutional layers from a set of representative neural network models, including GANs and FCNs. The details of the benchmarks used in our work are summarized in Table I.
The performance of the three designs on each benchmark is provided hereinafter, including latency, energy consumption, and area overhead. All results are normalized to those of the zero-padding design. For analysis purposes, we present the results by separating the contributions of the array and the periphery circuitry. Table II lists the detailed breakdown components.
We select layers from both GANs and FCNs as benchmarks in order to evaluate the performance of RED in various deconvolution applications. The deconvolution layers in GANs usually have large numbers of input and output channels. As such, the kernel size is usually large, e.g., 5 x 5 x 512 x 256 for GAN_Deconv1. In contrast, the kernel size in FCNs is usually much smaller, such as 4 x 4 x 21 x 21 in vocfcn8s. Such a difference in configuration indicates that in GANs the array resources could outweigh the peripheral circuitry, while the situation in FCNs is the opposite. This distinction between GAN and FCN deconvolution is clearly reflected in the evaluation results, as we shall present in the following.
IV-B Experimental Results & Analysis
IV-B1 Latency
Fig. 7 presents the total latency and its breakdown for the three design implementations, obtained from the following calculation:
Latency_total = Latency_array + Latency_periphery.  (3)
Fig. 7(a) shows that RED combines the advantages of both the padding-free and zero-padding designs. It attains the lowest total latency and achieves the highest speedup across all the benchmarks. The performance improvement of RED comes from two aspects: 1) it eliminates the zero redundancy in the input vectors and reduces the number of cycles; and 2) the size of its output vectors is the same as in the zero-padding design, hence the two designs have similar array latency, which is much lower than that of the padding-free design. Compared to the zero-padding design, RED achieves 3.69x-31.15x speedup.
Fig. 7(b) presents the breakdown of the execution time. Compared to the padding-free design, RED reduces the array latency because of its smaller output vectors and thus the lower latency caused by wordline driving; the padding-free design has longer array latency owing to its much longer output vectors. Compared to the zero-padding design, RED incurs 76.9%-96.8% less array and periphery latency. The zero-padding design requires many more cycles than the other two designs because of the zero redundancy added to the input vectors, which adds extensive periphery latency to the computation. When s = 2 (as in the GAN layers and FCN_Deconv1), the zero-padding design exhibits several times the periphery latency of the padding-free design and RED. Despite the fact that the padding-free design produces more array latency than the zero-padding design and RED in GANs, the zero-padding design still has a longer total latency than the padding-free design.
IV-B2 Energy
Fig. 8 presents the total energy consumption and its breakdown for the padding-free, zero-padding, and RED designs. The following equation shows the breakdown of the energy consumption:
Energy_total = Energy_array + Energy_periphery.  (4)
Experimental results demonstrate that RED outperforms the other two implementations in total energy efficiency. Owing to the prodigious energy consumption of wordline/bitline driving, the array energy of the padding-free design is conspicuously high, several times that of the other two designs. For this reason, the padding-free design consumes considerably more energy than the others when implementing GANs, where the array contributes more.
Because the total size of the ReRAM crossbar array remains the same, the zero-padding design and RED have similar array energy. The periphery energy of RED is lower than that of the zero-padding design because the input data size of each crossbar is reduced, and thereby the decoders consume less energy. In total, RED saves 8%-88.36% energy consumption compared to the zero-padding design.
IV-B3 Area
Fig. 9 shows the breakdown of the area overhead of the three designs. For brevity, we show only a handful of cases; similar area overhead is observed for all the layers of the GANs and FCNs considered in our study. Likewise, the area overhead has two parts: array area and periphery area. The results demonstrate that the three designs incur the same array area because of their identical kernel sizes. The padding-free design incurs a higher area overhead in both GANs and FCNs on account of its numerous output-related circuits. The disparity between the area overheads of the padding-free and zero-padding designs is remarkable in FCNs. The reason is the difference in the output sizes of the two designs, which scales with the kernel area: 5 x 5 = 25x in GAN_Deconv1 but 16 x 16 = 256x in FCN_Deconv2. Compared with the zero-padding design, the proposed RED introduces 22.14% higher area overhead. The overhead increases mainly because the pixel-wise mapping method adds output-related periphery circuits by splitting the crossbar apart.
V Conclusion
This work introduces RED, a high-performance and energy-efficient ReRAM-based deconvolution accelerator. Through the optimization of the mapping design and the data flow, RED eliminates redundant computations and avoids the overhead of incremental periphery circuitry. Experimental evaluation shows that RED outperforms the existing ReRAM-based accelerators for the common deconvolution algorithms, with up to 31.15x speedup and 88.36% energy consumption reduction.
Acknowledgements
This work was supported by the US Department of Energy (DOE) under grant SC0017030. Bing Li acknowledges the National Academy of Sciences (NAS), USA for awarding the NRC research fellowship.
References
 [1] Jiajun Wu et al. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In NIPS, pages 82–90, 2016.
 [2] Raymond Yeh et al. Semantic image inpainting with perceptual and contextual losses. arXiv preprint arXiv:1607.07539, 2016.
 [3] Jonathan Long et al. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440, 2015.
 [4] Shifeng Zhang et al. Single-shot refinement neural network for object detection. In IEEE CVPR, 2018.
 [5] Ping Chi et al. PRIME: A novel processing-in-memory architecture for neural network computation in ReRAM-based main memory. In SIGARCH Comput. Archit. News, volume 44, pages 27–39, 2016.
 [6] Ming Cheng et al. TIME: A training-in-memory architecture for RRAM-based deep neural networks. TCAD, 2018.
 [7] Ali Shafiee et al. ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars. SIGARCH Comput. Archit. News, 44(3):14–26, 2016.
 [8] Linghao Song et al. PipeLayer: A pipelined ReRAM-based accelerator for deep learning. In HPCA, pages 541–552, 2017.
 [9] Ximing Qiao et al. AtomLayer: A universal ReRAM-based CNN accelerator with atomic layer computation. In DAC, 2018.
 [10] Bing Li et al. ReRAM-based accelerator for deep learning. In DATE, pages 815–820, 2018.
 [11] Dawen Xu et al. FCN-engine: Accelerating deconvolutional layers in classic CNN processors. In ICCAD, 2018.
 [12] Fan Chen et al. ReGAN: A pipelined ReRAM-based accelerator for generative adversarial networks. In ASP-DAC, 2018.
 [13] Takeru Miyato et al. Spectral normalization for generative adversarial networks. arXiv:1802.05957, 2018.
 [14] Alec Radford et al. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015.
 [15] Tim Salimans et al. Improved techniques for training gans. In NIPS, pages 2234–2242, 2016.
 [16] Pai-Yu Chen et al. NeuroSim+: An integrated device-to-algorithm framework for benchmarking synaptic devices and array architectures. In IEDM, 2018.