I Introduction
Edge-aware (EA) filters are important building blocks used in many image-based applications like stylization, HDR tone mapping, detail editing and noise reduction [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. However, several high-quality methods such as the weighted least squares (WLS) filter [4] are computationally demanding and hence unsuitable for real-time applications on resource-constrained devices. We focus on a recently proposed method termed permeability filter (PF) [11] that can be used to approximate image-based regularization problems such as HDR tone mapping [12], disparity [11], and optical flow estimation [13]. The PF has been designed to converge to results similar to the high-quality WLS filter, but with significantly lower computational effort [13], which renders the PF an ideal candidate for high-quality filtering in real time.
In this work, we present a hardware accelerator for the PF that can be used as an area-efficient co-processor in systems-on-chip (SoCs) tailored towards video processing. In particular, we contribute the following:

We propose a tiled variant of the PF (TPF) with low on-chip memory requirements and a 6.4× lower off-chip memory bandwidth than the non-tiled PF.

We devise an efficient hardware architecture for the TPF that employs loop pipelining and an optimized memory interleaving scheme. Our design maximizes floating-point (FP) unit utilization and eliminates memory contentions caused by the frequent tile transpositions required by the TPF.

Our design is the first custom hardware implementation of the PF taped out in CMOS technology, and provides a high compute density of . When scaled to technology, this is around 12× denser than in recent embedded GPUs. When applying 4 internal PF iterations, the chip processes 720p monochromatic video at with a measured power of .
II Related Work
The PF approximates the high-quality WLS filter [4] with a low computational effort [11, 12, 13]. Furthermore, the PF features good halo reduction and information spreading capabilities that are important for HDR tone mapping, regularization methods, sparse-to-dense conversions, and disparity and optical flow estimation. Other EA filters such as variants of the bilateral filter (BF) [15] and the guided filter (GF) [6] are computationally less involved than the PF, but do not achieve the same level of quality and are hence used for different applications. We compare our chip with ASIC implementations of BF [16] and GF [17] accelerators in Sec. V.
III Permeability Filter and Tiling
Similar to other EA filtering methods such as the GF and the domain transform, the PF uses a guiding image that controls the EA filtering behavior. The filtered data channels may differ from the input image (e.g., in certain applications, they may hold other features like sparse optical flow vectors or disparity data). In a first step, the PF algorithm extracts pairwise permeabilities from the guiding image [13]. Permeabilities measure the similarity between neighboring pixels in the horizontal and vertical direction, and define the row-stochastic matrices holding the filtering coefficients for the horizontal and vertical filter passes. The PF is defined as a 1D operation over a single row/column in the image as follows: (1)
where the recursion operates on the data channel to be filtered and produces an intermediate filtering result after each iteration; a bias parameter towards the original data reduces halo artifacts, and the recursion runs over the length of the current row/column to be filtered. To generalize the PF to two dimensions, we iteratively apply (1) on each row (called X-Pass) and column (called Y-Pass). Typical applications considered in this work (HDR tone mapping [12], filtering of optical flow data [13]) apply XY passes to the entire frame.
Reformulating Equation (1) enables an efficient 1D scanline evaluation [11, 13]. Each X-Pass and Y-Pass is decomposed into a forward and a backward recursion. The forward recursion with its normalization weight is given by:
(2) 
and the backward recursion with its normalization weight by:
(3) 
The permeability map in Equations (2) and (3) is either the horizontal or the vertical one, depending on the filtering direction. The initial values of the recursions are set to zero. Finally, the resulting filter output can be computed on-the-fly during the backward recursion by
(4) 
Hence, one PF iteration comprises a forward/backward recursion in x-direction, followed by a forward/backward recursion in y-direction, as illustrated in Fig. 1a.
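As a rough illustration of such a scan, the following sketch implements a 1D forward/backward recursion in the successive-weighted-summation style of [11]. It is not the paper's exact Equations (1)-(4): the bias term towards the original data is omitted, and `scan_1d` and its `mu` weights are hypothetical names used here only to show the recursion structure and its normalization.

```python
import numpy as np

def scan_1d(x, mu):
    """Generic normalized forward + backward recursion over one line.
    mu[p] acts as the permeability between pixels p-1 and p."""
    n = len(x)
    f = np.empty(n); w = np.empty(n)   # forward sums and normalization weights
    b = np.empty(n); v = np.empty(n)   # backward counterparts
    f[0], w[0] = x[0], 1.0
    for p in range(1, n):
        f[p] = x[p] + mu[p] * f[p - 1]
        w[p] = 1.0 + mu[p] * w[p - 1]
    b[n - 1], v[n - 1] = x[n - 1], 1.0
    for p in range(n - 2, -1, -1):
        b[p] = x[p] + mu[p + 1] * b[p + 1]
        v[p] = 1.0 + mu[p + 1] * v[p + 1]
    # the two scans count x[p] twice, hence the -x[p] / -1.0 correction
    return (f + b - x) / (w + v - 1.0)
```

With all permeabilities at zero the scan returns the input unchanged, and with all permeabilities at one it returns the mean of the line, which mirrors the edge-stopping versus information-spreading behavior described above.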
III-A Image Tiling
Since the PF alternates between operating on rows and columns, the complete frame must be kept in working memory. When applying the PF globally to one data channel of a 720p frame ( and ) with a word width of (see Sec. III-B for an evaluation), the required working memory amounts to to hold the two permeability maps, and (without intermediate storage for ). Since we consider a co-processor scenario, such a large memory is infeasible to implement on-chip, and hence requires off-chip memory. However, this results in a large off-chip memory bandwidth of for filter iterations and a throughput of , which is not desirable. To this end, we propose a localized version of the PF that operates on local square tiles, as illustrated in Fig. 1b. To reduce tiling artifacts, we employ linear blending of neighboring tiles. By evaluating different overlap configurations, we found that overlapping tiles by 2/3 provides the best visual quality (see Fig. 2). With that configuration, nine different tiles contribute to each result pixel. To minimize the memory requirements, we employ a ‘snake’ scan order (illustrated in Fig. 1c) to be able to reuse intermediate results for blending. Fig. 4 shows the on-chip SRAM requirements for different tile sizes.
A larger tile size is desirable to better approximate the global filter behavior. The following considerations restrict the choice of the tile size: tiles should overlap by 2/3 of their edge length, the length must therefore be divisible by three, and computing the linear weights for the final merging step is simplified when the length is divisible by a power of two. This results in a preferred tile size of . We choose a tile size of pixels, of which pixels overlap the neighboring tiles on each side. Using this tiling approach, the PF can be reformulated to Alg. 1, which can be implemented with only SRAM storage to hold one tile. Further, the external bandwidth, comprising the input data, the permeability maps, the filter output, and the partially blended tiles, reduces by 6.4× to only .
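The resulting tile grid can be sketched as follows, assuming the 48×48 tile mentioned in Sec. IV and a 2/3 (32 px) overlap, i.e., a stride of 16 px between tile origins; `tile_grid` is an illustrative helper, not part of the design.

```python
# Tile grid of the TPF for a 720p frame, assuming 48x48 tiles with a
# 2/3 (32 px) overlap, i.e. a 16 px stride between tile origins.
TILE, OVERLAP = 48, 32
STRIDE = TILE - OVERLAP  # 16 px

def tile_grid(width, height):
    """Number of tiles along each axis for a fully covered frame."""
    nx = (width - TILE) // STRIDE + 1
    ny = (height - TILE) // STRIDE + 1
    return nx, ny

nx, ny = tile_grid(1280, 720)
print(nx, ny, nx * ny)  # 78 43 3354
```

The product of 3354 tiles per frame matches the tile count used in the throughput calculation of Sec. IV-A.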
III-B Numerical Precision
One use case of the PF algorithm is to regularize sparse feature maps (e.g., optical flow vectors) and convert them to a dense representation. This operation requires a high precision and dynamic range that is difficult to handle with fixed-point arithmetic. On the other hand, single-precision FP with full IEEE-754 support (denormals, NaN, etc.) is not needed for this application. Fig. 5 shows an evaluation of different FP formats for dense image data, as well as for the optical-flow estimation procedure [13] that operates on sparse velocity data. Result quality is measured w.r.t. a double-precision baseline with the PSNR measure in the case of dense data, and with the average endpoint error (AEE) measure for sparse flow data. Exponents with 5 bit and below often lead to underflows for both data types and were hence not considered further. We chose to employ a 24 bit FP format (FP24) with 6 exponent and 17 mantissa bits in order to align the format to byte boundaries (a byte-aligned 16 bit FP format would have led to unacceptable quality losses for both dense and sparse data). This leads to a negligible implementation loss below 2E-4 AEE for sparse flow data, and over 90 dB PSNR for dense image data.
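Such a format evaluation can be emulated in software by rounding double-precision values to a reduced mantissa width and a saturated exponent range. The sketch below is an illustrative model only, assuming round-to-nearest and flushing at the exponent limits; it is not the bit-level encoding of Hydra's FP24 datapath.

```python
import math

def quantize_fp(x, mant_bits=17, exp_bits=6):
    """Round x to a toy FP format with `mant_bits` fraction bits and an
    `exp_bits` exponent; denormals/NaN are omitted, just as the FP24
    format drops full IEEE-754 support."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)             # x = m * 2**e with 0.5 <= |m| < 1
    scale = 1 << mant_bits
    m = round(m * scale) / scale     # round mantissa to mant_bits bits
    e_min = -(1 << (exp_bits - 1)) + 2
    e_max = 1 << (exp_bits - 1)
    e = max(e_min, min(e_max, e))    # saturate exponent instead of Inf
    return math.ldexp(m, e)
```

Sweeping `mant_bits` and `exp_bits` over a dataset and measuring PSNR/AEE against the double-precision result reproduces the kind of study shown in Fig. 5.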
IV Architecture
Fig. 3a shows the proposed TPF architecture consisting of an input buffer, a filter cluster, and a merger. The tiles of the current frame are streamed through the input buffer into the filter cluster, which operates on one 48×48 tile at a time. The input buffer aggregates the input such that it can be bursted into the filter cluster together with the last Y-Pass of the currently processed tile. That is, the computation of the last Y-Pass is effectively overlapped with the data transfers that replace the now obsolete data in the two-port tile memories. During the last Y-Pass, the filter output is streamed into the merger unit, which fuses the overlapping tile areas and finally outputs the results. The architecture implements the tiled PF with a fixed number of iterations, and is parameterized to process monochromatic images with 720p resolution in real time. However, the same design can be scaled to higher resolutions and multiple channels by deploying more parallel filter units (FUs). By minimizing the bandwidth to external memory, we maximize energy efficiency and facilitate integration into a larger system (e.g., as a filter accelerator in a mobile SoC).
IV-A Filter Cluster
Due to the filter feedback loop, the FP24 units should operate with single-cycle latency to achieve high utilization (Sec. IV-B). Core frequencies up to 300 MHz allow single-cycle operation, and we hence assume this limiting frequency of 300 MHz in the following throughput calculations. The 1D PF requires a single FP division per pixel. One 720p frame is split into 3354 tiles, and each pixel in a single tile is processed 8 times (4 iterations with 1 horizontal and 1 vertical pass each). In order to achieve a throughput of 25 fps, we need FP divisions per second. Since the divisions are only performed during the backward recursion, we need at least FP dividers to run in parallel. The proposed architecture hence contains 12 parallel FUs. As described in the following sections, we employ pipeline interleaving and an optimized SRAM interleaving pattern to achieve a high utilization rate of for the FP multipliers and adders (49.9% utilization for the FP dividers), which is required to achieve the targeted throughput without further datapath replication.
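The divider count above can be checked with a back-of-the-envelope calculation, assuming dividers can only issue during the backward recursion (roughly half the cycles, consistent with the reported 49.9% divider utilization); the design rounds the resulting minimum up to 12 parallel FUs.

```python
import math

# Consistency check of the Sec. IV-A throughput budget.
TILES, TILE_PX = 3354, 48 * 48
PASSES = 8          # 4 iterations x (X-Pass + Y-Pass)
FPS = 25
F_CLK = 300e6       # limiting single-cycle frequency

div_per_s = TILES * TILE_PX * PASSES * FPS   # one division per pixel per pass
min_div = div_per_s / (F_CLK * 0.5)          # dividers idle ~half the time
print(f"{div_per_s:.3g} div/s -> >= {math.ceil(min_div)} dividers")
# 1.55e+09 div/s -> >= 11 dividers
```

Rounding up to 12 FUs also matches the number of parallel memory blocks needed to feed the pipeline (Sec. IV-C).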
IV-B Pipeline Interleaving
Fig. 6a shows part of the datapath inside the FUs. Due to the long combinational path through the FP adder and multiplier, timing closure cannot be achieved at the target frequency of . The insertion of a pipeline register as in Fig. 6b improves the timing, but the feedback path from the multiplier back to the adder leads to a different functional behavior if the additional latency is not accounted for. To this end, a technique called pipeline interleaving is used [18]. Instead of processing a single row or column of the tile at a time, each FU simultaneously processes two lines in an interleaved manner. As can be seen in Fig. 7a, the next pixel from an even row enters the FU in even cycles, and the next pixel from the neighboring odd row enters the FU in odd cycles. With the additional pipeline stage, the propagation delay of the critical path in the FUs can be reduced to .
IV-C Data Access Pattern
The proposed architecture provides simultaneous access to the currently processed pixels in all operation modes and avoids filter pipeline stalls. To load twelve pixels in parallel, at least one memory block per FU is required. Storing full rows of the tile in different memory blocks allows parallel access to the pixels during the horizontal pass, but prevents simultaneous processing of the first pixels in the vertical pass, since all pixels of the first row reside in the same memory block. Instead, we employ an access pattern that subdivides the tile into squares of pixels and assigns these squares to the memory blocks. The chosen square size is motivated by the pipeline interleaving and reduces the complexity of the address and memory-index calculation units. The resulting checkerboard-like pattern is visualized in Fig. 7.
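A diagonal (checkerboard-like) bank assignment of this kind can be sketched as follows. The concrete rule `(r + c) % 12` and the 2×2 square size are illustrative assumptions chosen to match the 12 FUs and the two-way pipeline interleaving, not necessarily the chip's exact mapping.

```python
# Illustrative bank interleaving: the 48x48 tile is split into 2x2-pixel
# squares (a 24x24 grid), and square (r, c) is assumed to map to bank
# (r + c) % 12 for 12 memory banks.
N_BANKS, SQUARES = 12, 24

def bank(r, c):
    return (r + c) % N_BANKS

# Any 12 consecutive squares along a row or a column hit 12 distinct
# banks, so both horizontal and vertical passes read conflict-free.
row_ok = all(len({bank(r, c) for c in range(12)}) == N_BANKS
             for r in range(SQUARES))
col_ok = all(len({bank(r, c) for r in range(12)}) == N_BANKS
             for c in range(SQUARES))
print(row_ok, col_ok)  # True True
```

The point of the diagonal rule is exactly this property: unlike a row-major layout, no single row or column of the tile concentrates its pixels in one bank.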
IV-D Cyclic Permutation of Pixel Locations
The proposed architecture tiles frames such that every two adjacent tiles overlap by 2/3, as explained in Sec. III-A. The tile size of pixels implies, with the exception of the first tile of a frame, that pixels (or in the case of switching rows) need to be replaced, and reusing the remaining pixels reduces the input bandwidth. To maximize throughput, new pixels are stored where the now obsolete pixels were located in memory, without reordering them. Since the individual tiles overlap by exactly , the tile is subdivided into squares of pixels (visualized in Fig. 7). The rows and columns of this grid undergo a cyclic permutation that results in 9 different fragmentation states. The filter cluster and the merger keep track of the fragmentation state and transform the addresses accordingly. This approach increases the complexity of the address calculation, but minimizes pipeline stalls in the filter cluster and allows 2/3 of the tiled pixels to remain in the SRAMs when stepping to the next tile.
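The address translation behind these fragmentation states can be sketched as a cyclic shift of a 3×3 grid, assuming the 48×48 tile is divided into nine 16×16 squares (one third of the tile edge); `phys_square` is a hypothetical helper illustrating the idea.

```python
# Fragmentation-state address translation: the 48x48 tile is assumed to
# form a 3x3 grid of 16x16-pixel squares whose origin advances
# cyclically when stepping to the next tile.
GRID = 3  # 48 / 16

def phys_square(logical_rc, state_rc):
    """Map a logical square coordinate to its physical location for a
    given fragmentation state (row shift, column shift)."""
    (r, c), (sr, sc) = logical_rc, state_rc
    return ((r + sr) % GRID, (c + sc) % GRID)

# 3 row shifts x 3 column shifts -> 9 fragmentation states
states = [(sr, sc) for sr in range(GRID) for sc in range(GRID)]
print(len(states))  # 9
```

Tracking only the pair of shifts lets the filter cluster and the merger locate every pixel without ever physically moving the retained 2/3 of the tile.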
V Implementation & Results
A chip named Hydra (depicted in Fig. 8) implements the proposed architecture and was fabricated in CMOS technology. The design has been synthesized with Synopsys DC 2016.03, and P&R has been performed with Cadence Innovus 16.1. The total design complexity is 1.3 MGE, of which 43% is occupied by the 52 SRAMs. Hydra supports run-time configuration of the filter parameters and arbitrary video resolutions (with real-time performance up to 720p at 1.2 V). It also features a high FP24 compute density of . When scaled to , this would amount to , which is around 12× higher than in modern mobile GPUs manufactured in . For instance, the NVidia Tegra X2 provides a theoretical peak throughput of [19], and with an assumed silicon area of for the GPU subsystem, this results in only . In terms of external memory bandwidth, Hydra requires . This amounts to only 7.4% of the total bandwidth provided by an LPDDR4-1600 memory channel, which makes our design an ideal candidate for inclusion within a specialized domain accelerator in a SoC.
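The 7.4% figure is consistent with the per-frame bandwidth reported in Tbl. I, under the assumption of a 32-bit LPDDR4-1600 channel (3200 MT/s, 12.8 GB/s peak) and MB = 10^6 bytes; both assumptions are ours, not stated in the text.

```python
# Rough consistency check of the stated 7.4% LPDDR4 utilization,
# assuming 38 MB/frame (Tbl. I) at 24.8 fps and a 32-bit
# LPDDR4-1600 channel (3200 MT/s -> 12.8 GB/s peak).
BYTES_PER_FRAME = 38e6
FPS = 24.8
PEAK_BW = 3200e6 * 4         # 3200 MT/s x 4 bytes

bw = BYTES_PER_FRAME * FPS   # ~0.94 GB/s of external traffic
print(f"{bw / PEAK_BW:.1%}")  # 7.4%
```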
Tbl. I compares the key figures of Hydra with two related accelerators [16, 17]. Note that these designs implement simpler EA filters (BF/GF variants) with fixed-point arithmetic, since they have been developed to process dense, 8 bit standard dynamic range (SDR) data occurring in applications like flash denoising [17]. Our design is the only one that reports measured data, and it has been designed to support much more challenging HDR images and sparse optical flow data, which require accurate arithmetic with a high dynamic range (Sec. III-B). The PF provides better filtering quality than the GF and BF since it does not suffer from halo artifacts [12].
Properties \ Design            | [16]       | [17]        | Hydra
------------------------------ | ---------- | ----------- | --------
Algo.: Filter Type             | Joint BF   | GF          | TPF
Algo.: Window Size [px]        | 31×31      | 31×31       | 48×48
Algo.: Arithmetic              | FIXP       | FIXP        | FP24
Appl.: SDR Data                | ✓          | ✓           | ✓
Appl.: HDR Data                | –          | –           | ✓
Appl.: Sparse Coordinate Data  | –          | –           | ✓
Resources: Results from        | Gate-Level | Post-Layout | Measured
Resources: Technology [nm]     | 90         | 90          | 65
Resources: Logic [kGE]         | 276        | 93          | 762
Resources: SRAM [kB]           | 23         | 3.2         | 47.3
Resources: Total Compl. [kGE]  |            |             | 1'328
Perform.: Resolution           | 1080p      | 1080p       | 720p
Perform.: Frequency [MHz]      | 100        | 100         | 259
Perform.: Throughput [fps]     | 30         | 30          | 24.8
Perform.: Core Power [mW/MHz]  |            | 0.23        | 1.72
Perform.: Bandwidth [MB/frame] | 16.6       | 32.8        | 38
VI Conclusions
We present Hydra, a compact and efficient accelerator for high-quality, permeability-based EA filtering in real time. The accelerator employs a novel, tiled variant of the PF that significantly reduces the required on-chip memory and off-chip bandwidth compared to a non-tiled PF implementation. Hydra is parameterized to deliver a throughput of 25 fps for monochromatic 720p video and provides a significantly higher compute density than recent mobile GPUs. Our design is scalable to higher resolutions by increasing the number of parallel FUs and by employing higher-order pipeline interleaving to increase the operating frequency. Integrated into a SoC, the presented accelerator could act as a highly accurate and area-efficient co-processor for video processing applications.
References
[1] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in ICCV, Jan 1998, pp. 839–846.
[2] P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE TPAMI, vol. 12, no. 7, Jul 1990.
[3] P. Milanfar, “A tour of modern image filtering: New insights and methods, both practical and theoretical,” IEEE SPM, Jan 2013.
[4] Z. Farbman, R. Fattal, D. Lischinski et al., “Edge-preserving decompositions for multi-scale tone and detail manipulation,” ACM TOG, vol. 27, no. 3, pp. 67:1–67:10, Aug. 2008.
[5] R. Fattal, “Edge-avoiding wavelets and their applications,” ACM TOG, vol. 28, no. 3, pp. 1–10, 2009.
[6] K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE TPAMI, vol. 35, no. 6, pp. 1397–1409, June 2013.
[7] E. S. L. Gastal and M. M. Oliveira, “Domain transform for edge-aware image and video processing,” ACM TOG, vol. 30, no. 4, 2011.
[8] E. S. Gastal and M. M. Oliveira, “High-Order Recursive Filtering of Non-Uniformly Sampled Signals for Image and Video Processing,” in Computer Graphics Forum, vol. 34, no. 2, 2015, pp. 81–93.
[9] M. Aubry, S. Paris, S. W. Hasinoff et al., “Fast Local Laplacian Filters: Theory and Applications,” ACM TOG, vol. 33, no. 5, 2014.
[10] S. Paris, S. W. Hasinoff, and J. Kautz, “Local Laplacian Filters: Edge-Aware Image Processing With a Laplacian Pyramid,” ACM TOG, vol. 30, no. 4, p. 68, 2011.
[11] C. Cigla and A. A. Alatan, “Information Permeability for Stereo Matching,” Signal Processing: Image Communication, 2013.
[12] T. O. Aydin, N. Stefanoski, S. Croci et al., “Temporally Coherent Local Tone Mapping of HDR Video,” ACM TOG, 2014.
[13] M. Schaffner, F. Scheidegger, L. Cavigelli et al., “Towards Edge-Aware Spatio-Temporal Filtering in Real-Time,” IEEE TIP, vol. PP, 2017.
[14] M. Lang, O. Wang, T. Aydın et al., “Practical Temporal Consistency for Image-Based Graphics Applications,” ACM TOG, vol. 31, no. 4, 2012.
[15] F. Porikli, “Constant time O(1) bilateral filtering,” in IEEE CVPR, June 2008, pp. 1–8.
[16] Y. C. Tseng, P. H. Hsu, and T. S. Chang, “A 124 Mpixels/s VLSI Design for Histogram-Based Joint Bilateral Filtering,” IEEE TIP, vol. 20, no. 11, pp. 3231–3241, Nov 2011.
[17] C. C. Kao, J. H. Lai, and S. Y. Chien, “VLSI Architecture Design of Guided Filter for 30 Frames/s Full-HD Video,” IEEE TCSVT, vol. 24, no. 3, pp. 513–524, March 2014.
[18] H. Kaeslin, Top-Down Digital VLSI Design: From VLSI Architectures to Gate-Level Circuits and FPGAs. Morgan Kaufmann, 2014.
[19] A. Skende, “Introducing Parker: Next-generation Tegra system-on-chip,” in 2016 IEEE Hot Chips 28 Symposium (HCS), Aug 2016, pp. 1–17.