Fully Quantized Image Super-Resolution Networks

11/29/2020 · by Hu Wang, et al.

With the rising popularity of intelligent mobile devices, it is of great practical significance to develop accurate, real-time and energy-efficient image Super-Resolution (SR) inference methods. A prevailing approach to improving inference efficiency is model quantization, which allows replacing expensive floating-point operations with efficient fixed-point or bitwise arithmetic. To date, it remains challenging for quantized SR frameworks to deliver a feasible accuracy-efficiency trade-off. Here, we propose a Fully Quantized image Super-Resolution framework (FQSR) to jointly optimize efficiency and accuracy. In particular, we aim to obtain end-to-end quantized models for all layers, especially including skip connections, which has rarely been addressed in the literature. We further identify training obstacles faced by low-bit SR networks and propose two novel methods accordingly. The two difficulties are caused by 1) the activation and weight distributions being vastly different across layers; and 2) the inaccurate approximation introduced by quantization. We apply our quantization scheme to multiple mainstream super-resolution architectures, including SRResNet, SRGAN and EDSR. Experimental results show that our FQSR with low-bit quantization achieves performance on par with the full-precision counterparts on five benchmark datasets and surpasses state-of-the-art quantized SR methods with significantly reduced computational cost and memory consumption.


1 Introduction

The rapid development of Deep Convolutional Neural Networks (CNNs) has led to significant breakthroughs in image super-resolution, which aims to generate high-resolution images from low-resolution inputs. For real-world applications, SR inference is usually executed on edge devices, such as HD televisions, mobile phones and drones, which require real-time processing, low power consumption and full embeddability. However, the high computational cost of CNNs prohibits the deployment of SR models on resource-constrained edge devices.

To improve computation and memory efficiency, various solutions have been proposed in the literature, including network pruning [36], low-rank decomposition [31], network quantization [34, 19] and efficient architecture design [11]. In this work, we aim to train a low-precision SR network, covering all layers and skip connections. Although current quantization methods have achieved promising performance on image classification, training quantized models for more complex tasks such as super-resolution remains challenging, both in terms of unverified efficiency improvements on hardware and severe accuracy degradation. For example, to our knowledge, existing quantized SR models typically keep the skip connections in full precision, making them impractical to deploy on embedded devices. In this paper, we introduce Fully Quantized Image Super-Resolution Networks (FQSR) to yield a promising efficiency-versus-accuracy trade-off.

Typically, a common SR network consists of a feature extraction module, a nonlinear mapping module and an image reconstruction module, as shown in [28]. Recently, various quantized SR methods [21, 28] apply binary quantization to the non-linear sub-module of the SR network, while paying less attention to quantizing the feature extraction and image reconstruction modules. However, we observe that the feature extraction and reconstruction modules also account for a significant share of the computational cost during inference (e.g., these two sub-modules occupy 45.1% and 38.7% of the total computational FLOPs in SRResNet and EDSR, respectively). Therefore, it is essential to quantize all three sub-modules to obtain more compact models. Additionally, in the SR task, the feature dimensions are usually very high. These features occupy a huge amount of memory, especially when skip connections exist in the network, which require multiple copies of the tensors. Thus, by quantizing the skip connections, the memory consumption can be reduced dramatically compared to the full-precision counterparts. In this paper, we propose to quantize all layers of SR networks to reduce the computation and storage burden of SR on resource-limited platforms.

In addition to the fully-quantized design, we further introduce specific modifications to the quantization algorithm for super-resolution. In particular, we empirically observe that the distributions of the activations and weights of different layers differ drastically in the SR task. For a distribution with a small value range, the corresponding quantization interval should be sufficiently compact in order to maintain adequate quantization resolution. On the other hand, if the quantization interval is too compact for a distribution with a large value range, it may cause severe information loss. Therefore, we propose to learn quantizers that find the quantization intervals minimizing the task loss. To achieve this, we parameterize the quantization intervals to make the quantizers trainable. Specifically, we first estimate the quantization intervals through a moving average as the initialization and then optimize them using back-propagation with stochastic gradient descent. Moreover, we also observe that the categorical distribution of quantized values may not fit the original distribution in some layers during training. Thus, we propose a quantization-aware calibration loss that encourages the minimization of this distribution difference.

Our contributions are summarized as follows.

  • We introduce fully quantized neural networks for image super-resolution, quantizing the whole model including all layers and skip connections. To our knowledge, we are the first to perform fully end-to-end quantization for the SR task.

  • We identify several difficulties faced by current low-bitwidth SR networks during training. Specifically, we first propose quantizers with learnable intervals to adapt to the vastly distinct distributions of weights and activations in different network layers. To further reduce the quantization error, we also introduce a calibration loss that encourages the categorical distribution after discretization to mimic the original continuous distribution.

  • Our extensive experiments with various bit configurations demonstrate that FQSR achieves performance comparable to the full-precision counterpart, while saving a considerable amount of computation and memory. Moreover, experiments on mainstream architectures and datasets demonstrate the superior performance of the proposed FQSR over several competitive state-of-the-art methods.

2 Related Work

Image super-resolution.

Super-resolution research has attracted increasing attention in recent years. Since deep learning based super-resolution was first proposed by Dong et al. [5, 6], a variety of convolutional neural models have been studied. ESPCN [24] optimizes the SR model by learning sub-pixel convolutional filters. Ledig et al. [16] introduce a GAN-based SR model named SRGAN, whose generator is referred to as SRResNet. Lim et al. [17] propose a model named EDSR. Residual channel attention is introduced by Zhang et al. [32] to overcome the gradient vanishing problem in very deep SR networks.

Besides, much effort has been devoted to improving the efficiency of SR models by designing light-weight structures. For example, the works in [7, 24] speed up SR by operating on low-resolution features and deferring upsampling to the end of the network. Hui et al. [14] introduce a lightweight information multi-distillation block into their super-resolution model.

Model quantization. Model quantization aims to represent the weights, activations and even gradients in low precision, yielding highly compact DNNs. Notably, convolutions and matrix multiplications can then be replaced with fixed-point or bitwise operations, which can be implemented much more efficiently than their floating-point counterparts. In general, quantization methods involve binary neural networks (BNNs) and fixed-point quantization. In particular, BNNs [23, 13, 35, 19] constrain both weights and activations to only two possible values (e.g., -1 or +1), enabling multiply-accumulate operations to be replaced by the bitwise operations xnor and bitcount. However, BNNs usually suffer from severe accuracy degradation. To trade off accuracy against efficiency, researchers also study fixed-point quantization with higher-bit representations. To date, most quantization studies employ uniform quantizers and focus on fitting the quantizer to the data, based on statistics of the data distribution [33, 3], minimizing the quantization error during training [4, 30], or minimizing the task loss with stochastic gradient descent [15, 8, 34].

In terms of quantization for super-resolution, Ma et al. [21] apply BNNs to compress super-resolution networks. Note that they only propose to binarize the weights of the residual blocks within the model. Most recently, Xin et al. [28] propose a bit-accumulation mechanism for single image super-resolution to boost quantization performance. Their models are only partially quantized, with the feature extraction module, image reconstruction module and skip connections kept in full precision. In contrast, our FQSR allows inference to be carried out using integer-only arithmetic, which delivers an improved efficiency-accuracy trade-off.

3 Method

3.1 Preliminary

In this work, we propose to quantize the weights of all convolutional layers and the activations of all network layers into low-precision values. According to [23, 33], for two binary vectors $\mathbf{x}, \mathbf{y} \in \{0, 1\}^{n}$ within binary neural networks (BNNs), their inner product can be formulated as:

$$\mathbf{x} \cdot \mathbf{y} = \mathrm{bitcount}\big(\mathrm{and}(\mathbf{x}, \mathbf{y})\big) \qquad (1)$$

where $\mathrm{bitcount}(\cdot)$ counts the number of set bits in a bit vector and $\mathrm{and}(\cdot, \cdot)$ represents the bitwise "and" operation.

More generally, for quantization with higher and arbitrary bit-widths, the quantized values can be viewed as a linear combination of binary bases. Let $\mathbf{x}$ be an $M$-bit quantized vector, which can be represented as $\mathbf{x} = \sum_{m=0}^{M-1} c_m(\mathbf{x})\, 2^{m}$, where $c_m(\mathbf{x}) \in \{0, 1\}^{n}$. Similarly, for another $K$-bit vector $\mathbf{y}$, we have $\mathbf{y} = \sum_{k=0}^{K-1} c_k(\mathbf{y})\, 2^{k}$, where $c_k(\mathbf{y}) \in \{0, 1\}^{n}$. Formally, the inner product between $\mathbf{x}$ and $\mathbf{y}$ is

$$\mathbf{x} \cdot \mathbf{y} = \sum_{m=0}^{M-1} \sum_{k=0}^{K-1} 2^{m+k}\, \mathrm{bitcount}\big(\mathrm{and}(c_m(\mathbf{x}), c_k(\mathbf{y}))\big). \qquad (2)$$
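As a small numerical check of the bit-plane formulation in Eq. (2), the following plain Python/NumPy snippet decomposes two low-bit integer vectors into bit planes and verifies that the bitwise accumulation matches the ordinary integer dot product. The helper names are ours and purely illustrative.

```python
import numpy as np

def bit_planes(v, bits):
    """Decompose a vector of non-negative integers into its binary bit planes."""
    return [(v >> m) & 1 for m in range(bits)]

def quantized_dot(x, y, m_bits, k_bits):
    """Inner product via bit-plane pairs, in the spirit of Eq. (2):
    sum_{m,k} 2^(m+k) * bitcount(and(c_m(x), c_k(y)))."""
    total = 0
    for m, cx in enumerate(bit_planes(x, m_bits)):
        for k, cy in enumerate(bit_planes(y, k_bits)):
            total += (2 ** (m + k)) * int(np.count_nonzero(cx & cy))
    return total

# A 2-bit vector against a 3-bit vector: the bitwise result matches np.dot.
x = np.array([3, 1, 2, 0])
y = np.array([7, 5, 1, 6])
assert quantized_dot(x, y, 2, 3) == int(np.dot(x, y))  # both equal 28
```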

For a general full-precision value $x$ (an activation or a weight) to be quantized, an interval parameter $\alpha$ is introduced to control the quantization range. The quantization function can be formulated as:

$$Q(x) = \mathrm{round}\Big(\mathrm{clip}\big(\tfrac{x}{\alpha},\, l,\, u\big)\cdot n\Big)\cdot \frac{\alpha}{n} \qquad (3)$$

where $\alpha$ represents the quantization interval, $n$ gives the quantization levels for $b$-bit quantization, $\mathrm{round}(\cdot)$ rounds to the nearest integer, and $\mathrm{clip}(x, l, u) = \min(\max(x, l), u)$. For unsigned data, $n = 2^{b} - 1$ and $(l, u) = (0, 1)$; for signed data, $n = 2^{b-1} - 1$ and $(l, u) = (-1, 1)$. At the end of the equation, the scale factor $\alpha / n$ is multiplied onto the intermediate result after rounding to re-scale the value back to its original magnitude. In practice, we privatize quantizers for the activations and the weights of each layer.

During training, latent full-precision weights are kept and updated through back-propagation, and discarded at inference time. The gradient is derived using the straight-through estimator (STE) [1], which approximates the gradient of the non-differentiable rounding function as a pass-through operation, while all other operations in Eq. (3) are differentiated normally.
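To make Eq. (3) and the STE training rule concrete, below is a minimal PyTorch sketch of such a quantizer. The function name, the clipping convention and the signed/unsigned level counts follow the reconstruction above and should be read as assumptions rather than the authors' released implementation.

```python
import torch

def quantize_ste(x, alpha, bits=4, signed=True):
    """Uniform quantizer with interval `alpha` in the spirit of Eq. (3).

    The non-differentiable rounding is handled with the straight-through
    estimator (STE): the forward pass uses round(), while the backward pass
    treats it as the identity, so gradients reach both x and alpha.
    """
    n = 2 ** (bits - 1) - 1 if signed else 2 ** bits - 1    # quantization levels
    low, high = (-1.0, 1.0) if signed else (0.0, 1.0)       # clip bounds on x / alpha

    x_scaled = torch.clamp(x / alpha, low, high) * n
    x_rounded = torch.round(x_scaled)
    x_ste = (x_rounded - x_scaled).detach() + x_scaled      # STE: pass-through gradient
    return x_ste * alpha / n                                # re-scale to original magnitude
```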

3.2 Distribution-Aware Interval Adaptation

Figure 1: (a) and (b) are the histograms of feature-map values of the 15th and 16th convolutional layers of SRResNet, respectively. For the super-resolution task, we empirically find that the value ranges of the feature maps and weights in different layers differ drastically, as shown in (a) and (b) (-15 to 15 for (a) and -150 to 150 for (b)). Thus, we propose a trainable quantizer that adaptively decides the quantization interval according to the current distribution to mitigate this phenomenon.

In model quantization, the values falling within the quantization interval are mapped to discrete levels. The quantization proceeds smoothly if a suitable interval is chosen; however, once the quantization interval does not fit the distribution of the values to be quantized, it incurs a large quantization error. For the super-resolution task, we empirically find that the data distributions of the features and weights of different layers are drastically different, as shown in Figure 1. Thus, different quantization intervals should be allocated to different quantizers. Toward this end, we propose to estimate the intervals automatically by parameterizing $\alpha$. To alleviate the difficulty of optimizing the interval, we first find a good initial point for the $\alpha$ of each quantizer. Specifically, we use a moving average of the maximum values of the tensor to be quantized (batch-wise activations or the convolutional filters of a layer) as the initial point:

$$\alpha \leftarrow m \cdot \alpha + (1 - m) \cdot \max(|\mathbf{X}|) \qquad (4)$$

where $\mathbf{X}$ is the tensor to be quantized and $m$ is a momentum coefficient. This process is performed over the first iterations of training as a warm-up. The parameterized interval $\alpha$ is then optimized in conjunction with the other network parameters using back-propagation with stochastic gradient descent. Similar to the training process of [8], the gradient through the quantizer with respect to the quantization interval is approximated by the STE as a pass-through function.
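A sketch of how this could be handled in practice, reusing quantize_ste from above: an exponential moving average of the tensor's maximum absolute value during a short warm-up provides the initial value of alpha, which is then trained by back-propagation. The module name, the momentum of 0.9 and the default warm-up length are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LearnableIntervalQuantizer(nn.Module):
    """Quantizer with a trainable interval alpha (Sec. 3.2, sketch).

    The interval is initialized by a moving average of the tensor's maximum
    absolute value during a warm-up (Eq. 4, assumed exponential form), then
    optimized by back-propagation together with the network weights.
    """

    def __init__(self, bits=4, signed=True, warmup_iters=20, momentum=0.9):
        super().__init__()
        self.bits, self.signed = bits, signed
        self.warmup_iters, self.momentum = warmup_iters, momentum
        self.alpha = nn.Parameter(torch.tensor(1.0))          # quantization interval
        self.register_buffer("iters", torch.tensor(0))

    def forward(self, x):
        if self.training and int(self.iters) < self.warmup_iters:
            # Warm-up: moving average of the max absolute value of the tensor.
            max_val = x.detach().abs().max()
            self.alpha.data.mul_(self.momentum).add_((1.0 - self.momentum) * max_val)
            self.iters += 1
        # After warm-up, alpha receives gradients through the STE in quantize_ste.
        return quantize_ste(x, self.alpha, self.bits, self.signed)
```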

3.3 Fully Quantized Inference

Figure 2: Overview of the proposed fully quantized super-resolution networks. Existing quantization models for super-resolution quantize only the non-linear mapping part, whereas we quantize all three modules, which saves a large amount of computation.

According to [28], the super-resolution process can be divided into three sub-modules: an input feature extraction module $\mathcal{F}$, a nonlinear mapping module $\mathcal{N}$ and an SR image reconstruction module $\mathcal{R}$. Formally, for an input low-resolution image $I_{LR}$, the process of generating a super-resolved image $I_{SR}$ can be presented as:

$$I_{SR} = \mathcal{R}\big(\mathcal{N}(\mathcal{F}(I_{LR}))\big). \qquad (5)$$

Usually, a skip connection links the feature extraction module $\mathcal{F}$ and the reconstruction module $\mathcal{R}$. Although $\mathcal{F}$ and $\mathcal{R}$ consist of simple structures, they play an important role in achieving good super-resolution performance. Moreover, a large computational burden lies not only in the nonlinear mapping $\mathcal{N}$, but also in the reconstruction module $\mathcal{R}$, since the convolutional layers before upsampling have a large number of channels to produce the super-resolution output. However, current super-resolution quantization models only quantize $\mathcal{N}$ [21, 28]. In addition, the skip connections within the network inevitably incur huge memory consumption, which is known to dominate the energy consumption [10]. To obtain an energy-efficient super-resolution framework, we propose to fully quantize all layers of the three modules, including all skip connections. The overview of the proposed fully quantized super-resolution network is shown in Figure 2, and a comparison of the quantized operations of existing SR quantization methods and our FQSR is presented in Table 1.

Methods All Layers wt fm sc
SRResNet_Bin [21] ✗ ✓ ✗ ✗
SRGAN_Bin [21] ✗ ✓ ✗ ✗
VDSR_BAM [28] ✗ ✓ ✓ ✗
SRResNet_BAM [28] ✗ ✓ ✓ ✗
FQSR (Ours) ✓ ✓ ✓ ✓
Table 1: Comparison of the quantized operations of different methods. Within the table, "✓" indicates that quantization is enabled for the column and "✗" that it is not; "All Layers" includes convolutional layers, BN layers, ReLU layers and element-wise addition layers; "wt" stands for weight quantization of convolutional layers; "fm" denotes feature-map quantization; "sc" denotes the quantization of skip connections.
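For orientation, a toy module-level sketch of the fully quantized pipeline of Eq. (5): a feature-extraction convolution $\mathcal{F}$, a small non-linear mapping $\mathcal{N}$, a reconstruction module $\mathcal{R}$ with sub-pixel upsampling, and quantizers on the module outputs and on the skip connection between $\mathcal{F}$ and $\mathcal{R}$. The layer choices are placeholders rather than the actual SRResNet/EDSR blocks, and weight quantization is omitted for brevity; the quantizer is the LearnableIntervalQuantizer sketch above.

```python
import torch.nn as nn

class FullyQuantizedSR(nn.Module):
    """Toy version of Eq. (5) with a quantized long skip connection.

    F (feature extraction), N (non-linear mapping) and R (reconstruction) are
    placeholder layers; real SRResNet/EDSR bodies are much deeper.
    """

    def __init__(self, channels=64, scale=2, bits=4):
        super().__init__()
        self.feat = nn.Conv2d(3, channels, 3, padding=1)            # F
        self.body = nn.Sequential(                                  # N
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.recon = nn.Sequential(                                 # R
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))
        self.q_feat = LearnableIntervalQuantizer(bits)
        self.q_body = LearnableIntervalQuantizer(bits)
        self.q_skip = LearnableIntervalQuantizer(bits)              # skip connection stays low-bit

    def forward(self, lr):
        f = self.q_feat(self.feat(lr))
        n = self.q_body(self.body(f))
        return self.recon(self.q_skip(n + f))   # the element-wise sum is quantized as well
```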

Quantization for BN.

During the inference phase, if a batch normalization (BN) layer is adopted in the quantized model, it can be folded into the preceding convolutional layer to remove the extra floating-point operations. The folding of the batch normalization operation is formally presented as:

$$y = \frac{(\mathbf{W} * \mathbf{x} + b) - \mu}{\sigma} = \hat{\mathbf{W}} * \mathbf{x} + \hat{b}, \quad \text{with } \hat{\mathbf{W}} = \frac{\mathbf{W}}{\sigma}, \ \hat{b} = \frac{b - \mu}{\sigma}, \qquad (6)$$

where $\mathbf{W}$, $\mathbf{x}$ and $b$ are the weights, inputs and bias term of the preceding convolutional layer, respectively; $\mu$ and $\sigma$ are the mean and standard deviation of the corresponding dimension; $y$ is the output of the batch normalization layer; and $\hat{\mathbf{W}}$, $\hat{b}$ are the weights and bias after folding, respectively.
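A sketch of the BN folding of Eq. (6) in PyTorch is shown below; it includes BatchNorm2d's affine scale and shift, which is a slight generalization of the formula above, and ignores grouped/dilated convolutions for brevity.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d layer into the preceding Conv2d (Eq. 6, sketch)."""
    std = torch.sqrt(bn.running_var + bn.eps)
    scale = bn.weight / std                                    # per-channel gamma / sigma

    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=True)
    fused.weight.copy_(conv.weight * scale.view(-1, 1, 1, 1))  # W_hat = W * gamma / sigma
    b = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((b - bn.running_mean) * scale + bn.bias)  # b_hat
    return fused
```

After folding, the fused convolution can be quantized like any other layer, so no floating-point BN arithmetic remains at inference time.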

Quantization for skip connections.

Residual learning is critical for learning strong representations in computer vision tasks. In residual networks, skip connections are the core components that build direct links between shallow and deep layers. Nevertheless, in quantized models, skip connections that carry floating-point operands hinder practical deployment on embedded systems or mobile platforms, because the quantization status across layers becomes inconsistent. In addition, they inevitably increase the computation as well. Moreover, for the super-resolution task, the input images and the super-resolved outputs are usually of very high resolution (such as 2K or 4K). Therefore, the intermediate features conveyed through skip connections consume a huge amount of memory.

In order to address the aforementioned issues, we quantize the skip connections by quantizing the output features of all convolutional layers and element-wise addition layers. Consequently, the memory consumption is reduced dramatically compared with the full-precision counterparts. Additionally, once the skip connections are quantized, the model becomes hardware-friendly since it is fully quantized. Formally, the element-wise addition in a skip connection of our quantized network can be formulated as:

$$\mathbf{z} = Q\big(\mathbf{x}_q + \mathbf{y}_q\big), \qquad (7)$$

where $\mathbf{x}_q$ and $\mathbf{y}_q$ are the two quantized feature maps entering the addition and $Q(\cdot)$ is the quantization function of Eq. (3).

Overall, the quantization process in a typical residual block of the proposed FQSR network is shown in Figure 3.

Figure 3: The quantization process in a typical residual block. $Q(\cdot)$ denotes the quantization function. The input is the output of the preceding layer/residual block. The outputs of the convolutional layers and of the element-wise addition layer are quantized to ensure the quantization of the skip connections.
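A sketch of such a residual block in the spirit of Figure 3 and Eq. (7), reusing the LearnableIntervalQuantizer above: the convolution outputs and the element-wise sum are re-quantized so the short skip connection carries only low-bit tensors. Layer sizes, the ReLU placement and the omission of (folded) BN and weight quantization are illustrative choices.

```python
import torch.nn as nn

class QuantizedResidualBlock(nn.Module):
    """Residual block whose short skip connection stays in low precision."""

    def __init__(self, channels=64, bits=4):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.q1 = LearnableIntervalQuantizer(bits)
        self.q2 = LearnableIntervalQuantizer(bits)
        self.q_add = LearnableIntervalQuantizer(bits)

    def forward(self, x_q):                      # x_q: already-quantized input features
        out = self.q1(self.relu(self.conv1(x_q)))
        out = self.q2(self.conv2(out))
        return self.q_add(out + x_q)             # Eq. (7): quantize the element-wise addition
```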

3.4 Quantization-aware Calibration Loss

As shown in Figure 4, for the super-resolution task, we empirically observe that in some layers the data distributions before and after quantization change drastically. This significantly affects model performance due to the large quantization error. To address this issue, an objective function termed the Quantization Calibration Loss (QCL) is adopted to calibrate the values after quantization so that their distribution approximates the one before quantization. The QCL can be applied to the input activations, weights and outputs of each layer.

For a real value $x$ to be quantized, we aim to find, through back-propagation, parameters that minimize the difference between $x$ before and after quantization. Formally, QCL is formulated as:

$$\mathcal{L}_{\mathrm{QCL}} = \big\| x - Q(x) \big\|, \qquad (8)$$

where $\|\cdot\|$ denotes the norm used to measure the quantization error (e.g., the $\ell_2$ norm).
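A minimal sketch of this calibration term, assuming a mean-squared form of Eq. (8); it is applied to any tensor together with its quantized version, and gradients reach both the latent values and the quantization interval via the STE.

```python
import torch

def calibration_loss(x, x_q):
    """Quantization Calibration Loss (Eq. 8, sketch): penalize the discrepancy
    between a tensor before (x) and after (x_q) quantization. The mean-squared
    reduction is an assumption about the exact norm used."""
    return torch.mean((x - x_q) ** 2)
```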

Figure 4: The weight distribution of the last convolutional layer of SRResNet and its categorical distribution after quantization, without and with the QCL objective. Intuitively, the QCL minimizes the discrepancy between the distributions before and after quantization, constraining the quantization in a smoother manner. As shown in (a), for the model without the QCL objective, the data distribution before quantization and the categorical distribution after quantization are vastly different; with the QCL objective, as illustrated in (b), the categorical distribution is calibrated to fit the data distribution before quantization.

Therefore, the final objective function for the proposed quantized super-resolution networks is:

$$\mathcal{L} = \mathcal{L}_{SR} + \gamma\, \mathcal{L}_{\mathrm{QCL}}, \qquad (9)$$

where $\mathcal{L}_{SR}$ represents the super-resolution loss and $\gamma$ is a balancing hyperparameter.
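A sketch of the overall objective of Eq. (9), combining an SR reconstruction loss with the calibration terms collected from the quantized layers. The use of an L1 loss for the SR term and the container format of quantizer_pairs are assumptions; the default value of gamma mirrors the setting reported in Sec. 4.1.

```python
import torch.nn.functional as F

def total_loss(sr, hr, quantizer_pairs, gamma=0.3):
    """Overall objective of Eq. (9): SR loss plus the weighted calibration term.

    `quantizer_pairs` is a list of (pre_quantization, post_quantization) tensor
    pairs collected from the quantized layers during the forward pass.
    """
    sr_loss = F.l1_loss(sr, hr)                                   # SR loss (assumed L1)
    qcl = sum(calibration_loss(x, x_q) for x, x_q in quantizer_pairs)
    return sr_loss + gamma * qcl
```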

4 Experiments

4.1 Experimental Setup

Following existing works [17, 16, 21, 28], we train our fully quantized super-resolution networks on the DIV2K [25] dataset and evaluate the models on five popular benchmark datasets. An extensive ablation study is further conducted to validate the effectiveness of each component of the proposed method.

Datasets and evaluation metrics.

We conduct model training on the DIV2K dataset, which is made up of 800 high-quality high/low-resolution image pairs for training, 100 image pairs for validation and 100 image pairs for testing. However, the testing HR images of DIV2K are not publicly accessible, so we train models on the 800 training images and validate them on 10 validation images. The best validation models are tested on Set5 [2] (5 images), Set14 [29] (14 images), BSD100 [22] (100 images), Urban100 [12] (100 images) and DIV2K (100 validation images). Two scaling settings, ×2 and ×4, are considered for model evaluation.

For super-resolution model evaluation, we adopt the most commonly used Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) [27] as our evaluation metrics. All evaluation is performed after cropping border pixels according to the up-scaling factor.
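For reference, a simple PSNR routine of the kind used for this evaluation; the border crop size and the dynamic range are assumptions and should be matched to the official evaluation protocol of each up-scaling factor.

```python
import torch
import torch.nn.functional as F

def psnr(sr, hr, crop=2, max_val=255.0):
    """PSNR between a super-resolved image and its ground truth.

    A border of `crop` pixels is removed before computing the error; the
    defaults here are illustrative, not the paper's exact protocol.
    """
    sr = sr[..., crop:-crop, crop:-crop]
    hr = hr[..., crop:-crop, crop:-crop]
    mse = F.mse_loss(sr, hr)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```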

Implementation details. During training, random vertical/horizontal flips and 90-degree rotations are performed for data augmentation. The batch size is set to 16 and the Adam optimizer is adopted for model optimization. The initial learning rate is set separately for SRResNet/SRGAN and for the EDSR model. The models are trained for 300 epochs with a cosine annealing [20] learning-rate schedule. The warm-up hyperparameter is set to 20 and the trade-off factor $\gamma$ in Eq. (9) is set to 0.3. The models are implemented in PyTorch and trained on NVIDIA GTX 1080 Ti GPUs. The same experimental settings are used for all of our trained models to ensure fair comparisons.

4.2 Overall Performance

We embed the proposed fully quantized super-resolution scheme into three state-of-the-art architectures, namely SRResNet, EDSR and SRGAN, to compare the low-bitwidth models with their full-precision counterparts and with bicubic interpolation. The results are shown in Tables 2, 3 and 4.

Evaluation on SRResNet. As shown in Table 2, we implement FQSR on SRResNet with multiple configurations. Compared to bicubic interpolation, the 4/4/4 model (i.e., weights, activations and skip connections are all quantized to 4 bits) surpasses it by 0.864 PSNR with ×2 up-scaling and by 1.542 PSNR with ×4 up-scaling on the Set5 dataset. By raising the skip-connection precision to 8 bits, performance is boosted by a large margin, e.g., gains of 1.814 PSNR on Set5 and 1.101 on Set14 with ×2 up-scaling. It is worth noting that, under both ×2 and ×4 up-scaling, the 6/6/6 and 6/6/8 models achieve results comparable to or better than the full-precision SRResNet. In addition, the 8/8/32 fully quantized model significantly outperforms the full-precision counterpart by 0.121 and 0.336 PSNR on Set5 for ×2 and ×4 up-scaling, respectively. The last row of each up-scaling block is a lite 6/6/8 configuration of the proposed model, named FQSR_Lite, whose nonlinear mapping module consists of only 10 residual blocks rather than 16, trading a small amount of accuracy for further computational savings. Under both ×2 and ×4 up-scaling, FQSR_Lite surpasses or matches the 6/6/6 configuration at a lower computational cost.

Methods Scale wt fm sc OPs Memory Set5 Set14 B100 Urban100 DIV2K
PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM
SRResNet [16] ×2 32 32 32 997.018 531.117 37.760 0.958 33.270 0.914 31.950 0.895 31.280 0.919 - -
Bicubic 32 32 32 - - 33.660 0.930 30.240 0.869 29.560 0.843 26.880 0.840 31.010 0.939
SRResNet_Bin [21] p1 32 32 997.018 531.117 35.660 0.946 31.560 0.897 - - 28.760 0.882 - -
SRResNet_BAM [28] p1 p1 32 168.894 5842.287 37.210 0.956 32.740 0.910 31.600 0.891 30.200 0.906 - -
SRResNet_w/o 32 32 32 155.749 177.039 36.863 0.954 32.536 0.907 31.379 0.887 29.525 0.896 33.268 0.934
FQSR (Ours) 4 4 4 62.314 66.390 34.524 0.927 31.302 0.879 30.412 0.862 28.946 0.879 30.419 0.908
4 4 8 62.314 132.779 36.338 0.945 32.403 0.901 31.367 0.882 29.982 0.899 32.357 0.927
4 4 32 62.314 531.117 36.854 0.953 32.710 0.908 31.583 0.890 30.430 0.909 32.985 0.935
6 6 6 93.470 99.854 37.299 0.954 33.069 0.910 31.869 0.892 31.160 0.916 33.671 0.938
6 6 8 93.470 132.779 37.541 0.957 33.236 0.913 31.966 0.894 31.398 0.920 33.964 0.941
8 8 8 124.627 132.779 37.555 0.958 33.202 0.914 31.972 0.896 31.356 0.921 33.452 0.942
8 8 32 124.627 531.117 37.881 0.959 33.408 0.915 32.093 0.897 31.712 0.924 34.424 0.943
FQSR_Lite (Ours) 6 6 8 63.894 132.779 37.349 0.956 33.070 0.912 31.851 0.895 30.964 0.916 33.753 0.939
SRResNet [16] ×4 32 32 32 383.487 132.777 31.760 0.888 28.250 0.773 27.380 0.727 25.540 0.767 - -
Bicubic 32 32 32 - - 28.420 0.810 26.000 0.703 25.960 0.668 23.140 0.658 26.660 0.852
SRResNet_Bin [21] p1 32 32 383.487 132.777 30.340 0.864 27.160 0.756 - - 24.480 0.728 - -
SRResNet_BAM [28] p1 p1 32 176.461 1460.580 31.240 0.878 27.970 0.765 27.150 0.719 24.950 0.745 - -
SRResNet_w/o 32 32 32 173.175 44.260 30.880 0.841 27.808 0.723 27.059 0.694 24.777 0.714 28.081 0.815
FQSR (Ours) 4 4 4 23.968 16.597 29.962 0.846 27.235 0.735 26.591 0.691 24.427 0.714 26.046 0.792
4 4 8 23.968 33.194 31.038 0.874 27.860 0.761 27.090 0.714 24.949 0.744 27.925 0.816
4 4 32 23.968 132.777 31.303 0.880 28.045 0.767 27.188 0.719 25.165 0.754 28.074 0.821
6 6 6 35.952 24.896 31.759 0.887 28.319 0.774 27.399 0.727 25.678 0.772 28.476 0.829
6 6 8 35.952 33.194 31.923 0.889 28.404 0.775 27.452 0.727 25.752 0.774 28.571 0.830
8 8 8 47.936 33.194 32.098 0.888 28.514 0.773 27.526 0.725 25.968 0.770 28.328 0.829
8 8 32 47.936 132.777 32.096 0.892 28.559 0.780 27.555 0.732 26.034 0.783 28.894 0.836
FQSR_Lite (Ours) 6 6 8 28.558 33.194 31.644 0.886 28.249 0.774 27.348 0.727 25.460 0.765 28.405 0.826
Table 2: Comparison between existing methods and our FQSR on SRResNet [16]. The OPs are in units of G (10^9) OPs and memory consumption is in units of M (10^6) Bytes. Similar to Table 1, "wt" represents weight quantization of convolutional layers; "fm" is the feature-map quantization; "sc" denotes the quantization of skip connections; "p1" indicates that the corresponding models are partially binarized.

Evaluation on EDSR. In the evaluation on EDSR, as shown in Table 3, the 4/4/4 model outperforms bicubic interpolation by 0.847 PSNR for ×2 and 0.753 PSNR for ×4 on Set5. Under both ×2 and ×4 up-scaling, the 6/6/6 and 6/6/8 models achieve results comparable to the full-precision EDSR. The 8/8/8 and 8/8/32 models outperform the full-precision baseline on most of the metrics. On ×2, the 8/8/32 model obtains a 0.131 PSNR improvement on Set14 and a 0.216 PSNR improvement on Urban100 over the full-precision model.

Methods Scale wt fm sc Set5 Set14 B100 Urban100 DIV2K
PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM
EDSR [17] ×2 32 32 32 37.885 0.958 33.425 0.915 32.106 0.897 31.777 0.924 34.471 0.944
Bicubic 32 32 32 33.660 0.930 30.240 0.869 29.560 0.843 26.880 0.840 31.010 0.939
FQSR (Ours) 4 4 4 34.507 0.923 31.366 0.876 30.461 0.853 29.245 0.881 30.904 0.905
4 4 8 35.017 0.926 31.685 0.881 30.711 0.859 29.617 0.887 31.387 0.908
4 4 32 35.027 0.926 31.717 0.882 30.723 0.860 29.665 0.888 31.387 0.908
6 6 6 37.209 0.952 33.050 0.909 31.823 0.890 31.297 0.917 33.610 0.936
6 6 8 37.373 0.953 33.184 0.910 31.925 0.892 31.545 0.920 33.787 0.938
8 8 8 37.886 0.958 33.498 0.915 33.498 0.897 31.938 0.926 34.433 0.944
8 8 32 37.953 0.959 33.556 0.916 32.148 0.897 31.993 0.926 34.487 0.944
EDSR [17] ×4 32 32 32 32.007 0.892 28.486 0.778 27.528 0.731 25.934 0.781 28.880 0.835
Bicubic 32 32 32 28.420 0.810 26.000 0.703 25.960 0.668 23.140 0.658 26.660 0.852
FQSR (Ours) 4 4 4 29.173 0.827 26.652 0.719 26.210 0.675 23.945 0.697 26.410 0.777
4 4 8 30.026 0.847 27.294 0.738 26.630 0.690 24.542 0.723 27.016 0.792
4 4 32 30.049 0.847 27.323 0.739 26.631 0.691 24.557 0.724 27.080 0.792
6 6 6 31.299 0.870 28.054 0.760 27.185 0.713 25.344 0.755 27.967 0.813
6 6 8 31.700 0.883 28.287 0.771 27.358 0.723 25.583 0.768 28.450 0.825
8 8 8 32.060 0.890 28.516 0.777 27.532 0.729 25.936 0.779 28.776 0.833
8 8 32 32.069 0.891 28.515 0.778 27.533 0.731 25.938 0.781 28.850 0.834
Table 3: The comparison of our FQSR with full-precision networks on EDSR [17] and Bicubic interpolation.

Evaluation on SRGAN. The evaluation on SRGAN is shown in Table 4. Similar to the evaluation on SRResNet, the 4/4/4 model achieves better performance than bicubic interpolation on most metrics and datasets. Surprisingly, on the ×2 setting, the 6/6/8 configuration outperforms the full-precision model on multiple metrics, e.g., a 0.075 PSNR improvement on Set5 and a 0.189 PSNR boost on Set14. Moreover, the 8/8/32 model outperforms the full-precision model on most of the metrics.

Methods Scale wt fm sc Set5 Set14 B100 Urban100 DIV2K
PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM
SRGAN [16] ×2 32 32 32 37.446 0.958 33.033 0.914 31.971 0.896 31.300 0.920 33.885 0.942
Bicubic 32 32 32 33.660 0.930 30.240 0.869 29.560 0.843 26.880 0.840 31.010 0.939
FQSR (Ours) 4 4 4 34.522 0.927 31.405 0.879 30.531 0.861 29.023 0.879 30.529 0.907
4 4 8 36.693 0.950 32.644 0.906 31.565 0.888 30.373 0.908 32.921 0.933
4 4 32 36.731 0.952 32.640 0.906 31.550 0.889 30.327 0.907 32.859 0.934
6 6 6 37.288 0.955 33.071 0.910 31.859 0.892 31.145 0.917 33.640 0.938
6 6 8 37.521 0.957 33.222 0.913 31.955 0.894 31.343 0.919 33.975 0.941
8 8 8 37.669 0.957 33.293 0.914 32.009 0.895 31.488 0.921 34.162 0.942
8 8 32 37.665 0.958 33.254 0.914 31.980 0.895 31.378 0.919 34.060 0.941
SRGAN [16] ×4 32 32 32 31.934 0.890 28.451 0.776 27.470 0.728 25.824 0.775 28.712 0.832
Bicubic 32 32 32 28.420 0.810 26.000 0.703 25.960 0.668 23.140 0.658 26.660 0.852
FQSR (Ours) 4 4 4 29.657 0.844 27.027 0.732 26.466 0.689 24.277 0.712 26.388 0.789
4 4 8 30.963 0.872 27.854 0.759 27.078 0.713 24.932 0.742 27.833 0.814
4 4 32 31.253 0.879 27.997 0.766 27.164 0.718 25.105 0.752 27.967 0.820
6 6 6 31.731 0.886 28.319 0.773 27.385 0.726 25.639 0.771 28.277 0.828
6 6 8 31.874 0.889 28.398 0.775 27.443 0.726 25.732 0.772 28.625 0.830
8 8 8 31.960 0.890 28.483 0.777 27.514 0.730 25.898 0.778 28.760 0.833
8 8 32 32.030 0.891 28.482 0.778 27.499 0.729 25.864 0.777 28.793 0.833
Table 4: The comparison of Fully Quantized Super-resolution networks with full-precision networks on SRGAN [16] and Bicubic interpolation.

4.3 Comparison with Existing SR Quantization Models

The comparison of the proposed FQSR model with Ma et al. [21] and Xin et al. [28] on SRResNet is shown in Table 2, since they all provide results on the SRResNet structure. It is worth noting that in [21] the models are trained for 500 epochs for SRResNet and in [28] the learning rate is halved every 200 epochs, while we train the FQSR models for only 300 epochs. With far fewer training epochs, the proposed FQSR models achieve better performance with less computational cost and memory consumption. Following [18, 26, 9], OPs is the sum of low-bit operations and floating-point operations, i.e., for low-bit networks, OPs = BOPs/64 + FLOPs. Only multiplication operations are counted in OPs. In terms of memory consumption, because of the long and short skip connections within the networks, we consider the peak memory consumption of each model at the inference stage. At most, the feature maps of three convolutional layers are counted for SRResNet_Bin and our FQSR networks (one for storing the long skip connection features, one for the short connection and another for the main trunk); only the features of one convolutional layer are counted for SRResNet_w/o $\mathcal{N}$, since it consists of just three convolutional layers without skip connections. For SRResNet_BAM, however, because the activation quantization of each layer takes the outputs of several preceding layers into account (these activations must be stored for re-use), the features of 33 convolutional layers are counted. We use 1020×678-resolution DIV2K images as inputs and ×2 up-scaling as the configuration. The OPs are reported in units of G (10^9) OPs and memory consumption in units of M (10^6) Bytes.
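A sketch of this OPs accounting for a single convolutional layer is given below; the exact way the bit-widths enter the BOPs term follows the convention of [18, 26] as we understand it, and is an assumption rather than the authors' counting script.

```python
def conv_ops(c_in, c_out, k, h_out, w_out, w_bits=32, a_bits=32):
    """Multiplication count of one conv layer under OPs = BOPs/64 + FLOPs.

    Full-precision multiplications are counted directly as FLOPs; quantized
    multiplications are counted as bit-operations weighted by the two
    bit-widths and divided by 64 (assumed convention, after [18, 26]).
    """
    macs = c_in * c_out * k * k * h_out * w_out     # multiplications only
    if w_bits == 32 or a_bits == 32:
        return float(macs)                          # floating-point path
    return macs * w_bits * a_bits / 64.0            # low-bit path: BOPs / 64

# Example: a 3x3, 64->64 convolution on a 1020x678 feature map at 6/6 bits.
print(conv_ops(64, 64, 3, 678, 1020, w_bits=6, a_bits=6) / 1e9, "GOPs")
```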

In the table, SRResNet_Bin is the binary SR network from [21]. Because only the weights of each layer are quantized, floating-point operations are still required in every layer, so the OPs and memory consumption are not reduced. SRResNet_BAM is the bit-accumulation model proposed in [28]. It binarizes both the activations and weights of each convolutional layer, so it reduces the OPs and memory consumption to some extent. However, it does not quantize the convolutional layers before and after upsampling, which introduces a huge OPs consumption. This is because in SR models the number of convolutional channels is raised before upsampling and the spatial size of the features is multiplied after the upsampling operation. SRResNet_w/o $\mathcal{N}$ is a model consisting of only one convolutional layer in $\mathcal{F}$ and two convolutional layers in $\mathcal{R}$. The results show that even without $\mathcal{N}$, this simple full-precision super-resolution model achieves promising performance, so keeping full-precision sub-networks dramatically shrinks the significance of model quantization.

In this case, from the table, we can see that with only approximately 1/2 of the OPs and 1/50 of the memory consumption (the 6/6/6 model) on ×2 up-scaling, the FQSR model achieves better results on multiple metrics and datasets than the SRResNet_BAM models. If we increase the bit numbers, the gap becomes larger. Finally, our lite 6/6/8 model achieves better results than SRResNet_BAM with fewer OPs.

4.4 Ablation Study

Models DAIA QCL Set5 Set14 B100 Urban100 DIV2K
PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM
1 ✗ ✗ 35.536 0.944 31.775 0.898 30.972 0.883 28.893 0.888 31.792 0.926
2 ✓ ✗ 36.372 0.945 32.395 0.901 31.397 0.884 30.164 0.902 32.455 0.927
3 ✓ ✓ 36.854 0.953 32.710 0.908 31.583 0.890 30.430 0.909 32.985 0.935
Table 5: Ablation study on each component. The experiments are conducted with the 4/4/32 configuration and ×2 up-scaling.

Effect of different components. In this section, we examine the effect of each component of our FQSR model. The experimental results are reported in Table 5. We empirically find that both the distribution-aware trainable quantization interval of Sec. 3.2 and the calibration loss of Sec. 3.4 are critical for the model to attain promising super-resolution performance. With the trainable quantizers alone, the PSNR of the baseline model is raised from 35.536 to 36.372 on Set5, and significant improvements on the other metrics and datasets can also be observed. When equipped with both strategies, the performance is further boosted to 36.854 on Set5. The ablation study demonstrates the effectiveness of the proposed methods.

Effect of the trade-off factor γ. This section examines the sensitivity of our FQSR model to the setting of γ in Eq. (9). Figure 5 shows the PSNR and SSIM performance of FQSR on different datasets. FQSR generally performs stably w.r.t. different γ settings. From the figure, the curves rise to a peak and then fall for both PSNR and SSIM. In general, a moderate value of γ is recommended for the FQSR model to achieve the best performance.

Figure 5: PSNR and SSIM performance of FQSR using different loss trade-off factors γ on different datasets.

5 Conclusion

In this paper, we have proposed a fully quantized super-resolution framework that covers all network layers and skip connections, as a practical solution for achieving a good trade-off between accuracy and efficiency. We have also identified multiple difficulties faced by current low-bitwidth SR networks during training, namely 1) activation and weight distributions that differ vastly across layers; and 2) the inaccurate approximation introduced by quantization. To tackle these challenges, we have first proposed a distribution-aware interval adaptation strategy to automatically decide the quantization intervals during training. We have further proposed a quantization calibration loss to explicitly minimize the quantization error. We have evaluated our method on multiple state-of-the-art deep super-resolution models and five benchmark datasets. The extensive experimental results show that our proposed FQSR achieves state-of-the-art results while saving a considerable amount of computational cost and memory compared to the full-precision counterparts and competing methods.

Acknowledgement HW’s work was in part supported by SA State Government through its Research Consortia Program under the Premier’s Research and Industry Fund.

References

  • [1] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
  • [2] Marco Bevilacqua, Aline Roumy, Christine Guillemot, and Marie Line Alberi-Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. 2012.
  • [3] Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos. Deep learning with low precision by half-wave gaussian quantization. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5918–5926, 2017.
  • [4] Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018.
  • [5] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Learning a deep convolutional network for image super-resolution. In Eur. Conf. Comput. Vis., pages 184–199, 2014.
  • [6] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE T. Pattern Analysis and Machine Intelligence, 38(2):295–307, 2015.
  • [7] Chao Dong, Chen Change Loy, and Xiaoou Tang. Accelerating the super-resolution convolutional neural network. In Eur. Conf. Comput. Vis., pages 391–407. Springer, 2016.
  • [8] Steven K Esser, Jeffrey L McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S Modha. Learned step size quantization. arXiv preprint arXiv:1902.08153, 2019.
  • [9] Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In Eur. Conf. Comput. Vis., pages 544–560, 2020.
  • [10] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In Int. Conf. Learn. Represent., 2016.
  • [11] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
  • [12] Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5197–5206, 2015.
  • [13] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In Adv. Neural Inform. Process. Syst., pages 4107–4115, 2016.
  • [14] Zheng Hui, Xinbo Gao, Yunchu Yang, and Xiumei Wang. Lightweight image super-resolution with information multi-distillation network. In ACM Int. Conf. Multimedia, pages 2024–2032, 2019.
  • [15] Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Jae-Joon Han, Youngjun Kwak, Sung Ju Hwang, and Changkyu Choi. Learning to quantize deep networks by optimizing quantization intervals with task loss. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4350–4359, 2019.
  • [16] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4681–4690, 2017.
  • [17] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proc. IEEE Conf. Computer Vision Pattern Recognition Workshops, pages 136–144, 2017.
  • [18] Zechun Liu, Zhiqiang Shen, Marios Savvides, and Kwang-Ting Cheng. Reactnet: Towards precise binary neural network with generalized activation functions. arXiv preprint arXiv:2003.03488, 2020.
  • [19] Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, and Kwang-Ting Cheng. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. In Eur. Conf. Comput. Vis., 2018.
  • [20] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
  • [21] Yinglan Ma, Hongyu Xiong, Zhe Hu, and Lizhuang Ma. Efficient super resolution using binarized neural network. In Proc. IEEE Conf. Computer Vision Pattern Recognition Workshops, pages 0–0, 2019.
  • [22] David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Int. Conf. Comput. Vis., volume 2, pages 416–423. IEEE, 2001.
  • [23] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In Eur. Conf. Comput. Vis., pages 525–542, 2016.
  • [24] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1874–1883, 2016.
  • [25] Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, and Lei Zhang. Ntire 2017 challenge on single image super-resolution: Methods and results. In Proc. IEEE Conf. Computer Vision Pattern Recognition Workshops, pages 114–125, 2017.
  • [26] Ying Wang, Yadong Lu, and Tijmen Blankevoort. Differentiable joint pruning and quantization for hardware efficiency. In Eur. Conf. Comput. Vis., pages 259–277, 2020.
  • [27] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE T. Image Process., 13(4):600–612, 2004.
  • [28] Jingwei Xin, Nannan Wang, Xinrui Jiang, Jie Li, Heng Huang, and Xinbo Gao. Binarized neural network for single image super resolution. In Eur. Conf. Comput. Vis., 2020.
  • [29] Roman Zeyde, Michael Elad, and Matan Protter. On single image scale-up using sparse-representations. In International conference on curves and surfaces, pages 711–730. Springer, 2010.
  • [30] Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, and Gang Hua. Lq-nets: Learned quantization for highly accurate and compact deep neural networks. In Eur. Conf. Comput. Vis., 2018.
  • [31] Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating very deep convolutional networks for classification and detection. IEEE Trans. Pattern Anal. Mach. Intell., 38(10):1943–1955, 2016.
  • [32] Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In Eur. Conf. Comput. Vis., pages 286–301, 2018.
  • [33] Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
  • [34] Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, and Ian Reid. Towards effective low-bitwidth convolutional neural networks. In IEEE Conf. Comput. Vis. Pattern Recog., 2018.
  • [35] Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, and Ian Reid. Structured binary neural networks for accurate image classification and semantic segmentation. In IEEE Conf. Comput. Vis. Pattern Recog., 2019.
  • [36] Zhuangwei Zhuang, Mingkui Tan, Bohan Zhuang, Jing Liu, Yong Guo, Qingyao Wu, Junzhou Huang, and Jinhui Zhu. Discrimination-aware channel pruning for deep neural networks. In Adv. Neural Inform. Process. Syst., pages 875–886, 2018.