The popularity of mobile devices makes it easy for users to create videos. In general, however, the inevitable device shake during shooting produces an annoying visual experience. As a result, global motion blur is increasingly considered by device manufacturers when developing new products. One solution is a robust video deblurring method designed to adapt to different shooting scenarios.
Motivated by the needs of these diverse deblurring scenarios, this paper proposes a motion-robust video deblurring method that is effective for both global and local blur. In recent years, more and more deblurring works have emerged. In image deblurring, a major trend is to use a multi-scale structure Nah_2017_CVPR; Tao_2018_CVPR; multiscaleprior or a pyramid structure deblurganv2, so that the network has varying receptive fields and can effectively deal with different degrees of blur. For the more complex task of video deblurring, in addition to conventionally aligning multiple frames and deblurring the center frame, as in EDVR edvr and DeBlurNet deblurnet, research often focuses on making full use of inter-frame redundancy, for example via recurrent structures recurrentKim_2017_ICCV; recurrentZhang_2018_CVPR; recurrentNah_2019_CVPR, 3D convolution adversarialSTlearning; anempirical, or encoding each frame separately and then aggregating the encodings for decoding ntire. Research on the utilization of multi-scale receptive fields and inter-frame information can therefore be considered relatively mature. Different from these perspectives, we study the importance of the prior information of the data itself for video deblurring.
Challenging blurs, including blur in low-contrast and severely moving areas (e.g., objects close to the camera), non-uniform blur (e.g., local blur), and other special cases, have not been well solved by existing video deblurring methods, yet they constitute a very important problem. This inspires us to dig out the inherent information of the scenes themselves when designing priors, and to make our design plug-and-play and easy to migrate.
As a typical representative of conventional methods, EDVR mainly consists of predeblur, alignment, fusion, and reconstruction modules, as shown in the golden background area of Fig. 4. Among them, the predeblur module is essentially a pyramid network composed of residual blocks, rather than any existing deblurring work; it is inserted before the alignment module to improve alignment accuracy. In the alignment module, each neighboring frame is then aligned to the center one at the feature level. The temporal and spatial attention (TSA) fusion module next assigns pixel-level aggregation weights to each frame according to correlation, and further fuses image information across frames. The fused feature finally passes through a reconstruction module to restore the center frame.
However, by analyzing its design and deblurring results, we have some interesting observations:
Temporal correlation is not effectively and deeply utilized to help deblurring. The aligned features are obtained by simple concatenation, which lacks the mining of motion information. Moreover, 2D convolution has limited capability to model long-term temporal dependency.
The deblurred frames are prone to structure distortion, e.g., straight lines becoming twisty, as shown in the red dashed boxes in Fig. 1 (a) and (b). This is because the model has no structure-preserving constraint, either in the model or in the loss design.
The larger the optical flow, the more difficult it is to eliminate blur. Observing the red and yellow blocks in Fig. 1 (a)(b)(c), it is not difficult to find that areas with large motion, such as the leaves, have poor recovery of detail. This is because the model lacks prior information about motion, and thus cannot adjust the intensity of deblurring adaptively.
The lower the contrast, the harder it is to remove blur. Similarly, as the red and blue boxes in Fig. 1 (a)(b)(d) show, the low-contrast areas on the door edge and the tire are not well restored, since the model lacks the ability to perceive the contrast distribution of scenes.
The original sharp information cannot be preserved. For example, in Fig. 2, if we input a sharp frame mixed with blurred frames, the output is seriously worse than the input, which indicates that the model has no ability to distinguish sharp from blurred inputs (non-uniform blur).
The loss design, namely the Charbonnier loss charbonnierloss, calculates the error of each pixel indiscriminately; it can neither effectively restrain artifacts nor reflect the influence of non-uniform blur.
Based on the above analysis, we enhance the model’s perception of scenes through the use of prior information, the way features are extracted, and the design of constraints. Our contributions are as follows:
We utilize 3D group convolution to encode heterogeneous prior information, explicitly supplementing the model’s multidimensional perception of scenes, which was previously learned poorly because it was considered in neither the model nor the loss design. The heterogeneous priors involve structure, motion, and contrast; they effectively constrain the model’s artifacts and enhance detail recovery for challenging blurs. This corresponds to observations (1) to (4).
We increase the model’s ability to distinguish sharp and blurred inputs in both the temporal and spatial dimensions. In the temporal dimension, a blur reasoning vector is embedded in the aligned features to represent the blur degree of each frame. In the spatial dimension, optical-flow-based attention indicates the spatial blur degree within a single frame. This corresponds to observation (5).
A dual loss function, operating at the pixel and perceptual levels simultaneously, is designed to effectively constrain blur removal and guarantee subjective quality. This corresponds to observation (6).
A video deblurring dataset for hand pose estimation is constructed for the first time. We then verify that the estimation accuracy improves after deblurring. This indirectly reflects the effectiveness of our method for locally blurry scenes, and further indicates that our model can bring not only visual gains but also machine-task gains.
The use of prior information benefits the video deblurring task, especially in terms of subjective quality, as shown in Fig. 3, where our results exhibit more detail restoration and structure fidelity.
2 Related work
Video deblurring. As one of the first treatments of blur in conjunction with optical flow, tra_deblur1 can calculate accurate flow in the presence of spatially-varying motion blur. Instead of assuming that the captured scenes are static, tra_deblur2 first proposes an effective general video deblurring method for dynamic scenes. tra_deblur3 exploits semantic segmentation to understand the scene contents and further presents a pixel-wise non-linear kernel model to explain motion blur. Since the first end-to-end data-driven video deblurring method, DeBlurNet, was proposed deblurnet, the success of deep learning has brought significant progress to video deblurring in the past two years adversarialSTlearning; anempirical; recurrentKim_2017_ICCV; recurrentZhang_2018_CVPR; recurrentNah_2019_CVPR; reblur2deblur; stfan; rdurud; deblurnet; edvr. Different from image deblurring, which only focuses on mining spatial information, for example using multi-scale receptive fields Nah_2017_CVPR; Tao_2018_CVPR; multiscaleprior, video deblurring also needs to focus on how to effectively use redundant information in the temporal domain to assist deblurring.
The conventional approach is to stack multiple frames together as a single input and then use 2D convolutions to extract features deblurnet; anempirical; edvr. Compared to the classic DeBlurNet deblurnet, anempirical pays more attention to the complexity and parameter count of the model. Before applying 2D convolutions, EDVR aligns adjacent frames to better aggregate information edvr.
Some works use recurrent neural networks (RNNs) for sequential data processing recurrentKim_2017_ICCV; recurrentZhang_2018_CVPR; recurrentNah_2019_CVPR. recurrentKim_2017_ICCV designs a spatio-temporal recurrent network that extends the receptive field while keeping the network small, and uses dynamic temporal blending to enforce temporal consistency. recurrentNah_2019_CVPR presents an RNN-based video deblurring method that exploits both intra-frame and inter-frame recurrent schemes and updates the hidden state multiple times internally during a single time step. Another spatially variant RNN for dynamic scene deblurring is proposed in recurrentZhang_2018_CVPR, where the weights of the RNN are learned by a deep CNN.
Another line of work studies how to extract pixel-wise information to handle spatially variant blur reblur2deblur; stfan; rdurud. Similar to CycleGAN’s circular idea cyclegan, reblur2deblur first deblurs each frame separately, then estimates optical flow and pixel-wise blur kernels to reblur the estimated sharp images, which allows the network to be fine-tuned via self-supervised learning. Similarly, stfan proposes a spatial-temporal network for video deblurring based on filter adaptive convolutional layers; the network dynamically generates element-wise alignment and deblurring filters in order. rdurud presents a motion deblurring kernel learning network that predicts the per-pixel deblur kernel and a residual image with two novel base blocks, named residual down-up and residual up-down blocks.
In addition, 3D convolution has recently been used in video deblurring to extract spatio-temporal information simultaneously adversarialSTlearning. adversarialSTlearning applies 3D convolutions to RGB frames and uses a discriminator for adversarial training. In contrast, we apply 3D convolution to prior maps, and furthermore realize the fusion of heterogeneous priors by means of group convolution.
Prior information. In human action recognition, 3dconv designs a set of hardwired kernels in the network to encode prior knowledge, including gray, gradient, and optical flow in the horizontal and vertical directions, which is shown to usually lead to better performance than random initialization. In image deblurring, there are also works using different kinds of prior information darkchannelprior; extremechannelprior; deblurtext; multiscaleprior; discriminativeprior; classprior; featureprior. Inspired by the observation that the dark channel of blurred images is less sparse, darkchannelprior presents a blind image deblurring method based on the dark channel prior. Similarly, extremechannelprior defines the bright channel prior and takes advantage of both the bright and dark channel priors to deblur images. In deblurtext, an $L_0$-regularized prior based on intensity and gradient is designed for text image deblurring. multiscaleprior proposes a blind image deblurring method using a multi-scale latent structure prior. Recently, discriminativeprior formulates the image prior as a binary classifier, i.e., whether the image is clear or not, and presents a deblurring method based on this data-driven discriminative prior. classprior devises a class-specific prior based on band-pass filter responses and incorporates it into deblurring. In featureprior, two-directional two-dimensional PCA is used to extract a feature-based sparse representation prior, thus mitigating the influence of blur and achieving better image matching. In summary, a well-designed prior plays an important role in the deblurring task.
Unlike these empirical priors, the heterogeneous priors we design are directly related to each frame itself; they are essentially explicit representations of the input from multiple angles and jointly serve, together with the frames, as the input to the network. It should be noted that, different from conventional priors, the “prior” in our work refers on the one hand to the inherent information of scenes, and on the other hand to the practice of using structure, motion, and contrast information to assist deblurring. To the best of our knowledge, this is also the first work to explicitly utilize prior information in video deblurring.
3 Proposed method
The overall diagram of our proposed PRiOr-enlightened and MOTION-robust video deblurring (PROMOTION) method is presented in Fig. 4. Given 5 consecutive input frames $\{I_{t-2}, \ldots, I_{t+2}\}$, where $t$ is the center frame’s number in the sequence, we denote the center frame as $I_t$ and the other frames as neighboring frames. The aim of video deblurring is to restore a sharp center frame $\hat{I}_t$ that is close to its corresponding ground truth $I_t^{GT}$. On the one hand, we explicitly calculate the heterogeneous prior information of the input frame stack and encode it with 3D convolution to obtain a prior feature map, which is then multiplied by the features output from the TSA fusion module; this is described in Sec. 3.1. On the other hand, we design a blur reasoning vector to rectify the aligned features and introduce optical-flow-based attention into the loss function, to increase the model’s ability to perceive uneven blur distribution in the spatio-temporal dimension; these are described in Sec. 3.2 and Sec. 3.4, respectively. Finally, in order to let the model learn deblurring patterns more flexibly, channel attention is introduced into the basic residual blocks, as described in Sec. 3.3.
3.1 Heterogeneous prior information
As discussed earlier in the introduction, we use the contrast, structure and motion priors of scenes to improve the model’s structural fidelity and detail recovery abilities for low contrast and large optical flow regions.
Contrast group. For each frame in the input stack, we calculate its contrast map as follows:
$$C(x) = \sum_{y \in \mathcal{N}_4(x)} \big(G(x) - G(y)\big)^2,$$
where $G$ is the gray map of $I$, and $\mathcal{N}_4(x)$ denotes the 4-neighborhood of pixel $x$. From the contrast map in Fig. 6, it can be seen that high-contrast regions have larger activation values and low-contrast regions have smaller ones.
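For illustration, the contrast prior above can be sketched in a few lines of NumPy; the sum-of-squared-differences form over the 4-neighborhood is an assumption for the exact formula:

```python
import numpy as np

def contrast_map(gray):
    """Local contrast via squared differences to the 4-neighborhood.

    `gray` is a 2-D float array. This is a sketch assuming a
    sum-of-squared-differences definition over the 4-neighborhood;
    borders are handled by edge padding.
    """
    h, w = gray.shape
    g = np.pad(gray.astype(np.float64), 1, mode="edge")
    center = g[1:1 + h, 1:1 + w]
    c = np.zeros((h, w), dtype=np.float64)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        c += (center - g[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]) ** 2
    return c
```

A flat region yields zero contrast, while pixels adjacent to an intensity edge receive large values, matching the behavior described for Fig. 6.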
Gradient group. Similarly, we calculate the gradient map for each input frame as given in Eq. (3), and use it to increase the model’s sensitivity to structure information.
As shown in the pink box in the upper right corner of Fig. 4, the gradient map highlights structural information in a scene, such as the regular lines on the wall. This gives the model better structural fidelity.
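As a minimal sketch, one plausible gradient map is the finite-difference gradient magnitude; the exact form of Eq. (3) is not reproduced in this excerpt, so this is an assumption:

```python
import numpy as np

def gradient_map(gray):
    """Gradient-magnitude map used as the structure prior (a sketch).

    Assumes a simple central finite-difference gradient; the paper's
    Eq. (3) may define the gradient map differently.
    """
    gray = gray.astype(np.float64)
    gy, gx = np.gradient(gray)          # derivatives along rows and columns
    return np.sqrt(gx ** 2 + gy ** 2)   # per-pixel gradient magnitude
```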
Motion group. Instead of using the optical flow maps of the whole input stack, we only estimate the optical flow map of the center frame by using flownet2.0, while for the neighboring frames, we use the difference maps between the center frame and each neighboring frame to represent the relative motion information. This saves the calculation time of optical flow.
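The motion group described above can be sketched as follows; `center_flow_mag` is a stand-in for the FlowNet2.0 output, which is not reproduced here:

```python
import numpy as np

def motion_group(frames, center_flow_mag):
    """Build the motion prior group for a frame stack (a sketch).

    In the paper, the center frame's optical flow comes from FlowNet2.0;
    here `center_flow_mag` is any H x W magnitude map standing in for it.
    Neighbors are represented by absolute difference maps to the center
    frame, avoiding extra flow estimations.
    """
    c = len(frames) // 2
    group = []
    for i, f in enumerate(frames):
        if i == c:
            group.append(center_flow_mag.astype(np.float64))
        else:
            group.append(np.abs(f.astype(np.float64)
                                - frames[c].astype(np.float64)))
    return np.stack(group)  # shape: (num_frames, H, W)
```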
Then 3D group convolutions are used to encode the spatio-temporal prior information more efficiently. The detailed parameters of the heterogeneous prior encoding network are presented in Table 1.
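To illustrate the idea of group convolution over heterogeneous priors, here is a naive NumPy sketch of a grouped 3D convolution; the kernel sizes are illustrative and not the configuration of Table 1:

```python
import numpy as np

def grouped_conv3d(x, kernels):
    """Naive grouped 3-D convolution (valid padding, stride 1).

    `x` has shape (G, T, H, W): one channel per prior group (e.g.
    contrast, gradient, motion). `kernels` has shape (G, kt, kh, kw);
    each group is convolved only with its own kernel, so the
    heterogeneous priors are encoded independently before later fusion.
    This mirrors the idea of 3D *group* convolution, not the paper's
    actual layer configuration.
    """
    G, T, H, W = x.shape
    _, kt, kh, kw = kernels.shape
    out = np.zeros((G, T - kt + 1, H - kh + 1, W - kw + 1))
    for g in range(G):
        for t in range(out.shape[1]):
            for i in range(out.shape[2]):
                for j in range(out.shape[3]):
                    patch = x[g, t:t + kt, i:i + kh, j:j + kw]
                    out[g, t, i, j] = np.sum(patch * kernels[g])
    return out
```

Because each group has its own kernel, zeroing one group’s kernel leaves the other groups’ outputs unchanged, which is exactly the independence that grouping provides.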
3.2 Blur reasoning vector
In order to enable the model to place corresponding emphasis on frames with different degrees of blur in the temporal dimension, a blur reasoning vector is designed to explicitly tell the model which frames it should pay more attention to.
Specifically, we first filter each frame with the Laplacian operator, as shown in Fig. 6. It is not difficult to find that the blurry image on the left has a smooth filtered map, while the clear one on the right has a sharper filtered result. Then, we calculate the variance of these filtered maps to reflect the blur degree of each frame: the blurrier the frame, the smaller its variance. Therefore, the blur reasoning vector can be represented by:
$$v = \left(\frac{\sigma_1^2}{\sum_{i=1}^{5}\sigma_i^2},\; \ldots,\; \frac{\sigma_5^2}{\sum_{i=1}^{5}\sigma_i^2}\right),$$
where $\sigma_i^2$ is the variance of the $i$-th filtered map. The size of $v$ is 5, and each dimension indicates the importance of the corresponding frame. Finally, this vector is multiplied by the aligned feature along the depth dimension, and we obtain the refined aligned feature.
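The computation of the blur reasoning vector can be sketched as follows; normalizing the variances to sum to 1 is our assumption for the exact weighting:

```python
import numpy as np

def blur_reasoning_vector(frames):
    """Variance-of-Laplacian blur indicator per frame (a sketch).

    Sharper frames yield larger Laplacian variance. The vector is
    normalized so its entries sum to 1; the paper's exact normalization
    may differ.
    """
    lap = np.array([[0, 1, 0],
                    [1, -4, 1],
                    [0, 1, 0]], dtype=np.float64)
    variances = []
    for f in frames:
        f = f.astype(np.float64)
        h, w = f.shape
        # valid-mode 2-D convolution with the 3x3 Laplacian kernel
        out = np.zeros((h - 2, w - 2))
        for di in range(3):
            for dj in range(3):
                out += lap[di, dj] * f[di:di + h - 2, dj:dj + w - 2]
        variances.append(out.var())
    v = np.asarray(variances)
    return v / v.sum()
```

A linear intensity ramp (perfectly smooth) gets near-zero weight, while a high-frequency pattern dominates the vector, so the sharpest frame in the stack receives the most attention.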
3.3 Channel attention enhanced residual block
Inspired by the success of channel attention in super resolution rcan, we also introduce this mechanism into the basic residual block to adaptively enhance and organize the patterns necessary for deblurring. As shown in Fig. 7, on the one hand, the channel dimension is first compressed and then restored, so that the effective information can be amplified. On the other hand, by adaptively learning the importance of each channel, the model can express deblurring patterns more flexibly.
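A minimal sketch of this squeeze-then-restore channel attention (global pooling, channel compression, gating), with illustrative weight shapes rather than the paper's exact configuration:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Channel attention in the style of RCAN/SE blocks (a sketch).

    `feat` has shape (C, H, W). `w1` (C/r, C) compresses the channel
    dimension, `w2` (C, C/r) restores it; a sigmoid gate then rescales
    each channel of the feature map. Weight shapes are illustrative
    assumptions.
    """
    C = feat.shape[0]
    squeezed = feat.reshape(C, -1).mean(axis=1)       # global average pool
    hidden = np.maximum(w1 @ squeezed, 0.0)           # compress + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))       # restore + sigmoid
    return feat * gate[:, None, None]                 # per-channel rescale
```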
3.4 Dual loss function
In order to increase the naturalness of deblurred videos, we constrain the training process from two complementary levels, namely pixel level and perceptual level.
At the pixel level, we believe that blurred areas are often areas undergoing motion or change, and these are exactly the focus of viewers watching videos. For example, in the hand pose estimation scenario, compared with the still background, people usually pay more attention to the movement of the hands. As a result, the optical flow information of the current frame is used as an attention indicator, to increase the model’s sensitivity to non-uniform blur in the spatial domain, which complements the blur reasoning vector perceiving in the temporal domain. The pixel-wise loss can therefore be represented as below:
$$\mathcal{L}_{pix} = \frac{1}{HW}\sum_{x}\big(1 + M_f(x)\big)\odot\sqrt{\big(\hat{I}_t(x) - I_t^{GT}(x)\big)^2 + \varepsilon^2},$$
where $\odot$ denotes element-wise multiplication and $\varepsilon$ is a small constant. $M_f$ is the normalized optical flow map. It should be noted that the optical flow is first converted to an RGB image according to the color encoding system, and then converted to a grayscale image; the flow map is therefore a single-channel map of size $H \times W$. $H$ and $W$ are the height and width of one frame.
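A minimal NumPy sketch of this flow-attended Charbonnier loss; the `(1 + flow_map)` weighting and the default `eps` value are assumptions for illustration:

```python
import numpy as np

def flow_attended_charbonnier(pred, gt, flow_map, eps=1e-3):
    """Charbonnier loss weighted by a normalized flow-magnitude map.

    `flow_map` lies in [0, 1] and marks regions with stronger motion,
    which receive a larger weight. The (1 + flow_map) form and `eps`
    are illustrative assumptions, not values from the paper.
    """
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    per_pixel = np.sqrt(diff ** 2 + eps ** 2)   # Charbonnier penalty
    weighted = (1.0 + flow_map) * per_pixel     # emphasize moving areas
    return weighted.mean()
```

With zero error everywhere, the loss reduces to roughly `eps`; doubling the flow weight doubles the contribution of a region, which is the intended emphasis on moving areas.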
At the perceptual level, in order to describe the subjective visual differences between deblurred frames and ground truth, we use a neural network-based perceptual similarity perceptualsimilarity to represent this high-level distance:
$$\mathcal{L}_{per} = F\big(\hat{I}_t, I_t^{GT}\big),$$
where $F$ is the perceptual similarity network, whose weights have been pretrained on a large-scale subjective dataset and whose output is a score ranging from 0 to 1.
Therefore, the overall loss function is:
$$\mathcal{L} = \mathcal{L}_{pix} + \lambda\,\mathcal{L}_{per}.$$
Here $\lambda$ is a balance factor adjusting the relative importance of the pixel-level and perceptual-level losses, and we set it empirically. In this way, the model can not only handle non-uniform blur, but also guarantee the subjective quality of deblurred videos.
4 Experiments
In this section, we first verify the motion robustness of our model in global and local blurry scenarios. For global blur, two well-known video deblurring datasets for natural scenes, involving camera shake and object motion, are used for evaluation in Sec. 4.1. For local blur, in Sec. 4.2, we consider a specific application scenario, hand pose estimation, and construct the first video deblurring dataset for hand pose estimation; in this case, the estimation accuracy is also used as one of the measurements of the deblurring effect. Then we conduct ablation studies to prove the validity of each module in Sec. 4.3. Finally, the effectiveness in handling challenging blurs, and the complexity and parameters of the model, are further discussed in Sec. 4.4.
4.1 Global blur
[Table 2 — Metric | Fan’s anempirical | Sim’s rdurud | PROMOTION; anempirical uses 8× self-ensemble.]
[Table 3 — Metric | DeblurGAN deblurgan | Nah’s Nah_2017_CVPR | SRN Tao_2018_CVPR | DeBlurNet deblurnet | EDVR edvr | PROMOTION.]
REDS dataset. This dataset, used in the NTIRE19 challenge, includes 300 videos: 240 for training, 30 for validation, and the remaining 30 for testing ntire. However, only the training and validation sets (270 videos in total) are currently available online. For a more comprehensive evaluation, we consider two kinds of dataset division. One is the standard division (240 training videos, with the 30 validation videos used for testing); the other follows the same setting as EDVR, which won the championship in the challenge, for fair comparison (266 training videos and 4 specific videos for testing) edvr. Each video has 100 frames, with a resolution of 720×1280 and a frame rate of 24 fps.
Under the standard setting, we compare our method with the top-performing methods in the challenge anempirical; rdurud; the average results are given in Table 2. Even though anempirical uses eight-times self-ensemble, our method still achieves the best performance without using any data augmentation trick.
Under the second setting, as presented in Table 3, compared with the original EDVR and four other deblurring methods, namely DeblurGAN deblurgan, Nah’s Nah_2017_CVPR, SRN Tao_2018_CVPR, and DeBlurNet deblurnet, our model continues to obtain the best performance in terms of both PSNR and SSIM. Let us analyze the performance gap with EDVR more intuitively: as the tire, window, and roof of the black car in Fig. 8 show, our method restores more details in these challenging low-contrast areas.
GoPro dataset. To further demonstrate the robustness of our model, we also test it on another video deblurring dataset for natural scenes, named GoPro Nah_2017_CVPR, which includes 22 training sequences and 11 testing sequences. The sequences have unequal lengths but the same resolution of 720×1280. It should be noted that the dataset provides two kinds of blurry images: gamma-corrected and linear CRF versions.
For the evaluation of EDVR and our model on the GoPro dataset, we finetune the models pretrained on the REDS dataset. As presented in Table 4, where all models are tested on the linear CRF version, we compare with both video deblurring recurrentZhang_2018_CVPR; rdurud; edvr and image deblurring methods Nah_2017_CVPR; deblurganv2; Tao_2018_CVPR. Our model outperforms the state-of-the-art methods by a large margin in both categories. Compared with the improvement on the REDS dataset, we attribute this to both the effectiveness of prior information and the lower image quality of the GoPro dataset. First, using prior information can alleviate the model’s dependence on datasets and help the model obtain information about the current data distribution more directly and comprehensively. Second, due to a shielding effect, improvements are obvious on a low-quality basis, while on a high-quality basis the differences are less conspicuous. Qualitative results are given in Fig. 9. Although different pedestrians in the scene have various motion modes, it is easy to see that our method better handles this non-uniform blur and achieves a deblurring effect closer to the ground truth.
[Table 4 — Metric | Nah’s Nah_2017_CVPR | DeblurGAN-v2 deblurganv2 | SRN Tao_2018_CVPR (image deblurring) | Zhang’s recurrentZhang_2018_CVPR | Sim’s rdurud | EDVR edvr | PROMOTION (video deblurring).]
4.2 Local blur
In order to show the generalization of the model, we consider a specific downstream task with local blur: hand pose estimation. First, a video deblurring dataset for hand pose estimation is synthesized. Then we test the improvement in estimation accuracy after deblurring to indirectly reflect the deblurring effect.
Based on the existing hand pose estimation dataset named First-Person Hand Action Benchmark firstperson, we synthesize blur for its sequences. This dataset provides RGB-D frames and their corresponding hand pose labels. All the sequences have a frame rate of 30 fps and a resolution of 1080×1920. We then follow the same blur synthesis process as the REDS dataset ntire. The frames are interpolated to virtually increase the frame rate up to 1920 fps by frameinterpolation, which makes the averaged frames exhibit smooth and realistic blur without spikes and step artifacts. Finally, the virtual frames are averaged in the signal space to mimic the camera imaging pipeline, as below Nah_2017_CVPR:
$$B = g\!\left(\frac{1}{M}\sum_{i=1}^{M} \hat{S}[i]\right)$$
Here $g(x) = x^{1/\gamma}$ is an approximated camera response function, where $\gamma$ is usually set to 2.2. $B$ represents the blurred frame, and $\hat{S}[i]$ is the $i$-th latent sharp frame, which can be calculated from our observed sharp frame $S[i]$ as $\hat{S}[i] = g^{-1}(S[i])$. The resulting blurry videos have a frame rate of 30 fps. The synthesized blur effect is shown in Fig. 10 (a) and (c).
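The averaging pipeline above can be sketched as follows, assuming frames normalized to [0, 1]:

```python
import numpy as np

def synthesize_blur(sharp_frames, gamma=2.2):
    """Average high-frame-rate sharp frames in approximate signal space.

    Following the REDS/Nah pipeline described above: observed frames are
    linearized with the inverse CRF g^{-1}(x) = x^gamma, averaged, then
    mapped back with g(x) = x^{1/gamma}. Frames are floats in [0, 1].
    """
    linear = [np.clip(f, 0.0, 1.0) ** gamma for f in sharp_frames]
    avg = np.mean(linear, axis=0)       # temporal average in signal space
    return avg ** (1.0 / gamma)         # back to the observed domain
```

Averaging identical frames leaves the image unchanged, while averaging frames that differ produces the smooth mixture that mimics motion blur.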
We finetune the model pretrained on the REDS dataset, and use the state-of-the-art hand pose estimation method learningjoint to measure accuracy. Here we calculate the RMSE and absolute error (ABSE) between the locations of the estimated joints and the ground truth as metrics. As expected, Table 6 shows that all indicators improve after deblurring. Visualized results are given in Fig. 10 (b) and Fig. 11. In Fig. 11, the skeleton represents the ground truth, and the mesh represents the estimated results. It is not difficult to see that the skeleton and mesh overlap better for the deblurred frame. This indicates that our work can be used as an input preprocessing step that brings gains to downstream machine tasks.
In Table 6, the improvement of values after deblurring is also reported.
| Metric | EDVR | PROMOTION | Increase |
|---|---|---|---|
| FLOPs (GMac) | 194.78 | 195.02 | +0.24 (0.12%) |
| Params (M) | 23.60 | 23.72 | +0.12 (0.51%) |

FLOPs are computed for the tested input image size.
Ab1: w/o heterogeneous prior information, Ab1-[i/s/m]: w/o only [contrast/structure/motion] prior information, Ab2: w/o blur reasoning vector, Ab3: w/o channel attention, Ab4: w/o optical flow based attention in the Charbonnier loss, Ab5: w/o perceptual-level loss.
4.3 Ablation studies
Instead of using the linear CRF image subset, we choose the gamma-corrected images of the GoPro dataset as the evaluation data for the ablation study, to further demonstrate the model’s robustness. We test models with each module removed separately to prove its effectiveness. According to Table 7, we find that the heterogeneous prior information and channel attention play the most significant roles. This indirectly shows that the conventional model does not sufficiently mine the data information, because neither the model structure nor the loss function particularly considers challenging blurs; by explicitly emphasizing this information in the input, we achieve promising results. It should be explained that the effect of the blur reasoning vector seems diminished due to the flat temporal blur variation. In addition, we visualize the deblurring results of each ablation setting in Fig. 12. It can be seen that the use of prior information significantly enhances the subjective visual quality.
Effectiveness of handling challenging blurs. Corresponding to our motivation, we next visualize the performance on challenging blurs. As the blue block in Fig. 13 shows, our method recovers the low-contrast tire better than EDVR. In the middle green block, our method exhibits better structural fidelity of the photo frames, which indirectly reflects the effective constraint of the gradient prior. For close-shot areas with severe blur, as presented in the orange block, our method recovers more details thanks to the consolidation of the motion prior.
Complexity and params of the model. Although we have proposed a series of effective, plug-and-play designs for deblurring and achieved state-of-the-art performance, as presented in Table 6, the increase in FLOPs and params is almost negligible.
5 Conclusion
To better handle challenging blurs, we introduce prior information into video deblurring for the first time, and propose a PRiOr-enlightened and MOTION-robust video deblurring (PROMOTION) model. Specifically, 3D group convolutions are used to encode heterogeneous priors, including the contrast, structure, and motion information of scenes, which are shown to be related to deblurring. Then, a blur reasoning vector and an optical-flow-based attention prior are used to increase the model’s ability to recognize spatio-temporally non-uniform blur. Experimental results on two globally blurred datasets show that our method achieves state-of-the-art performance. In addition, for downstream applications suffering from local blur, such as hand pose estimation, we demonstrate that our method can bring performance gains to the task by preprocessing the source data.
We believe our work can shed light not only on video deblurring, in respect of utilizing the inherent information of scenes, but also on how to measure video deblurring, in respect of borrowing downstream tasks. In the future, the prior encoding branch, as a portable module, can be integrated into more existing video deblurring methods and other enhancement tasks to achieve more visually pleasing restoration. More downstream tasks can also be considered to indirectly measure deblurring quality, thereby benefiting more machine-based applications.