Event Enhanced High-Quality Image Recovery

07/16/2020 · Bishan Wang et al. · Wuhan University

With extremely high temporal resolution, event cameras have great potential for robotics and computer vision. However, their asynchronous imaging mechanism often aggravates the measurement sensitivity to noise and makes it physically difficult to increase the image spatial resolution. To recover high-quality intensity images, one should address both denoising and super-resolution problems for event cameras. Since events depict brightness changes, the clear and sharp high-resolution latent images can be recovered from the noisy, blurry and low-resolution intensity observations by using the events to enhance the degeneration model. Exploiting the framework of sparse learning, the events and the low-resolution intensity observations can be considered jointly. Based on this, we propose an explainable network, the event-enhanced sparse learning network (eSL-Net), to recover high-quality images from event cameras. After training on a synthetic dataset, the proposed eSL-Net improves the performance of the state-of-the-art by 7-12 dB. Furthermore, without an additional training process, the proposed eSL-Net can be easily extended to generate continuous frames with a frame-rate as high as that of the events.


1 Introduction


Unlike standard frame-based cameras, the event camera is a bio-inspired sensor that produces asynchronous “events” with very low latency (about 1 µs), leading to extremely high temporal resolution [17, 18, 27, 4, 31]. Naturally, it is immune to motion blur and holds great promise for low- and high-level vision tasks [25, 1, 33]. However, the generated event streams can only depict scene changes instead of absolute intensity measurements. Meanwhile, the asynchronous data-driven mechanism also prohibits directly applying existing algorithms designed for standard cameras to event cameras. Thus, high-quality intensity image reconstruction from event streams is essential for visualization and offers great potential to bridge the event camera to many high-level vision tasks that have been solved with standard cameras [15, 16, 19, 29, 13].

In order to achieve the low-latency property, event cameras capture brightness changes of each pixel independently [4, 12]. This mechanism aggravates the measurement sensitivity to noise and makes it physically difficult to increase the image spatial resolution. Thus, recovering high-quality images from event cameras is a very challenging problem, where the following issues should be addressed simultaneously.

  • Low frame-rate and blurry intensity images: The APS (Active Pixel Sensor) frames come at a relatively low frame-rate and with high latency, and motion blur is inevitable when recording highly dynamic scenes.

  • High-level and mixed noise: Thermal effects or unstable lighting can produce a huge number of noisy events. Together with the noise in APS frames, intensity image reconstruction becomes a mixed-noise problem.

  • Low spatial resolution: Leading commercial event cameras typically have very low spatial resolution, and there is a trade-off between spatial resolution and latency.

To address the problem of noise when recovering images from event cameras, various methods have been proposed. Barua et al. [3] first proposed a learning-based approach that smooths the image gradient by imposing a sparsity regularization, and then exploited Poisson integration to recover the intensity image from the denoised gradient. Instead of sparsity, Munda et al. [19] introduced a manifold regularization imposed on the event time surface and proposed a real-time intensity reconstruction algorithm. With these hand-crafted regularizations, the noise can be largely alleviated; however, some artifacts (e.g. blurry edges) are produced at the same time. Recent works turn to convolutional neural networks (CNNs) for event-based intensity reconstruction, where the network is trained end-to-end with paired events and intensity images [29, 35, 13, 5]. Implicitly, convolutional kernels with trained parameters are commonly able to reduce noise. However, such hand-designed networks often lack physical meaning and are thus difficult to adapt to both events and APS frames [5].

Besides the noise issue, a super-resolution algorithm is also urgently needed to further improve intensity reconstructions for high-level vision tasks, e.g. face recognition, but little progress has been made along this line. Even though one can apply existing super-resolution algorithms to the reconstructed low-resolution intensity frames, a unified approach would be more desirable.

To the best of our knowledge, few studies are able to simultaneously resolve all three tasks above, leaving an open problem: is it possible to find a unified framework that considers denoising, deblurring and super-resolution simultaneously? To answer this question, we propose to employ a powerful tool, sparse learning, to address the three tasks. The general degeneration model for blurry, noisy and low-resolution images often assumes the whole image shares the same blurring kernel. However, events record intensity changes at very high temporal resolution, which can effectively enhance the degeneration model to represent the motion blur effect. The enhanced degeneration model provides a way to recover HR sharp and clear latent images from APS frames and their event sequences. We solve the model by casting it into the framework of sparse learning, which also gives it a natural ability to resist noise. In this paper, we propose eSL-Net to recover high-quality images for event cameras. Specifically, eSL-Net trained on our synthetic dataset generalizes to real scenes, and without an additional training process it can be easily extended to generate high frame-rate videos by transforming the event sequence. Experimental results show that the proposed eSL-Net clearly improves high-quality intensity reconstruction.

Overall, our contributions are summarized as follows:

  • We propose an event-enhanced degeneration model for high-quality image recovery with event cameras. Based on this model, and exploiting the framework of sparse learning, we propose an explainable network, the event-enhanced sparse learning network (eSL-Net), to recover high-quality images from event cameras.

  • Without any retraining, we propose a simple method to extend the eSL-Net to high frame-rate and high-quality video recovery.

  • We build a synthetic dataset for event cameras that connects events, LR blurry images and HR sharp clear images.

Dataset, code, and more results are available at: https://github.com/ShinyWang33/eSL-Net.

2 Related Works

Event-based Intensity Reconstruction: Early attempts to reconstruct intensity from pure events are commonly based on the assumption of brightness constancy, i.e. static scenes [15]. The intensity reconstruction is then addressed by simultaneously estimating the camera movement, optical flow and intensity gradient [16]. In [6], Cook et al. propose a bio-inspired and interconnected network to simultaneously reconstruct intensity frames, optical flow and angular velocity for small rotational movements. Later on, Bardow et al. [2] formulate intensity change and optical flow in a unified variational energy minimization framework; by optimization, one can simultaneously reconstruct the video frames together with the optical flow. On the other hand, another research line on intensity reconstruction is the direct event integration method [30, 19, 25], which does not rely on any assumption about the scene structure or motion dynamics.

While the APS frames contain relatively abundant textures, events and APS frames can be used as complementary sources for event-based intensity reconstruction. In [30], events are approximated as the time differential of intensity frames; based on this, a complementary filter is proposed as a fusion engine, and nearly continuous-time intensity frames can be generated. Pan et al. [25] proposed an event-based deblurring approach relating blurry APS frames and events with an event-based double integral (EDI) model. Afterwards, a multi-frame EDI model was proposed for high-rate video reconstruction by further considering frame-to-frame relations [24].

Event-based Super-resolution: Even though event cameras have extremely high temporal frequency, their spatial (pixel) resolution is relatively low and not easy to improve physically [12]. Little progress has been made on event-based super-resolution. To the best of our knowledge, only one very recent work [5], called SRNet, was released while we were preparing this manuscript. Compared to SRNet, our proposed approach differs in the following aspects: (1) we propose a unified framework to simultaneously resolve denoising, deblurring and super-resolution, while SRNet [5] cannot directly deal with blurry or noisy inputs; (2) the proposed network is completely interpretable with meaningful intermediate processes; (3) our framework reconstructs the intensity frame by fusing events and APS frames, while SRNet is proposed for reconstruction from pure events.

3 Problem Statement

3.1 Events and Intensity Images

An event camera triggers an event whenever the logarithm of the intensity changes by more than a preset threshold $c$:

$$\log\big(I_{x,y}(t)\big) - \log\big(I_{x,y}(t - \Delta t)\big) = p \cdot c, \qquad (1)$$

where $I_{x,y}(t)$ and $I_{x,y}(t - \Delta t)$ denote the instantaneous intensities at times $t$ and $t - \Delta t$ for a specific pixel location $(x, y)$, $\Delta t$ is the time since the last event at this pixel location, and $p \in \{+1, -1\}$ is the polarity representing the direction (increase or decrease) of the intensity change. Consequently, an event is made up of $(x, y, t, p)$.

In order to facilitate the expression of events, for every location $(x, y)$ in the image, we define $e_{x,y}(t)$ as a function of continuous time $t$ such that:

$$e_{x,y}(t) = p\,\delta(t - t_0), \qquad (2)$$

whenever there is an event $(x, y, t_0, p)$. Here, $\delta(\cdot)$ is the Dirac delta function [8]. As a result, a sequence of discrete events is turned into a continuous-time signal.

In addition to the event sequence, many event cameras, e.g., DAVIS [4], can simultaneously provide grey-scale intensity images at a slower frame-rate. Mathematically, the $i$-th frame of the observed intensity image $\mathbf{Y}[i]$, captured during the exposure interval $[f_i - T/2,\, f_i + T/2]$, can be modeled as the average of the sharp clear latent intensity images [25]:

$$\mathbf{Y}[i] = \frac{1}{T}\int_{f_i - T/2}^{f_i + T/2} \mathbf{I}(t)\,dt. \qquad (3)$$

Suppose that $I_{x,y}(f)$ is the sharp clear latent intensity at an arbitrary time $f \in [f_i - T/2,\, f_i + T/2]$. According to (1) and (2), $I_{x,y}(t) = I_{x,y}(f)\exp\!\big(c\int_f^t e_{x,y}(s)\,ds\big)$, then

$$Y_{x,y}[i] = \frac{I_{x,y}(f)}{T}\int_{f_i - T/2}^{f_i + T/2} \exp\!\Big(c\int_f^t e_{x,y}(s)\,ds\Big)\,dt. \qquad (4)$$

Since each pixel can be treated separately, the subscripts $x$, $y$ are often omitted henceforth. Finally, considering all pixels, we get a simple model connecting the events, the observed intensity image and the latent intensity image:

$$\mathbf{Y}[i] = \mathbf{E}(f) \odot \mathbf{I}(f), \qquad (5)$$

with $\mathbf{E}(f) = \frac{1}{T}\int_{f_i - T/2}^{f_i + T/2} \exp\!\big(c\int_f^t e(s)\,ds\big)\,dt$ being the double integral of the events at time $f$ [25] and $\odot$ denoting the Hadamard product.
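For intuition, the following is a minimal NumPy sketch that numerically approximates this double integral for a single pixel under the impulse representation of (2); the sampling density and the (times, polarities) event layout are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def double_integral_of_events(event_times, event_pols, f, t_lo, t_hi, c, n_samples=1000):
    """Numerically approximate, for one pixel, the double integral in (5):
    E(f) = (1/T) * int_{t_lo}^{t_hi} exp( c * int_f^t e(s) ds ) dt.
    With e(s) a train of Dirac impulses (2), the inner integral reduces to a
    polarity-weighted count of the events lying between f and t (with sign)."""
    event_times = np.asarray(event_times, dtype=np.float64)
    event_pols = np.asarray(event_pols, dtype=np.float64)
    ts = np.linspace(t_lo, t_hi, n_samples)
    vals = np.empty(n_samples)
    for k, t in enumerate(ts):
        if t >= f:
            inner = event_pols[(event_times > f) & (event_times <= t)].sum()
        else:  # integrating backwards in time flips the sign
            inner = -event_pols[(event_times > t) & (event_times <= f)].sum()
        vals[k] = np.exp(c * inner)
    return vals.mean()  # uniform samples: the mean approximates (1/T) * integral
```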

3.2 Event Enhanced Degeneration Model

Practically, the non-ideality of sensors and the relative motion between the camera and the target scene may largely degrade the quality of the observed intensity image, making it noisy and blurry. Moreover, even though event cameras have extremely high temporal resolution, their spatial pixel resolution is relatively low due to physical limitations. With these considerations, (5) becomes:

$$\mathbf{Y}[i] = \mathbf{E}(f) \odot \big(\mathbf{X}(f)\!\downarrow_s\big) + \mathbf{n}, \qquad (6)$$

with the measurement noise $\mathbf{n}$, which can be assumed to be white Gaussian, the downsampling operator $\downarrow_s$ and the latent clear high-resolution (HR) image $\mathbf{X}(f)$ at time $f$. Consequently, (6) is the degeneration model where events are exploited to introduce the motion information.

Given the observed image $\mathbf{Y}[i]$, the corresponding triggered events and a specified time $f$, our goal is to reconstruct a high-quality intensity image $\mathbf{X}(f)$ at time $f$. Obviously, this is a multi-task and ill-posed problem where denoising, deblurring and super-resolution should be addressed simultaneously.
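As a sanity check of this model, a minimal NumPy sketch of the forward degeneration is given below; block averaging stands in for the generic downsampling operator $\downarrow_s$, and all names are placeholders rather than the authors' code.

```python
import numpy as np

def degrade(x_hr, e_map_lr, s, noise_std, rng=None):
    """Synthesize an observation following the degeneration model (6):
    downsample the HR latent image by factor s (plain s x s block averaging here),
    multiply pixel-wise by the event-derived map E(f), and add white Gaussian noise.
    Assumes the HR height and width are divisible by s."""
    rng = np.random.default_rng() if rng is None else rng
    H, W = x_hr.shape
    x_lr = x_hr.reshape(H // s, s, W // s, s).mean(axis=(1, 3))  # downsampling operator
    return e_map_lr * x_lr + rng.normal(0.0, noise_std, x_lr.shape)
```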

In the following, we first address the problem of reconstructing a single high-quality intensity image from events and a degraded LR blurry image. The method for extending it to high frame-rate video generation is then presented in Section 5.

4 Event Enhanced High-Quality Image Recovery

4.1 Event-Enhanced Sparse Learning

Many methods have been proposed for image denoising, deblurring and SR [37, 11, 23, 9]. However, most of them cannot be applied to event cameras directly due to the asynchronous data-driven mechanism. Thanks to sparse learning, we can integrate the events into a sparsity framework and reconstruct satisfactory images, thereby addressing the aforementioned problems.

In this section, the time $f$ and the frame index $i$ are temporarily omitted for simplicity. We arrange the image matrices as column vectors, i.e., $\mathbf{y}$, $\mathbf{x}$ and $\mathbf{n}$, so that the blurring operator can be represented as a diagonal matrix $\mathbf{A}$ whose entries are the elements of $\mathbf{E}(f)$, and the downsampling operator as a matrix $\mathbf{D} \in \mathbb{R}^{N \times s^2 N}$, where $s$ denotes the downsampling scale factor and $N$ denotes the product of the height and width of the observed image $\mathbf{y}$. Then, according to (6), we have:

$$\mathbf{y} = \mathbf{A}\mathbf{D}\mathbf{x} + \mathbf{n}. \qquad (7)$$

The reconstruction from the observed image $\mathbf{y}$ to the HR sharp clear image $\mathbf{x}$ is highly ill-posed because of the inevitable loss of information in the image degeneration process. Inspired by the success of Compressed Sensing [10], we assume that the LR sharp clear image $\mathbf{x}_l = \mathbf{D}\mathbf{x}$ and the HR sharp clear image $\mathbf{x}$ can be sparsely represented on an LR dictionary $\mathbf{W}_l$ and an HR dictionary $\mathbf{W}_h$, i.e., $\mathbf{x}_l = \mathbf{W}_l\boldsymbol{\alpha}_l$ and $\mathbf{x} = \mathbf{W}_h\boldsymbol{\alpha}_h$, where $\boldsymbol{\alpha}_l$ and $\boldsymbol{\alpha}_h$ are known as sparse codes. Since the downsampling operator is linear, the LR and HR sharp clear images can share the same sparse code, i.e. $\boldsymbol{\alpha}_l = \boldsymbol{\alpha}_h = \boldsymbol{\alpha}$, if the dictionaries $\mathbf{W}_l$ and $\mathbf{W}_h$ are defined properly. Therefore, given an observed image $\mathbf{y}$, we first need to find its sparse code $\boldsymbol{\alpha}$ on $\mathbf{W}_l$ by solving the LASSO [32] problem below:

$$\min_{\boldsymbol{\alpha}}\; \frac{1}{2}\big\|\mathbf{y} - \mathbf{A}\mathbf{W}_l\boldsymbol{\alpha}\big\|_2^2 + \lambda\|\boldsymbol{\alpha}\|_1, \qquad (8)$$

where $\|\cdot\|_1$ denotes the $\ell_1$-norm and $\lambda$ is a regularization coefficient.

To solve (8), a common approach is the iterative shrinkage thresholding algorithm (ISTA) [7]. At the $k$-th iteration, the sparse code is updated as:

$$\boldsymbol{\alpha}^{(k+1)} = h_{\lambda/L}\Big(\boldsymbol{\alpha}^{(k)} + \tfrac{1}{L}(\mathbf{A}\mathbf{W}_l)^{\top}\big(\mathbf{y} - \mathbf{A}\mathbf{W}_l\boldsymbol{\alpha}^{(k)}\big)\Big), \qquad (9)$$

where $L$ is the Lipschitz constant and $h_{\lambda/L}(\cdot)$ denotes the element-wise soft-thresholding function. After obtaining the optimal sparse code $\boldsymbol{\alpha}^{*}$, we can finally recover the HR sharp clear image by:

$$\hat{\mathbf{x}} = \mathbf{W}_h\boldsymbol{\alpha}^{*}, \qquad (10)$$

where $\mathbf{W}_h$ is the HR dictionary.
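For concreteness, a minimal NumPy sketch of the ISTA iteration (9) and the recovery step (10) follows; the operator $\mathbf{A}$, the dictionaries $\mathbf{W}_l$, $\mathbf{W}_h$ and all dimensions are toy placeholders, not the learned quantities used in eSL-Net.

```python
import numpy as np

def soft_threshold(v, theta):
    """Element-wise soft-thresholding h_theta(v) = sign(v) * max(|v| - theta, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista_recover(y, A, W_l, W_h, lam=0.1, n_iters=100):
    """Solve the LASSO problem (8) with ISTA (9) and recover the HR image (10).
    y: observed LR blurry image (vectorized), A: event-based blurring operator,
    W_l / W_h: LR / HR dictionaries sharing the same sparse code."""
    M = A @ W_l                                   # effective sensing matrix
    L = np.linalg.norm(M, 2) ** 2                 # Lipschitz constant of the gradient
    alpha = np.zeros(W_l.shape[1])
    for _ in range(n_iters):
        residual = y - M @ alpha
        alpha = soft_threshold(alpha + (M.T @ residual) / L, lam / L)  # Eq. (9)
    return W_h @ alpha                            # Eq. (10)
```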

4.2 Network

Inspired by [14], we can solve the sparse coding problem efficiently by integrating it into a CNN architecture. Therefore, we propose the event-enhanced sparse learning network (eSL-Net) to address noise, motion blur and low spatial resolution in a unified framework.

The basic idea of eSL-Net is to map the update steps of event-based intensity reconstruction method to a deep network architecture that consists of a fixed number of phases, each of which corresponds to one iteration of (9). Therefore eSL-Net is an interpretable deep network.

Figure 2: eSL-Net Framework

The whole eSL-Net architecture is shown in Fig. 2. Obviously, the most important part of the network is the iteration module in the green box, which corresponds to (9). According to [26], ISTA is not affected when the sparse code in (9) is restricted to be nonnegative. It is then easy to see the equivalence between the nonnegative soft-thresholding operator and the ReLU activation function, so we use a ReLU layer to implement $h_{\lambda/L}(\cdot)$. Convolution is a special kind of matrix multiplication, therefore we use convolution layers to implement the multiplications by $\mathbf{A}\mathbf{W}_l$ and $(\mathbf{A}\mathbf{W}_l)^{\top}$. The plus node with three inputs in the green box then represents the sum $\boldsymbol{\alpha}^{(k)} + \tfrac{1}{L}(\mathbf{A}\mathbf{W}_l)^{\top}\mathbf{y} - \tfrac{1}{L}(\mathbf{A}\mathbf{W}_l)^{\top}\mathbf{A}\mathbf{W}_l\boldsymbol{\alpha}^{(k)}$ in (9).

According to (5), $\mathbf{E}(f)$ is the double integral of the events. In the discrete case, the continuous integral turns into a discrete summation. More generally, we use a weighted summation, i.e., convolution, to replace the integral. As a result, through two convolution layers with suitable parameters, the event sequence input can be transformed into an approximation of $\mathbf{E}(f)$. Moreover, convolution has some denoising effect on the event sequences.

Finally, the output of the iterative module, the optimal sparse code $\boldsymbol{\alpha}^{*}$, is passed through an HR dictionary according to (10). In eSL-Net, we use convolution layers followed by a shuffle layer to implement the HR dictionary $\mathbf{W}_h$, since the shuffle operator, which rearranges the pixels of different channels, can be regarded as a linear operator.
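To make the mapping concrete, here is a minimal PyTorch sketch of a single unfolded iteration phase of the kind described above; the channel counts, kernel sizes and module names are illustrative assumptions, and the $1/L$ scaling is absorbed into the learned weights, so this is not the released eSL-Net code.

```python
import torch
import torch.nn as nn

class IterationPhase(nn.Module):
    """One unfolded ISTA step (Eq. 9): convolutions play the role of A*W_l and of
    its (scaled) transpose, and ReLU plays the role of the nonnegative
    soft-thresholding operator."""
    def __init__(self, code_ch=64, img_ch=1):
        super().__init__()
        self.W = nn.Conv2d(code_ch, img_ch, 3, padding=1)    # ~ A W_l: code -> image domain
        self.Wt = nn.Conv2d(img_ch, code_ch, 3, padding=1)    # ~ (A W_l)^T / L: image -> code domain

    def forward(self, alpha, y):
        residual = y - self.W(alpha)                  # y - A W_l alpha^(k)
        return torch.relu(alpha + self.Wt(residual))  # ReLU ~ nonnegative soft-thresholding
```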

Figure 3: (a) The method of transforming the event sequence into input event frames when eSL-Net is trained. The upper part is the positive event sequence and the lower part is the negative event sequence. (b) The method of transforming the event sequence into input event frames when eSL-Net outputs the latent frame at an arbitrary time $f$.

4.3 Network Training

Because the network expects image-like inputs, we divide the time duration of the event sequence into $N$ equal portions, and then grayscale frames $\mathbf{P}_j$ and $\mathbf{N}_j$, $j = 1, \ldots, N$, are formed by respectively merging the positive and negative events in each time interval, as shown in Fig. 3 (a). $\mathbf{P}_j(x, y)$ is the number of positive events at $(x, y)$ and $\mathbf{N}_j(x, y)$ is the number of negative events at $(x, y)$. The input tensor, obtained by concatenating $\mathbf{Y}[i]$, $\mathbf{P}_1$, $\mathbf{N}_1$, $\mathbf{P}_2$, $\mathbf{N}_2$, …, $\mathbf{P}_N$, $\mathbf{N}_N$ and of size $(2N + 1) \times H \times W$, is passed through eSL-Net, and the output is of size $1 \times sH \times sW$ ($s$ is the upscale factor of the image).

As shown in Fig. 2, the eSL-Net is then fed with a pair of inputs: the $i$-th frame of the observed LR blurry and noisy intensity image $\mathbf{Y}[i]$ and its corresponding event sequence triggered during the exposure interval $[f_i - T/2,\, f_i + T/2]$. With such inputs, the output is the HR sharp and clear intensity image at the reference time $f$ of the exposure, as shown in Fig. 3 (a).
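As a reference for this input packing, here is a minimal NumPy sketch that bins an event list into positive/negative count frames and stacks them with the APS frame; the number of portions and the (x, y, t, p) event layout are assumptions used for illustration.

```python
import numpy as np

def events_to_input_tensor(aps_frame, events, t_start, t_end, n_bins):
    """Pack an APS frame and its event sequence into a (2*n_bins + 1, H, W) tensor.
    `events` is an iterable of rows (x, y, t, p) with p in {+1, -1} and t in [t_start, t_end]."""
    H, W = aps_frame.shape
    pos = np.zeros((n_bins, H, W), dtype=np.float32)
    neg = np.zeros((n_bins, H, W), dtype=np.float32)
    bin_len = (t_end - t_start) / n_bins
    for x, y, t, p in events:
        j = min(int((t - t_start) / bin_len), n_bins - 1)   # time bin index
        if p > 0:
            pos[j, int(y), int(x)] += 1                      # count positive events
        else:
            neg[j, int(y), int(x)] += 1                      # count negative events
    # interleave P_1, N_1, ..., P_N, N_N behind the intensity frame
    event_frames = np.empty((2 * n_bins, H, W), dtype=np.float32)
    event_frames[0::2] = pos
    event_frames[1::2] = neg
    return np.concatenate([aps_frame[None].astype(np.float32), event_frames], axis=0)
```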

Loss: we use a pixel-wise reconstruction loss that is common in many image reconstruction methods. By minimizing this loss, our network effectively learns to make the output closer to the desired image, and it also makes the training process more stable.

5 High Frame-Rate Video Generation

With the $i$-th observed LR image frame $\mathbf{Y}[i]$ and the corresponding event sequence triggered during its exposure, the latent intensity image can be reconstructed by the trained eSL-Net, as shown in Fig. 3 (a). Thus, the eSL-Net is trained to reconstruct the latent image at a fixed reference time of the exposure.

In order to reconstruct the latent intensity image at an arbitrary time $f$, one should obtain the learned double integral of the events at time $f$. Considering (4), the definition of the double integral implies that events are fed into the network with their polarity and order unchanged when their timestamps are larger than $f$, while the polarity and the order of the input events with timestamps less than $f$ should be reversed. It is worth noting that this reversal has a special physical meaning for event cameras, where the polarity and the order of the produced events respectively represent the direction and the relative time of the brightness change. So, when reconstructing the latent intensity image at $f$ with the trained network, we need to re-organize the input event sequence to match the pattern of polarity and order seen in the training phase.

Consequently, instead of retraining the network, we propose a very simple preprocessing step on the input event sequence to reconstruct the latent intensity image at any time $f$. As shown in Fig. 3 (b), the preprocessing step only reverses the polarities and the temporal order of the events with timestamps less than $f$. The resulting new event sequence is then merged into frames as the input of eSL-Net. Theoretically, we can generate a video with a frame-rate as high as the DVS's (Dynamic Vision Sensor) eps (events per second).
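A minimal sketch of this preprocessing, under the same (x, y, t, p) event layout assumed earlier, is shown below; the reordered sequence would then be binned into frames exactly as in training. It illustrates the stated reversal rule and is not the authors' released code.

```python
import numpy as np

def reorder_events_for_time(events, f):
    """Prepare an event sequence for reconstructing the latent image at time f:
    events with timestamp < f get their polarity flipped and their temporal
    order reversed; events with timestamp >= f are kept unchanged."""
    events = np.asarray(events, dtype=np.float64)   # rows: (x, y, t, p)
    before = events[events[:, 2] < f][::-1].copy()  # reverse the temporal order
    before[:, 3] *= -1                              # flip the polarity
    after = events[events[:, 2] >= f]
    return np.concatenate([before, after], axis=0)
```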

6 Dataset Preparation

In order to train the proposed eSL-Net, a large number of LR blurry noisy images with corresponding HR ground-truth images and event sequences are required. However, no such large-scale dataset exists, which encourages us to synthesize a new dataset with LR blurry noisy images and the corresponding HR sharp clear images and events. Section 7 shows that, although trained on synthetic data, the eSL-Net generalizes to real-world scenes [25].

HR clear images: We choose continuous sharp clear images from the GoPro dataset [20] as our ground truth.

LR clear images: LR sharp clear images are obtained by downsampling the HR clear images with bicubic interpolation; they are used as ground truth for the eSL-Net without SR.

LR blurry images: The GoPro dataset [20] also provides LR blurry images, but we have to regenerate them because they ignore the exposure time. Mathematically, during the exposure, a motion-blurred image can be simulated by averaging a series of sharp images captured at a high frame rate [21]. However, when the frame rate is insufficient, as in the GoPro dataset [20], simple time averaging leads to unnatural spikes or steps in the blur trajectory [36]. To avoid this issue, we first increase the frame-rate of the LR sharp clear images with the interpolation method in [22], and then generate LR blurry images by averaging consecutive LR sharp clear frames. Besides, to better simulate the real situation, we add white Gaussian noise to the LR blurry images, with a standard deviation set to the approximate mean of the standard deviations of many smooth patches in APS frames from the real dataset.
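The blur synthesis step can be sketched as below; the frame count and the noise level are free parameters here, since the exact values are not restated in this section.

```python
import numpy as np

def synthesize_blurry_frame(sharp_frames, noise_std, rng=None):
    """Simulate a motion-blurred exposure by averaging consecutive sharp frames
    from a high frame-rate video and adding white Gaussian noise.
    sharp_frames: array of shape (K, H, W) with intensities in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    blurry = sharp_frames.mean(axis=0)                      # temporal average over the exposure
    noisy = blurry + rng.normal(0.0, noise_std, blurry.shape)
    return np.clip(noisy, 0.0, 1.0)
```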

Event sequence: To simulate events, we resort to the open-source simulator ESIM [28], which can generate events from a sequence of input images. For a given LR blurry image, we input the corresponding high frame-rate LR sharp clear images and obtain the corresponding event sequence. We then add noisy events with a uniform random distribution to the sequence, at a ratio approximating the proportion of noise events to effective events in simple real scenes.

The entire synthetic dataset contains four parts:

  • The HR clear image dataset consists of HR sharp clear frames with various contents, locations, and natural and handmade objects; each video contributes 95 images. It is used as ground truth for training and testing of the network.

  • The LR clear image dataset consists of the corresponding LR sharp clear frames. It is used as ground truth for training and testing of the network without SR.

  • The LR blurry image dataset consists of the corresponding LR blurry noisy frames, which simulate the APS frames of an event camera with motion blur and noise.

  • The event sequence dataset consists of the event sequences corresponding to the LR blurry frames. The LR blurry image dataset and the event sequence dataset are used as inputs to the eSL-Net.

Following the partitions of the GoPro dataset [20], the images and event sequences synthesized from the training videos are used for training, and the rest, from the testing videos, are used for testing.

7 Experiments

We train our proposed eSL-Net on the synthetic training dataset using NVIDIA Titan-RTX GPUs, and compare it with state-of-the-art event-based intensity reconstruction methods, including EDI [25], the complementary filter method (CF) [30] and the manifold regularization method (MR) [19]. All methods are evaluated on the synthetic testing dataset and on real scenes [25] to verify their intensity reconstruction capability. For metrics, we use PSNR and SSIM [34] for quantitative comparison, and the visual quality of the reconstructed images for qualitative comparison. Finally, we test the ability of our method to reconstruct high frame-rate video frames as described in Section 5.
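A minimal sketch of the quantitative evaluation using scikit-image follows; note that it uses single-scale SSIM as a stand-in, whereas [34] describes the multiscale variant, and it assumes images are float arrays in [0, 1].

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reconstruction, ground_truth):
    """Compute the PSNR/SSIM pair used for the quantitative comparison."""
    psnr = peak_signal_noise_ratio(ground_truth, reconstruction, data_range=1.0)
    ssim = structural_similarity(ground_truth, reconstruction, data_range=1.0)
    return psnr, ssim
```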

7.1 Intensity Reconstruction Experiments

Note that EDI, CF and MR cannot super-resolve the intensity images. Thus, for a fair comparison, we first replace the HR dictionary with an LR dictionary (deleting the shuffle layers) in eSL-Net to demonstrate the basic ability of intensity reconstruction. Besides, to demonstrate the ability to solve the three problems of denoising, deblurring and super-resolution simultaneously, the eSL-Net is compared with EDI, CF and MR, each combined with an excellent SR network, RCAN [38].

Methods EDI CF MR eSL-Net
PSNR(dB) 22.42 19.75 13.76 30.23
SSIM 0.6228 0.4131 0.4960 0.8703
Table 1: Quantitative comparison of our outputs without SR to EDI, CF and MR on the synthetic testing dataset.
Methods EDI+RCAN ×4 CF+RCAN ×4 MR+RCAN ×4 eSL-Net ×4
PSNR(dB) 12.88 12.89 12.89 25.41
SSIM 0.4647 0.4638 0.4643 0.6727
Table 2: Quantitative comparison of our outputs to EDI, CF and MR with SR method on the synthetic testing dataset.
Figure 4: Qualitative comparison of our outputs without SR to EDI, CF and MR on the synthetic testing dataset.
Figure 5: Qualitative comparison of our outputs without SR to EDI, CF and MR on the real dataset [25].

For the results without SR, the quantitative comparison is tabulated in Table 1. One can see that our method outperforms the others by a large margin, especially CF and MR. Note that since the results of CF and MR are poor in the beginning frames of the videos, we only use the middle reconstruction results of each video when calculating PSNR and SSIM. The qualitative results are shown in Fig. 4 and Fig. 5. Our method consistently achieves the best visual performance in terms of effective denoising and deblurring.

Figure 6: Qualitative comparison of our outputs to EDI, CF and MR with SR method on the synthetic testing dataset.
Figure 7: Qualitative comparison of our outputs to EDI, CF and MR with SR method on the real dataset [25].

For the results with SR, we show the quantitative results in Table 2. Compared with Table 1, all methods become worse when solving the three problems simultaneously, because the increased resolution aggravates the adverse effects of noise and blur. Even so, our method is still able to maintain good performance, which means that noise, motion blur and low resolution are handled well simultaneously by our eSL-Net. The visual comparisons on the synthetic and real datasets verify this inference, as shown in Fig. 6 and Fig. 7. Compared with the other methods cascaded with a complex SR network, our method performs much better on edge recovery and noise removal after super-resolution, which proves its superiority.

7.2 High Frame-Rate Video Experiments

Theoretically, we can generate a video with a frame-rate as high as the DVS's eps. However, when the frame-rate is too high, there are only a few events in the interval between two consecutive frames, which results in a subtle difference between the two reconstructed frames that may even be hard to distinguish. Therefore, to balance the frame-rate and the visual effect, in our experiments our method generates continuous frames with a frame-rate several times higher than that of the original APS frames. To show the changes among frames more clearly, 11 frames are taken at equal intervals from the continuous frames recovered by eSL-Net from one image and its event sequence, as shown in Fig. 8 (e), which demonstrates the effectiveness of our method for HR and high frame-rate video reconstruction. Without retraining, our method is able to generate continuous frames while preserving realistic and rich details.

Fig. 8 shows qualitative comparisons between our method and EDI+RCAN for high frame-rate video reconstruction on the real dataset. Obviously, our method is superior in edge recovery and noise removal.

Figure 8: An example of the reconstructed result on the real dataset [25]. (a) The LR blurry and noisy input image; on the right is a zoom-in of the marked rectangle. (b) The frame reconstructed by EDI with SR of scale 4. (c) The frame reconstructed by eSL-Net with SR of scale 4. (d)-(e) show 11 selected frames of the video reconstructed from (a) and an event sequence by EDI+RCAN and eSL-Net, respectively.

8 Conclusion

In this paper, we proposed a novel network named eSL-Net for high-quality image reconstruction from event cameras. Enhanced by events, the degeneration model of event cameras can easily cope with motion blur, noise and low spatial resolution. In particular, exploiting the sparse learning framework, we proposed an explainable network, i.e. eSL-Net, obtained by unfolding the iterative soft-thresholding algorithm. Besides, the eSL-Net can be easily extended to generate high frame-rate videos by a simple transform of the event stream. Experiments on synthetic and real-world data demonstrate the effectiveness and superiority of our eSL-Net.

Acknowledgements

The research was partially supported by the National Natural Science Foundation of China under Grant 61871297, and partially supported by the Fundamental Research Funds for the Central Universities.

References

  • [1] M. M. Almatrafi and K. Hirakawa (2020) DAViS camera optical flow. IEEE Transactions on Computational Imaging 6, pp. 396–407. External Links: ISSN 2573-0436 Cited by: §1.
  • [2] P. Bardow, A. J. Davison, and S. Leutenegger (2016) Simultaneous optical flow and intensity estimation from an event camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 884–892. Cited by: §2.
  • [3] S. Barua, Y. Miyatani, and A. Veeraraghavan (2016) Direct face detection and video reconstruction from event cameras. In 2016 IEEE winter conference on applications of computer vision (WACV), pp. 1–9. Cited by: §1.
  • [4] C. Brandli, R. Berner, M. Yang, S. Liu, and T. Delbruck (2014) A 240×180 130 dB 3 µs latency global shutter spatiotemporal vision sensor. IEEE Journal of Solid-State Circuits 49 (10), pp. 2333–2341. Cited by: §1, §1, §3.1.
  • [5] J. Choi, K. Yoon, et al. (2019) Learning to super resolve intensity images from events. arXiv preprint arXiv:1912.01196. Cited by: §1, §2.
  • [6] M. Cook, L. Gugelmann, F. Jug, C. Krautz, and A. Steger (2011) Interacting maps for fast visual interpretation. In The 2011 International Joint Conference on Neural Networks, pp. 770–776. Cited by: §2.
  • [7] I. Daubechies, M. Defrise, and C. De Mol (2004) An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics 57 (11), pp. 1413–1457. Cited by: §4.1.
  • [8] P. A. M. Dirac (1981) The principles of quantum mechanics. Oxford university press. Cited by: §3.1.
  • [9] C. Dong, C. C. Loy, K. He, and X. Tang (2015) Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (2), pp. 295–307. Cited by: §4.1.
  • [10] D. L. Donoho et al. (2006) Compressed sensing. IEEE Transactions on Information Theory 52 (4), pp. 1289–1306. Cited by: §4.1.
  • [11] M. Elad and M. Aharon (2006) Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image processing 15 (12), pp. 3736–3745. Cited by: §4.1.
  • [12] G. Gallego, T. Delbruck, G. Orchard, C. Bartolozzi, B. Taba, A. Censi, S. Leutenegger, A. Davison, J. Conradt, K. Daniilidis, et al. (2019) Event-based vision: a survey. arXiv preprint arXiv:1904.08405. Cited by: §1, §2.
  • [13] D. Gehrig, A. Loquercio, K. G. Derpanis, and D. Scaramuzza (2019) End-to-end learning of representations for asynchronous event-based data. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5633–5643. Cited by: §1, §1.
  • [14] K. Gregor and Y. LeCun (2010) Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on International Conference on Machine Learning, pp. 399–406. Cited by: §4.2.
  • [15] H. Kim, A. Handa, R. Benosman, S. Ieng, and A. Davison (2014) Simultaneous mosaicing and tracking with an event camera. In BMVC 2014-Proceedings of the British Machine Vision Conference, Cited by: §1, §2.
  • [16] H. Kim, S. Leutenegger, and A. J. Davison (2016) Real-time 3d reconstruction and 6-dof tracking with an event camera. In European Conference on Computer Vision, pp. 349–364. Cited by: §1, §2.
  • [17] P. Lichtsteiner, C. Posch, and T. Delbruck (2008) A 128×128 120 dB 15 µs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits 43 (2), pp. 566–576. Cited by: §1.
  • [18] S. Liu and T. Delbruck (2010) Neuromorphic sensory systems. Current opinion in neurobiology 20 (3), pp. 288–295. Cited by: §1.
  • [19] G. Munda, C. Reinbacher, and T. Pock (2018) Real-time intensity-image reconstruction for event cameras using manifold regularisation. International Journal of Computer Vision 126 (12), pp. 1381–1393. Cited by: Figure 1, §1, §1, §2, §7.
  • [20] S. Nah, S. Baik, S. Hong, G. Moon, S. Son, R. Timofte, and K. Mu Lee (2019) Ntire 2019 challenge on video deblurring and super-resolution: dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Cited by: §6, §6, §6.
  • [21] S. Nah, T. Hyun Kim, and K. Mu Lee (2017) Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3883–3891. Cited by: §6.
  • [22] S. Niklaus, L. Mai, and F. Liu (2017) Video frame interpolation via adaptive separable convolution. In Proceedings of the IEEE International Conference on Computer Vision, pp. 261–270. Cited by: §6.
  • [23] J. Pan, D. Sun, H. Pfister, and M. Yang (2016) Blind image deblurring using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1628–1636. Cited by: §4.1.
  • [24] L. Pan, R. Hartley, C. Scheerlinck, M. Liu, X. Yu, and Y. Dai (2019) High frame rate video reconstruction based on an event camera. arXiv preprint arXiv:1903.06531. Cited by: §2.
  • [25] L. Pan, C. Scheerlinck, X. Yu, R. Hartley, M. Liu, and Y. Dai (2019) Bringing a blurry frame alive at high frame-rate with an event camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6820–6829. Cited by: Figure 1, §1, §2, §2, §3.1, §6, Figure 5, Figure 7, Figure 8, §7.
  • [26] V. Papyan, Y. Romano, and M. Elad (2017) Convolutional neural networks analyzed via convolutional sparse coding. The Journal of Machine Learning Research 18 (1), pp. 2887–2938. Cited by: §4.2.
  • [27] C. Posch, D. Matolin, and R. Wohlgenannt (2010) A QVGA 143 dB dynamic range frame-free PWM image sensor with lossless pixel-level video compression and time-domain CDS. IEEE Journal of Solid-State Circuits 46 (1), pp. 259–275. Cited by: §1.
  • [28] H. Rebecq, D. Gehrig, and D. Scaramuzza (2018) ESIM: an open event camera simulator. In Conference on Robot Learning, pp. 969–982. Cited by: §6.
  • [29] H. Rebecq, R. Ranftl, V. Koltun, and D. Scaramuzza (2019) Events-to-video: bringing modern computer vision to event cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3857–3866. Cited by: §1, §1.
  • [30] C. Scheerlinck, N. Barnes, and R. Mahony (2018) Continuous-time intensity estimation using event cameras. In Asian Conference on Computer Vision, pp. 308–324. Cited by: Figure 1, §2, §2, §7.
  • [31] B. Son, Y. Suh, S. Kim, H. Jung, J. Kim, C. Shin, K. Park, K. Lee, J. Park, J. Woo, et al. (2017) A 640×480 dynamic vision sensor with a 9 µm pixel and 300 Meps address-event representation. In 2017 IEEE International Solid-State Circuits Conference (ISSCC), pp. 66–67. Cited by: §1.
  • [32] R. Tibshirani (1996) Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B-Methodological 58 (1), pp. 267–288. Cited by: §4.1.
  • [33] A. R. Vidal, H. Rebecq, T. Horstschaefer, and D. Scaramuzza (2018) Ultimate SLAM? Combining events, images, and IMU for robust visual SLAM in HDR and high-speed scenarios. IEEE Robotics and Automation Letters 3 (2), pp. 994–1001. Cited by: §1.
  • [34] Z. Wang, E. P. Simoncelli, and A. C. Bovik (2003) Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, Vol. 2, pp. 1398–1402. Cited by: §7.
  • [35] Z. W. Wang, W. Jiang, K. He, B. Shi, A. Katsaggelos, and O. Cossairt (2019) Event-driven video frame synthesis. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Cited by: §1.
  • [36] P. Wieschollek, M. Hirsch, B. Scholkopf, and H. Lensch (2017) Learning blind motion deblurring. In Proceedings of the IEEE International Conference on Computer Vision, pp. 231–240. Cited by: §6.
  • [37] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang (2017) Beyond a gaussian denoiser: residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing 26 (7), pp. 3142–3155. Cited by: §4.1.
  • [38] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu (2018) Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision, pp. 286–301. Cited by: Figure 1, §7.1.