I2V-GAN: Unpaired Infrared-to-Visible Video Translation

08/02/2021
by Shuang Li, et al.
Beijing Institute of Technology

Human vision is often adversely affected by complex environmental factors, especially in night vision scenarios. Thus, infrared cameras are often leveraged to enhance visual perception by detecting infrared radiation in the surrounding environment, but the resulting infrared videos are undesirable due to their lack of detailed semantic information. In such a case, an effective video-to-video translation method from the infrared domain to the visible light counterpart is strongly needed, which must overcome the intrinsically large gap between the infrared and visible fields. To address this challenging problem, we propose an infrared-to-visible (I2V) video translation method, I2V-GAN, to generate fine-grained and spatial-temporally consistent visible light videos given unpaired infrared videos. Technically, our model capitalizes on three types of constraints: 1) an adversarial constraint to generate synthetic frames that are similar to the real ones, 2) cyclic consistency with an introduced perceptual loss for effective content conversion as well as style preservation, and 3) similarity constraints across and within domains to enhance the content and motion consistency in both spatial and temporal spaces at a fine-grained level. Furthermore, the currently available public infrared and visible light datasets are mainly used for object detection or tracking, and some consist of discontinuous images which are not suitable for video tasks. Thus, we provide a new dataset for I2V video translation, named IRVI. Specifically, it has 12 consecutive video clips of vehicle and monitoring scenes, and both the infrared and visible light videos can be split into 24,352 frames. Comprehensive experiments validate that I2V-GAN is superior to the compared SOTA methods in the translation of I2V videos, with higher fluency and finer semantic details. The code and IRVI dataset are available at https://github.com/BIT-DA/I2V-GAN.


1. Introduction

In real-world applications (J. et al., 2016; S. et al., 2020), human vision is limited in nighttime scenarios and adverse weather conditions. Some vehicle navigation and monitoring systems that use visible (VI) light cameras to enhance visual effects still obtain undesirable results due to natural obstacles such as varying light conditions (K. et al., 2017). Instead, infrared (IR) sensors have an advantage in capturing visual signals related to heat when visible-light cameras do not work well. However, compared with VI images, IR images have low color contrast and representation quality, which makes it difficult for people to recognize objects. In other words, VI images are easy to recognize and contain more fine-grained semantic information but worse target contrast under bad luminance conditions, while IR images have better thermal contrast but fewer environmental semantic details. Therefore, it is of great importance to generate VI color videos from the corresponding collected IR visual signals. Infrared-to-visible (I2V) video translation thus has broad application value for the multimedia and computer vision community, for example in autonomous driving and security. Unfortunately, the power of I2V video translation has yet to be fully unleashed in previous works.

Classic I2V translation techniques are mainly based on image colorization methods, which can be divided into two main categories. The first type is the color morphing model, which learns color mapping functions corresponding to reference colors (E. et al., 2002; Y. et al., 2006; G. et al., 2012; B. et al., 2014; G. and Z., 2015; Z. et al., 2016; A. et al., 2016). These methods are often time-consuming due to intensive manual intervention. For instance, (Y. et al., 2006; B. et al., 2014; G. et al., 2012; A. et al., 2016) require pre-set reference colors to paint onto the gray-scale images or depend heavily on a manually selected reference color image, and the converted images vary considerably according to the references. Another limitation is the ambiguous color orientation for VI images. (T. et al., 2002; E. et al., 2002; Toet, 2003; Hogervorst and Toet, 2007) try to morph colors by finding filters or mapping functions between different color representation spaces. However, since IR images are produced by heat radiation, straightforward color morphing strategies are not effective. With the recent rapid development of deep learning methods (J. et al., 2020, 2021; A. et al., 2020; S. et al., 2019) and transfer learning methods (S. et al., 2018a, 2021), image colorization methods have delved into leveraging Generative Adversarial Networks (GANs) (J. et al., 2014) to colorize images automatically (S. et al., 2017a, b; S. et al., 2018b; B. and D., 2020) by playing a min-max game. Although their translated images look similar to visible light ones, these methods cannot be directly applied to video translation tasks because they ignore the temporal coherence among adjacent frames.

As image-to-image translation methods advance rapidly, video-to-video translation goes one step further and attracts much attention (B. et al., 2018; chen et al., 2019). Specifically, Bansal et al. propose a data-driven video retargeting approach, Recycle-GAN (B. et al., 2018), that jointly considers spatial and temporal constraints based on the well-known Cycle-GAN (Z. et al., 2017). However, the motion changes between consecutive frames are not fully exploited in Recycle-GAN. (G. et al., 2018; C. et al., 2020b; chen et al., 2019) utilize optical flow to maintain temporal consistency, using motion knowledge to effectively alleviate flickering artifacts. Nevertheless, these methods rely heavily on optical flow extraction methods (Dosovitskiy et al., 2015; Ilg et al., 2017) and image warping algorithms with occlusion masks, and defects in each step can cause artifacts and blurring. The low efficacy of optical flow extraction on IR frames makes such approaches ineffective for I2V video translation. As a consequence, the detailed semantic information in the converted VI videos is often unsatisfactory. Another solution for ensuring temporal coherence in video translation is to capitalize on Recurrent Neural Networks (RNNs) (T. et al., 2018a; L. et al., 2020). To gather temporal and spatial information for the current frame, (L. et al., 2020) applies forward and backward RNN units. Since the translation of each frame is related to all frames before and after it, these methods are time-consuming and hard to apply in real-time scenarios.

To address the aforementioned problems, we propose an unpaired I2V video translation method, I2V-GAN, which can generate spatial-temporally consistent videos with more elaborate semantic details and reach real-time performance at the inference phase, inspired by Recycle-GAN (B. et al., 2018). The main idea of Recycle-GAN is to simultaneously train a generator and a predictor to learn spatial and temporal consistency: the generator and predictor synthesize frames in the target domain from the input source frames and the previously synthesized frames. By applying cyclic losses similar to (Z. et al., 2017), Recycle-GAN improves translation performance for videos compared to images. However, as analyzed in (B. et al., 2018; chen et al., 2019) and observed in practical results on some I2V tasks, the cyclic constraints are not sufficient to guarantee that the synthesized frames are strongly continuous, and the results usually suffer from severe flickering artifacts. Therefore, to conduct detailed pixel-to-pixel alignment, we introduce perceptual cyclic losses and two similarity losses across and within the domains to improve the translation performance.

To be specific, the perceptual loss is an additional constraint on each cyclic loss, as shown in Figure 2. We apply a VGG loss network to optimize the feature extraction for content and style: the style optimization uses the Gram matrix of the representation, and the content optimization uses the features extracted from certain layers of VGG. As such, the original constraints are equipped with fine-grained perceptual information from a pixel-wise perspective. Moreover, aiming to capture more semantic details in both domains, we propose an external similarity loss across domains to maximize the mutual information between patches at the same location of the input frames and the synthesized frames. Besides, an internal similarity loss within a domain is introduced to keep the motion variation degree of the synthesized consecutive frames the same as that of the corresponding input frames. In this sense, the visual content in each frame can be transferred in a fine-grained way, and the spatial-temporal coherence is kept realistic and consolidated simultaneously. The network flow of I2V-GAN is shown in Figure 3. Our main purpose is to generate fine-grained translated videos with high fluency and continuity.

Equally important, the most widely used public IR and VI datasets are unsuited to I2V video translation tasks. For example, (Davis and Sharma, 2007) offers monitoring frames of still scenes, which have no motion variation over half of the time and lack diversity. (FLIR, 2018) focuses on object detection, and the image pairs are not strictly consecutive. (M. et al., 2019) is organized for object tracking, and the length of each video clip is limited, so it is not tailored to video translation evaluation. In this situation, we provide a new dataset named IRVI for IR and VI video translation. Our dataset contains 12 video clips of vehicle and monitoring scenarios, and both the infrared and visible light videos can be split into 24,352 frames. More detailed descriptions and comparisons are given in Section 4.

In summary, the contributions of our work are highlighted as below:

(1) To the best of our knowledge, the proposed I2V-GAN is the first end-to-end unpaired infrared-to-visible video translation network.

(2) To generate visible videos with higher fluency and finer semantic details, we propose improved cyclic constraints with content and style perceptual losses, as well as external and internal similarity losses across and within domains.

(3) We provide a new dataset named IRVI for the infrared-to-visible video translation task. Moreover, we present a benchmark on IRVI for several state-of-the-art open-source methods.

Figure 2. Flow graphs of Recycle-GAN (left) and I2V-GAN (right). The graphs show the translation flow of $X \to Y$. The black dashed lines in I2V-GAN are the perceptual cyclic constraints that refine the corresponding Recycle-GAN constraints. The orange and purple dashed lines are the internal and external similarity losses. More details are given in Section 3.

2. Related Work

Infrared-to-Visible Translation. Many vehicle navigation and monitoring systems apply infrared sensors to help enhance visual signals. However, since IR images are formed from heat radiation, it is difficult for people to extract information from them, which motivates us to colorize IR images into the visible domain. Some basic transfer methods for I2V image translation, e.g., (G. and Z., 2015; Z. et al., 2016; S. et al., 2018b), translate IR images into gray-scale rather than color images. On the other hand, current I2V image translation works are mostly proposed for human face tasks (Z. et al., 2018; D. et al., 2019). Although (S. et al., 2017a; N. et al., 2019) transfer IR images to VI images, the color and quality are limited. Moreover, these methods cannot be directly applied to the I2V video translation task, since it not only requires each video frame to look realistic but also requires temporally coherent frames.

Video-to-Video Translation. The development of video-to-video translation is stimulated by the advance of image-to-image translation, which aims to learn a mapping that translates images from a source domain to a target domain. (I. et al., 2017; Z. et al., 2017; C. et al., 2018a; C. et al., 2018b) are proposed to solve different graphics tasks in image-to-image translation, e.g., semantic labels to photo, edges to photo, and photo to animation. In particular, Cycle-GAN (Z. et al., 2017) utilizes a cycle consistency loss to constrain the output to the target domain under unpaired conditions. Soon after, (W. et al., 2018) and (C. et al., 2020a) were proposed to improve the translation details. However, when image translation techniques are directly applied to videos, the generated video is inevitably affected by severe flickering artifacts. Besides, image translation only guarantees spatial consistency and does not involve temporal continuity.

To effectively eliminate these issues, (G. et al., 2018; T. et al., 2018b, 2019; C. et al., 2020b; chen et al., 2019) utilize optical flow and image warping algorithms to maintain temporal consistency. For instance, Mocycle-GAN (chen et al., 2019) proposes motion translation and motion consistency strategies based on Cycle-GAN to preserve spatial and temporal consistency. (T. et al., 2018b) relies heavily on labeled data and works in a supervised way, which is not suitable for the I2V task. (T. et al., 2019) improves on (T. et al., 2018b) and adapts it to the few-shot setting. However, these methods are ineffective for I2V video translation due to the low efficacy of optical flow extraction under the low contrast between consecutive frames. Inspired by Cycle-GAN, Recycle-GAN (B. et al., 2018) proposes to translate videos via an additional recurrent loss and recycle loss, enabling spatial-temporal consistency. However, as analyzed in (B. et al., 2018; chen et al., 2019) and shown by results on some I2V video translation tasks, these constraints are not sufficient to make the synthesized frames as strongly continuous as realistic videos. Moreover, none of these methods is tailored to I2V video translation, for which preserving semantic details is also crucial given low-contrast infrared videos.

In this paper, we take a step further by introducing perceptual cyclic losses and similarity losses for detailed I2V video translation, which achieve higher fluency and finer semantic information. The perceptual cyclic losses aim to improve translation performance for both content and style, and the similarity losses improve video consistency in both spatial and temporal spaces.

Figure 3. The I2V-GAN network architecture with the flow of $X \to Y$; the opposite direction is similar. Frames on the blue background are from the source domain, and frames on the green background are expected to lie in the target domain. The network contains two generators ($G_X$ and $G_Y$) and two predictors ($P_X$ and $P_Y$) to synthesize frames across and within domains, respectively. Meanwhile, two discriminators ($D_X$ and $D_Y$) distinguish real frames from synthesized ones. Given three consecutive frames $x_{t-1}$, $x_t$, $x_{t+1}$ from domain $X$, we first synthesize frames $\hat{y}_{t-1}$, $\hat{y}_t$, $\hat{y}_{t+1}$ by $G_Y$, which are further reconstructed to $\hat{x}_{t-1}$, $\hat{x}_t$, $\hat{x}_{t+1}$ by $G_X$. Meanwhile, the frame predicted by $P_Y$ from the former synthesized frames is also mapped back to domain $X$ by $G_X$. As above, we optimize: 1) the cycle consistency loss and 2) the recycle loss. Moreover, in this direction we simultaneously optimize $P_X$ according to $x_{t-1}$, $x_t$, and $x_{t+1}$, which is named 3) the recurrent loss. To further enhance video translation at the image level and the video level, we introduce 4) an external similarity loss and 5) an internal similarity loss.

3. The Proposed Method

3.1. Notation and Problem Setting

For two video collections x in the source domain $X$ and y in the target domain $Y$, we denote the video in x as ordered frames $x = \{x_1, \dots, x_t, \dots\}$ and the video in y as ordered frames $y = \{y_1, \dots, y_t, \dots\}$, respectively. In particular, $x_t$ denotes the $t$-th frame in video $x$ at time $t$. The goal of video-to-video translation is to learn two mapping functions between the source domain $X$ and the target domain $Y$. More specifically, $G_Y$ is a generator that translates video frames from domain $X$ to domain $Y$ as $G_Y: X \to Y$, and $G_X$ is a generator that translates video frames from domain $Y$ to domain $X$ as $G_X: Y \to X$. Our main purpose is to translate IR video to VI video with spatial-temporal consistency and make the result in line with people's cognitive habits. Moreover, a discriminator $D_Y$ is applied to distinguish real frames $y_t$ from translated frames $G_Y(x_t)$. Similarly, another discriminator $D_X$ distinguishes real frames $x_t$ from $G_X(y_t)$. In the rest of Section 3, we mainly illustrate our method in the direction of $X \to Y$.
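To make the notation concrete, the following minimal PyTorch sketch wires up the two translation directions; the `ResnetGenerator` stand-in and its single-layer body are illustrative placeholders, not the architecture released with I2V-GAN.

```python
import torch
import torch.nn as nn

class ResnetGenerator(nn.Module):
    """Hypothetical stand-in for an encoder-decoder generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # placeholder body

    def forward(self, frame):
        return torch.tanh(self.net(frame))

G_Y = ResnetGenerator()  # maps domain X (infrared) to domain Y (visible)
G_X = ResnetGenerator()  # maps domain Y (visible) back to domain X (infrared)

x_t = torch.randn(1, 3, 256, 256)   # an infrared frame x_t
y_hat_t = G_Y(x_t)                  # translated visible frame
x_rec_t = G_X(y_hat_t)              # reconstruction back to the infrared domain
```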

3.2. Adversarial Constraint

For GAN (J. et al., 2014) based methods (Z. et al., 2017; C. et al., 2018a; B. et al., 2018; chen et al., 2019), the generators and discriminators are adversarially trained to improve their mutual performance. Specifically, given synthesized frames $G_Y(x_t)$ generated by $G_Y$ and real frames $y_t$, the discriminator $D_Y$ is trained to correctly distinguish the fake synthesized frames. Meanwhile, the generator is trained to synthesize high-quality frames in order to fool the discriminator. Therefore, the adversarial network can generate frames similar to the targets via the following loss:

$\mathcal{L}_{adv}(G_Y, D_Y) = \mathbb{E}_{y_t \sim Y}[\log D_Y(y_t)] + \mathbb{E}_{x_t \sim X}[\log(1 - D_Y(G_Y(x_t)))]$   (1)
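As a rough illustration of Eq. (1), the sketch below computes the discriminator and generator terms with a binary cross-entropy formulation; the function name and the assumption that $D_Y$ outputs raw logits are ours, not taken from the released code.

```python
import torch
import torch.nn.functional as F

def adversarial_losses(D_Y, G_Y, x_t, y_t):
    """GAN losses for the X -> Y direction; D_Y is assumed to output raw logits."""
    y_fake = G_Y(x_t)

    real_logits = D_Y(y_t)
    fake_logits = D_Y(y_fake.detach())
    # Discriminator: push real frames toward 1 and synthesized frames toward 0.
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))

    # Generator: try to make the discriminator label synthesized frames as real.
    gen_logits = D_Y(y_fake)
    g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    return d_loss, g_loss
```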

3.3. Recycle-GAN Revisit

Cycle Consistency Loss. Cycle-GAN (Z. et al., 2017) proposes a cycle consistency loss for unpaired image-to-image translation. Specifically, when the generator $G_Y$ synthesizes a fake frame $G_Y(x_t)$ corresponding to the input frame $x_t$, the other generator $G_X$ should reconstruct the frame $x_t$ from $G_Y(x_t)$, which can be denoted as $x_t \to G_Y(x_t) \to G_X(G_Y(x_t)) \approx x_t$. The reconstruction loss is then formulated as:

$\mathcal{L}_{cyc}(G_X, G_Y) = \sum_t \| G_X(G_Y(x_t)) - x_t \|^2$   (2)
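The following short sketch shows how the cycle consistency of Eq. (2) can be evaluated in PyTorch for both directions; the squared penalty mirrors the Recycle-GAN formulation, and the helper name is hypothetical.

```python
import torch

def cycle_consistency_loss(G_X, G_Y, x_t, y_t):
    """Cycle consistency: a frame translated to the other domain and back
    should reproduce the original frame."""
    x_rec = G_X(G_Y(x_t))   # X -> Y -> X
    y_rec = G_Y(G_X(y_t))   # Y -> X -> Y
    return torch.mean((x_rec - x_t) ** 2) + torch.mean((y_rec - y_t) ** 2)
```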

Recycle-GAN  (B. et al., 2018) takes a step further based on Cycle-GAN. Besides cycle consistency loss in the spatial space, it introduces two additional losses for translation consistency in the temporal space.

Recurrent Loss. Firstly, it applies a temporal predictor $P_X$ to predict the future frame $x_{t+1}$ based on an ordered frame sequence $x_{1:t}$. The predicted frame $P_X(x_{1:t})$ should be the same as $x_{t+1}$:

$\mathcal{L}_{rec}(P_X) = \sum_t \| P_X(x_{1:t}) - x_{t+1} \|^2$   (3)

where we write the ordered frame sequence $\{x_1, \dots, x_t\}$ as $x_{1:t}$.

Recycle Loss. Along with the above-mentioned predictor, a richer constraint across domains and time is proposed:

$\mathcal{L}_{recycle}(G_X, G_Y, P_Y) = \sum_t \| G_X(P_Y(G_Y(x_{1:t}))) - x_{t+1} \|^2$   (4)

where $G_Y(x_{1:t}) = \{G_Y(x_1), \dots, G_Y(x_t)\}$. This approach constrains the generators $G_X$, $G_Y$ and the predictor $P_Y$ in both spatial and temporal spaces. In the implementation, the predicted future frame is related to the two past frames $x_{t-1}$ and $x_t$.
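A hedged sketch of the recurrent and recycle terms (Eqs. 3 and 4), assuming, as in the Recycle-GAN implementation, that the predictor consumes the two past frames concatenated along the channel dimension; the function and argument names are placeholders.

```python
import torch

def recurrent_loss(P_X, x_prev2, x_prev1, x_next):
    """Recurrent loss: the temporal predictor P_X estimates the next frame
    from the two previous frames of the same domain."""
    x_pred = P_X(torch.cat([x_prev2, x_prev1], dim=1))
    return torch.mean((x_pred - x_next) ** 2)

def recycle_loss(G_X, G_Y, P_Y, x_prev2, x_prev1, x_next):
    """Recycle loss: translate past frames to domain Y, predict the next frame
    there with P_Y, then map it back to domain X and compare with x_next."""
    y_hat_prev2, y_hat_prev1 = G_Y(x_prev2), G_Y(x_prev1)
    y_pred = P_Y(torch.cat([y_hat_prev2, y_hat_prev1], dim=1))
    x_back = G_X(y_pred)
    return torch.mean((x_back - x_next) ** 2)
```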

3.4. Perceptual Loss

Cycle-GAN has achieved satisfying style translation performance, but it only focuses on image-to-image translation. Since this paper concentrates on video-to-video translation, we propose an additional perceptual loss for fine-grained translation at the image level. Our perceptual loss works from the two aspects of content and style: the content perceptual loss calculates the feature discrepancy between synthesized frames and real frames, while the style perceptual loss computes the Gram loss between synthesized frames and real frames at layer $l$ of the VGG network. Accordingly, our perceptual loss for the direction $X \to Y$ is formulated as:

$\mathcal{L}_{content}(\hat{x}, x) = \sum_l \frac{1}{C_l H_l W_l} \| \phi_l(\hat{x}) - \phi_l(x) \|_2^2$   (5)
$\mathcal{L}_{style}(\hat{x}, x) = \sum_l \| \mathcal{G}(\phi_l(\hat{x})) - \mathcal{G}(\phi_l(x)) \|_2^2$   (6)
$\mathcal{L}_{P}(\hat{x}, x) = \mathcal{L}_{content}(\hat{x}, x) + \mathcal{L}_{style}(\hat{x}, x)$   (7)

where $\hat{x}$ denotes a synthesized frame, $x$ the corresponding real frame, $\phi_l$ represents the $l$-th layer of the VGG network, $C_l$ represents the number of channels of the feature map, and $H_l$, $W_l$ represent the height and width of the feature map at layer $l$, respectively. $\mathcal{G}(\cdot)$ denotes the Gram matrix. The losses for the direction $Y \to X$ are similar.

We apply this perceptual loss as an additional constraint in $\mathcal{L}_{cyc}$, $\mathcal{L}_{rec}$, and $\mathcal{L}_{recycle}$. Each refined loss can be formulated as below:

$\mathcal{L}_{cyc}^{P}(G_X, G_Y) = \sum_t \| G_X(G_Y(x_t)) - x_t \|_1 + \mathcal{L}_{P}(G_X(G_Y(x_t)), x_t)$   (8)
$\mathcal{L}_{rec}^{P}(P_X) = \sum_t \| P_X(x_{1:t}) - x_{t+1} \|_1 + \mathcal{L}_{P}(P_X(x_{1:t}), x_{t+1})$   (9)
$\mathcal{L}_{recycle}^{P}(G_X, G_Y, P_Y) = \sum_t \| G_X(P_Y(G_Y(x_{1:t}))) - x_{t+1} \|_1 + \mathcal{L}_{P}(G_X(P_Y(G_Y(x_{1:t}))), x_{t+1})$   (10)

Here we alter the original $L_2$ penalty to $L_1$ for more precise translation. For the implementation, we keep the same three-consecutive-frame training mechanism as Recycle-GAN. The network structure and training flow are presented in Figure 3.
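The perceptual term can be sketched as below with a frozen VGG-19 from torchvision; the particular layer indices and the equal weighting of content and style are assumptions made for illustration rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

_LAYERS = {3, 8, 17, 26}  # assumed VGG-19 feature layers of interest
_vgg = vgg19(pretrained=True).features.eval()  # newer torchvision prefers the `weights=` argument
for p in _vgg.parameters():
    p.requires_grad_(False)

def _features(img):
    """Collect feature maps from the selected VGG layers."""
    feats, h = [], img
    for i, layer in enumerate(_vgg):
        h = layer(h)
        if i in _LAYERS:
            feats.append(h)
    return feats

def _gram(f):
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def perceptual_loss(fake, real):
    """Content discrepancy plus Gram-matrix style discrepancy, summed over layers."""
    loss = 0.0
    for ff, fr in zip(_features(fake), _features(real)):
        loss = loss + F.mse_loss(ff, fr)                 # content term
        loss = loss + F.mse_loss(_gram(ff), _gram(fr))   # style (Gram) term
    return loss
```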

3.5. Similarity Loss

In practical scenarios, although the perceptual cyclic losses help the network learn video translation in a more detailed way, there still exist color mismatches between consecutive frames, which makes the results not good enough for real applications. In this circumstance, we introduce a noise contrastive estimation (NCE) loss into our network, which helps detailed translation by maximizing mutual information. The idea of noise contrastive estimation is to mine the relationship between two kinds of signal patches (normally denoted as "query" and "positive"), while contrasting the query with other patches (referred to as "negatives"). We apply this similarity estimation across domains and within a domain to maximize spatial-temporal consistency, yielding the external similarity loss $\mathcal{L}_{ext}$ and the internal similarity loss $\mathcal{L}_{int}$, respectively.

Figure 4. External similarity loss. The frame $x_t$ is the input from domain $X$ and $\hat{y}_t$ is the corresponding synthesized frame in domain $Y$. Each synthesized patch in $\hat{y}_t$ should correlate as much as possible with the patch at the same spatial location in $x_t$, i.e., the synthesized van patch as "query" should be close to the "positive" van patch and increase its distance from the "negatives".

External Similarity Loss. The noise contrastive estimation (A. et al., 2018) can be regarded as an $(N+1)$-way classification problem. The input frames and synthesized frames go through the encoder layers of the generator, then a two-layer MLP (T. et al., 2020) projects the selected patches into a shared embedding space. The query, positive, and $N$ negative patches are mapped into $K$-dimensional vectors $v$, $v^{+}$, and $v^{-}$, and the loss can be formulated by cross-entropy as:

$\ell(v, v^{+}, v^{-}) = -\log \left[ \frac{\exp(v \cdot v^{+} / \tau)}{\exp(v \cdot v^{+} / \tau) + \sum_{n=1}^{N} \exp(v \cdot v^{-}_{n} / \tau)} \right]$   (11)

where $\tau$ is a temperature scale parameter. Moreover, for each patch in the synthesized frames, the corresponding positive patch lies at the same location in the input frames, and the other patches are randomly selected from the inputs as negatives for $\mathcal{L}_{ext}$, as shown in Figure 4. We formulate it as:

$\mathcal{L}_{ext}(G_Y, X) = \mathbb{E}_{x \sim X} \sum_{l=1}^{L} \sum_{s=1}^{S_l} \ell(\hat{z}_{l}^{s}, z_{l}^{s}, z_{l}^{S \setminus s})$   (12)

where $\hat{z}_{l}^{s}$ represents the output features at spatial location $s$ of layer $l$ for the synthesized frame, $z_{l}^{s}$ represents the corresponding positive features, and $z_{l}^{S \setminus s}$ represents the other negative features. The purpose is to restrict the translation between corresponding patches, i.e., the synthesized white van should be more closely associated with the van patch than with the car and sky patches.
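A compact sketch of the patch-wise InfoNCE computation behind Eqs. (11)-(12); the temperature value and the cosine-similarity logits are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(query, positive, negatives, tau=0.07):
    """InfoNCE over patch embeddings. query/positive: (P, K) vectors from the
    synthesized and input frames at the same spatial locations; negatives:
    (P, N, K) vectors from other locations. tau=0.07 is an assumed value."""
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    l_pos = (query * positive).sum(dim=-1, keepdim=True)           # (P, 1)
    l_neg = torch.bmm(negatives, query.unsqueeze(-1)).squeeze(-1)   # (P, N)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau                 # (P, 1 + N)

    # The positive patch sits at index 0, so the target class is 0 for every query.
    targets = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, targets)
```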

Figure 5. Internal similarity loss. The frames $x_{t-1}$, $x_t$, $x_{t+1}$ are inputs from domain $X$; $\hat{y}_{t-1}$, $\hat{y}_t$, and $\hat{y}_{t+1}$ are the corresponding frames synthesized by I2V-GAN. We first compute the mutual information via NCE between the consecutive frames $\langle x_{t-1}, x_t \rangle$ and $\langle x_t, x_{t+1} \rangle$, then treat the ratio of the two as the standard motion variation degree. After obtaining the standard ratio, we further compute the motion degree ratio of $\langle \hat{y}_{t-1}, \hat{y}_t \rangle$ and $\langle \hat{y}_t, \hat{y}_{t+1} \rangle$. By restricting the two motion variation degrees to agree, we improve the spatial-temporal consistency of the synthesized video.

Internal Similarity Loss. We further utilize mutual information to represent the similarity between consecutive frames and propose the internal similarity loss $\mathcal{L}_{int}$ within the domain.

Intuitively, classification confidence is associated with the appearance of an object: the more of the object that appears, the higher the confidence. Combined with the noise contrastive estimation of mutual information, we can further regard the variation of this quantity as a motion variation degree between consecutive frames. In order to measure this variation, we first compute the discrepancy between consecutive input frames and treat it as a criterion. More specifically, the mapped feature vectors at layer $l$ represent the current spatial-temporal information at level $l$. We contrast the feature vectors of two consecutive frames at the same layer by calculating their NCE similarity. Since we select $L$ layers of interest, we obtain an $L$-dimensional vector $V_x$ representing the discrepancy at the $L$ levels. Meanwhile, the corresponding synthesized frames yield another $L$-dimensional vector $V_{\hat{y}}$. Our goal is to constrain the degree of motion between the generated consecutive frames to be consistent with that of the corresponding real frames. Thus, we simultaneously compute the vectors $V_x$ and $V_{\hat{y}}$, as shown in Figure 5, and formulate our $\mathcal{L}_{int}$ as:

$V_x = \left[ \frac{\mathrm{NCE}_l(x_{t-1}, x_t)}{\mathrm{NCE}_l(x_t, x_{t+1})} \right]_{l=1,\dots,L}, \quad V_{\hat{y}} = \left[ \frac{\mathrm{NCE}_l(\hat{y}_{t-1}, \hat{y}_t)}{\mathrm{NCE}_l(\hat{y}_t, \hat{y}_{t+1})} \right]_{l=1,\dots,L}$   (13)
$\mathcal{L}_{int}(G_Y, X) = \sum_t \| V_x - V_{\hat{y}} \|_1$   (14)

where $V_x$ and $V_{\hat{y}}$ represent the variation degrees of three consecutive real and synthesized frames, respectively. Moreover, since our goal is to constrain this variation degree, we need a rigorous standard from the input frames and must keep the correspondence between relevant patches. The patches are therefore selected in the same spatial order at each optimization step, rather than randomly as in $\mathcal{L}_{ext}$.
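The internal similarity constraint could be approximated as in the sketch below, which compares the motion-variation ratios of real and synthesized frame triplets at each selected feature level; the simplified NCE proxy and the $L_1$ penalty are our assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def nce_similarity(feat_a, feat_b, tau=0.07):
    """Scalar similarity between the patch embeddings of two consecutive frames
    at one feature level; feat_a/feat_b: (P, K). A simplified proxy for the
    mutual-information estimate described in the paper."""
    a = F.normalize(feat_a, dim=-1)
    b = F.normalize(feat_b, dim=-1)
    return ((a * b).sum(dim=-1) / tau).mean()

def internal_similarity_loss(real_feats, fake_feats):
    """For each of the L selected levels, compare the motion-variation ratio of
    three consecutive real frames with that of the synthesized frames.
    real_feats/fake_feats: lists of L tuples (f_{t-1}, f_t, f_{t+1})."""
    loss = 0.0
    for (r0, r1, r2), (s0, s1, s2) in zip(real_feats, fake_feats):
        ratio_real = nce_similarity(r0, r1) / (nce_similarity(r1, r2) + 1e-8)
        ratio_fake = nce_similarity(s0, s1) / (nce_similarity(s1, s2) + 1e-8)
        loss = loss + torch.abs(ratio_real - ratio_fake)
    return loss / len(real_feats)
```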

3.6. Overall Optimization

In summary, our final objective function is organized as below:

$\mathcal{L}(G_X, G_Y, P_X, P_Y, D_X, D_Y) = \mathcal{L}_{adv} + \lambda_{cyc} \mathcal{L}_{cyc}^{P} + \lambda_{rec} \mathcal{L}_{rec}^{P} + \lambda_{recycle} \mathcal{L}_{recycle}^{P} + \lambda_{ext} \mathcal{L}_{ext} + \lambda_{int} \mathcal{L}_{int}$   (15)

where $\lambda_{cyc}$, $\lambda_{rec}$, and $\lambda_{recycle}$ are the tradeoff parameters for the perceptual cyclic losses, and $\lambda_{ext}$ and $\lambda_{int}$ are the tradeoff parameters for $\mathcal{L}_{ext}$ and $\mathcal{L}_{int}$, respectively.
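For clarity, a minimal sketch of how the terms of Eq. (15) could be combined into one training objective; the weight values are placeholders, since the paper's exact tradeoff settings are not reproduced here.

```python
import torch

def total_objective(losses, weights):
    """Weighted sum of the I2V-GAN terms. `losses` maps loss names to scalar
    tensors; `weights` maps the same names to tradeoff parameters."""
    return sum(weights[name] * value for name, value in losses.items())

# Placeholder weights and dummy loss values for illustration only.
weights = {"adv": 1.0, "cyc": 10.0, "rec": 10.0, "recycle": 10.0, "ext": 1.0, "int": 1.0}
losses = {name: torch.tensor(0.5, requires_grad=True) for name in weights}

total = total_objective(losses, weights)
total.backward()  # in training, the generators and predictors are updated from this sum
```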

4. IRVI Dataset

In general, infrared sensors come in two types: near-infrared cameras and long-wave infrared cameras. The radiation band of the human body falls within the range of the latter. Therefore, the long-wave infrared camera is better suited to recognition, security, and vehicle driving scenes, and we use long-wave equipment to collect the data.

SUBSET                TRAIN   TEST   TOTAL FRAME
Traffic               17000   1000   18000
Monitoring  sub-1      1384    347    1731
            sub-2      1040    260    1300
            sub-3      1232    308    1540
            sub-4       672    169     841
            sub-5       752    188     940
Monitoring total                      6352
Table 1. The structure of IRVI.
NAME                 FRAME   CLIP   TASK
IRVI                 24352     12   video translation
VOT2019 (RGBTIR)     20083     60   object tracking
FLIR                  4224      1   object detection
KAIST (DAY ROAD)     16176      9   object detection
Table 2. Dataset comparison information.

The dataset collects video streams through a binocular infrared color camera (DTC equipment) and performs scene alignment across the two domains at the hardware level. Over different periods, the infrared and visible light data of traffic and monitoring scenes were obtained with vehicle-mounted and fixed-point brackets, respectively. Examples of each scene are shown in Figure 6; each example includes infrared frames and visible light frames. The composition and quantity of the dataset are detailed in Table 1.

Figure 6. Examples from IRVI. We present 6 consecutive frames from each subset of IRVI. In particular, (a) is from Traffic and (b)-(f) are from the 5 subsets of Monitoring, with infrared on the left and the corresponding visible light on the right.

We select three publicly available infrared and visible light video datasets for comparison: VOT2019-RGBTIR (M. et al., 2019), FLIR (FLIR, 2018), and KAIST (S. et al., 2015) in the day road scene. The detailed comparison is shown in Table 2.

Specifically, VOT2019-RGBTIR provides 60 video clips, but their duration and quality vary. Although FLIR offers other infrared and visible light image pairs besides the 4224 frames listed in Table 2, they are not consecutive and therefore are not counted as videos. KAIST is the largest dataset among the three: it contains 6 subsets for training and another 6 subsets for testing, all focusing on traffic scenes that differ in place and time. Our IRVI dataset likewise contains 6 clips for training and another 6 clips for testing, and all of our video clips are continuous, as shown in Figure 6. On the other hand, the other three datasets were collected for object tracking and detection, whereas IRVI is collected for I2V video translation and contains limited-luminance scenes. Moreover, IRVI incorporates monitoring scenes besides traffic, which makes it practical for real-world applications.

5. Experiments

5.1. Datasets and Setup

IRVI Dataset. Our experiments are mainly based on the IRVI dataset. This dataset consists of two parts, traffic and monitoring, as introduced in Section 4. The resolution of each video is scaled to 256 × 256. This translation task aims to colorize infrared video so that it is as similar as possible to real color video. Meanwhile, the translation results should eliminate the impact of adverse environmental factors and remain within the scope of human cognition.

Flower Video Dataset. This dataset records the life cycles of different flowers through time-lapse photography, depicting blooming or fading without any synchronization. The resolution of each video is 256 × 256. We evaluate the translation between different types of flowers as in Recycle-GAN (B. et al., 2018) and Mocycle-GAN (chen et al., 2019). This translation task aims to align two flowers so that they bloom or fade simultaneously.

Implementation Details.

We implement the I2V-GAN network in PyTorch (A. et al., 2019). In detail, we use a ResNet-based generator (K. et al., 2016) with an encoder-decoder structure and a PatchGAN discriminator. To embed the features extracted from each layer of the encoder, we apply a two-layer MLP with 256 units after encoding. In all experiments, we use fixed tradeoff parameters for the perceptual cyclic losses and for $\mathcal{L}_{ext}$ and $\mathcal{L}_{int}$, shared across all tasks. During training, the batch size is set to 1 to achieve better performance.
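The two-layer MLP projection head mentioned above could look like the sketch below; the patch-sampling strategy, the number of sampled patches, and the input channel width are illustrative assumptions rather than the released configuration.

```python
import torch
import torch.nn as nn

class PatchProjector(nn.Module):
    """Two-layer MLP with 256 units that embeds patch features sampled from an
    encoder layer; patch sampling here is simplified for illustration."""
    def __init__(self, in_dim, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, feat, num_patches=64):
        b, c, h, w = feat.shape
        flat = feat.permute(0, 2, 3, 1).reshape(b, h * w, c)          # (B, H*W, C)
        idx = torch.randperm(h * w, device=feat.device)[:num_patches]  # random patch locations
        patches = flat[:, idx, :]                                      # sampled patch features
        return nn.functional.normalize(self.mlp(patches), dim=-1)      # (B, P, 256)

proj = PatchProjector(in_dim=128)                 # assumed encoder channel width
emb = proj(torch.randn(1, 128, 64, 64))           # batch size 1, as in training
```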

Figure 7. I2V traffic samples. In this scene, the visible light camera suffers from exposure problems when the vehicle passes under the bridge. Our goal is to eliminate this adverse effect by translating the infrared frames and making the results consistent with human cognition.

5.2. Evaluation Metrics

For the infrared-to-visible video translation and flower-to-flower translation tasks, we first evaluate the Fréchet Inception Distance (FID) (M. et al., 2017) at the image level. FID calculates the distance between the real frames and the synthesized frames in feature space, where the feature representations are extracted from the Inception network (C. et al., 2016); a lower value indicates that the distribution of the generated frames is closer to the real distribution. Then we evaluate the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) at the pixel and structure levels, respectively. PSNR is generally regarded as an indicator of translation quality for colorization methods; a higher PSNR indicates less distortion of the images. SSIM reveals the structural information of the objects carried by the interdependence of pixels; a higher SSIM means the two compared objects are more similar. Moreover, since FID, PSNR, and SSIM can only represent performance to a certain extent and are not fully suitable for video evaluation, we conduct an additional user study for application feedback, as in (B. et al., 2018; chen et al., 2019; T. et al., 2018b, 2019).
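For reference, per-frame PSNR and SSIM can be computed with scikit-image as sketched below; averaging over frames to obtain video-level scores is our assumption about the evaluation protocol, not a detail stated in the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frame_metrics(real, fake):
    """Per-frame PSNR and SSIM between a real and a synthesized frame, both
    uint8 HxWx3 arrays. channel_axis=-1 assumes scikit-image >= 0.19."""
    psnr = peak_signal_noise_ratio(real, fake, data_range=255)
    ssim = structural_similarity(real, fake, data_range=255, channel_axis=-1)
    return psnr, ssim

real = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # dummy frames
fake = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
print(frame_metrics(real, fake))  # video-level scores would average these over frames
```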

Method        Flower   Traffic   sub-1    sub-2    sub-3    sub-4    sub-5    all
CUT           0.7861   0.5739    1.4348   0.6731   2.2294   1.0553   2.1973   1.0893
PCSGAN        0.5680   0.6436    1.0963   0.7315   2.0843   1.1034   1.6808   0.8912
Cycle-GAN     0.8164   0.6714    1.4027   0.8056   2.1497   1.0359   1.6266   0.8792
Mocycle-GAN   0.7135   0.7911    1.5556   0.9847   2.5013   1.1040   2.1171   1.0515
Recycle-GAN   0.6306   0.5255    1.6680   0.7521   2.0387   1.2959   1.8518   1.0609
I2V-GAN       0.4891   0.4425    1.4840   0.5905   1.7916   0.9189   1.6015   0.8715
Table 3. Fréchet Inception Distance (FID) for different translation methods on Flower, Traffic, and the Monitoring subsets (sub-1 to sub-5 and all). Lower is better.
Method        Flower       Traffic      sub-1        sub-2        sub-3        sub-4        sub-5        all
CUT           9.27/0.33    16.86/0.56   15.87/0.45   20.83/0.50   13.13/0.41   19.06/0.53   14.13/0.15   17.00/0.43
PCSGAN        9.43/0.34    15.32/0.55   15.18/0.48   20.49/0.49   13.37/0.40   18.52/0.52   14.52/0.14   17.19/0.43
Cycle-GAN     9.65/0.31    14.87/0.54   15.78/0.48   19.43/0.50   14.02/0.41   18.43/0.52   14.15/0.14   17.14/0.43
Mocycle-GAN   9.39/0.37    15.60/0.56   14.99/0.43   19.39/0.50   11.51/0.40   18.84/0.54   13.83/0.15   17.09/0.43
Recycle-GAN   9.41/0.37    16.84/0.56   14.64/0.44   20.32/0.49   13.15/0.43   18.28/0.54   13.13/0.14   16.34/0.43
I2V-GAN       10.58/0.40   17.02/0.60   14.81/0.51   21.20/0.52   14.11/0.47   19.26/0.59   13.96/0.15   17.30/0.46
Table 4. Peak Signal-to-Noise Ratio (PSNR) / Structural Similarity (SSIM) for different translation methods on Flower, Traffic, and the Monitoring subsets (sub-1 to sub-5 and all). Higher is better.
Method                              Flower: FID, PSNR/SSIM   Traffic: FID, PSNR/SSIM   Monitoring-all: FID, PSNR/SSIM
Mocycle-GAN                         0.7135, 9.39/0.37        0.7911, 15.60/0.56        1.0515, 17.09/0.43
Mocycle-GAN + perceptual loss       0.6028, 9.76/0.38        0.6539, 16.07/0.57        0.9002, 17.20/0.44
Mocycle-GAN + external sim. loss    0.6540, 9.72/0.39        0.7834, 16.58/0.57        0.9529, 17.14/0.44
Mocycle-GAN + internal sim. loss    0.6238, 9.67/0.37        0.6828, 16.30/0.58        0.9632, 17.21/0.44
Recycle-GAN                         0.6306, 9.41/0.37        0.5255, 16.84/0.56        1.0609, 16.34/0.43
Recycle-GAN + perceptual loss       0.5387, 9.61/0.39        0.4520, 16.84/0.57        0.9134, 17.01/0.43
Recycle-GAN + external sim. loss    0.6194, 10.01/0.39       0.5259, 16.88/0.58        1.0209, 16.65/0.44
Recycle-GAN + internal sim. loss    0.5813, 9.83/0.37        0.4610, 16.98/0.58        0.9353, 16.92/0.44
I2V-GAN w/o perceptual loss         0.6016, 10.37/0.39       0.4987, 16.24/0.59        1.0539, 17.15/0.44
I2V-GAN w/o external sim. loss      0.5141, 9.91/0.38        0.4688, 16.65/0.58        0.9011, 17.20/0.44
I2V-GAN w/o internal sim. loss      0.5311, 10.28/0.39       0.4514, 16.24/0.57        0.9996, 17.19/0.44
Table 5. Ablation study and further investigation results.

5.3. Compared Approaches

We include the following state-of-the-art and most relevant unpaired translation methods for performance comparison: (1) CUT (P. et al., 2020), an unpaired image translation method that utilizes contrastive learning. (2) PCSGAN (B. and D., 2020), an image-level I2V translator based on GAN. (3) Cycle-GAN (Z. et al., 2017), which pursues an inverse translation at the image level to improve image translation performance. (4) Mocycle-GAN (chen et al., 2019), a motion-guided Cycle-GAN for video translation that applies optical flow for motion estimation. (5) Recycle-GAN (B. et al., 2018), the main baseline, which leverages a recurrent temporal predictor to generate future frames and pursues a new cycle consistency across domains and time for unpaired video-to-video translation. (6) I2V-GAN, the method proposed in this paper. We compare the experimental results of each method and list the FID, PSNR, and SSIM scores in Table 3 and Table 4.

The sub-1 clip in IRVI contains many drastic camera movements, which cause large semantic gaps between consecutive frames, whereas the camera is steady in the other clips. As a result, sub-1 is a hard case for video methods that consider temporal dependencies among frames, while image methods show advantages when the frames are nearly independent and thus perform better than the video ones on this clip. Our method performs better in most tasks, especially compared with the video-based methods.

Figure 8. Flower samples. Each row contains different life points of the flower, generated by the method noted on the left. Although some failures are not conspicuous at the image level, they become intolerable after combining the frames into a video, since spatial-temporal consistency is not guaranteed.

5.4. Ablation Study and Further Investigation

In this section, we further study the three proposed constraints: 1) the perceptual cyclic loss, 2) the external similarity loss, and 3) the internal similarity loss. We add one of them to Mocycle-GAN and Recycle-GAN, then evaluate FID, PSNR, and SSIM on the IRVI Traffic and Monitoring (all) scenes, as well as on Flower in the same setting. The ablation study is implemented by removing one of the three constraints from I2V-GAN and evaluating FID, PSNR, and SSIM in the same way. The experimental results in Table 5 indicate that our proposed constraints improve the performance.

Method Realism Fluency
CUT 7.74 / 10 5.21 / 10
PCSGAN 5.79 / 10 6.94 / 10
Cycle-GAN 4.57 / 10 4.41 / 10
Mocycle-GAN 4.14 / 10 7.23 / 10
Recycle-GAN 5.21 / 10 6.84 / 10
I2V-GAN 8.33 / 10 9.10 / 10
Table 6. User study results.

5.5. User Study

Since FID, PSNR, and SSIM cannot fully represent the real performance at the video level, we conduct an additional user study, as shown in Table 6. We first select the traffic translation result videos of each method over the same time period, as well as the monitoring and flower results. We then randomly arrange these videos without any information about which method generated them. After that, we ask 20 professional researchers to judge their realism and fluency with scores from 1 to 10, where a higher score indicates better performance.

Figure 9. I2V monitoring samples, 5 consecutive frames from top to bottom. The results of I2V-GAN are more realistic and fluent than those of the other compared methods.

6. Conclusion

In this paper, we propose a novel infrared-to-visible video translation network named I2V-GAN, which can also be applied to other video translation tasks. Compared with existing state-of-the-art image-to-image and video-to-video translation methods, our method simultaneously improves the details and fluency of video translation in the unpaired setting. Extensive experiments show the efficacy of our proposal. In particular, we compared the detailed effects of each part of our improvements in the ablation study. Moreover, an additional user study from different perspectives demonstrates that I2V-GAN is more effective and suitable for real application scenarios.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (61902028).

References

  • A. et al. (2020) Naofumi A., Akio H., Andrew S., and Takuya N. 2020. Reference-Based Video Colorization with Spatiotemporal Correspondence, Vol. abs/2011.12528. arXiv:2011.12528
  • A. et al. (2018) Oord A., Li Y., and Vinyals O. 2018. Representation Learning with Contrastive Predictive Coding. In CoRR, Vol. abs/1807.03748. arXiv:1807.03748
  • A. et al. (2019) Paszke A., Gross S., Massa F., Lerer A., Bradbury J., Chanan G., Killeen T., Lin Z., Gimelshein N., Antiga L., et al. 2019. PyTorch: An imperative style, high-performance deep learning library. In NIPS.
  • A. et al. (2016) Ulhaq A., Yin X., He J., and Zhang Y. 2016. FACE: Fully Automated Context Enhancement for night-time video sequences. In VCIR.
  • B. et al. (2018) Aayush B., Shugao M., Deva R., and Sheikh Y. 2018. Recycle-GAN: Unsupervised Video Retargeting. In ECCV.
  • B. and D. (2020) Kancharagunta K. B. and Shiv R. D. 2020. PCSGAN: Perceptual cyclic-synthesized generative adversarial networks for thermal and NIR to visible image transformation. In Neurocomputing.
  • B. et al. (2014) Sheng B., Sun H., Magnor M., and Li P. 2014. Video Colorization Using Parallel Optimization in Feature Space. In IEEE.
  • C. et al. (2016) Szegedy C., Vanhoucke V., Ioffe S., Shlens J., and Wojna Z. 2016. Rethinking the inception architecture for computer vision. In CVPR.
  • C. et al. (2020b) Xinghao C., Yiman Z., Yunhe W., Han S., Chunjing X., and Chang X. 2020b. Optical Flow Distillation: Towards Efficient and Stable Video Style Transfer. In ECCV.
  • C. et al. (2018a) Yunjey C., Minje C., Munyoung K., JungWoo H., Sunghun K., and Jaegul C. 2018a. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR.
  • C. et al. (2018b) Yang C., Yu K. L., and Yong J. L. 2018b. CartoonGAN: Generative Adversarial Networks for Photo Cartoonization. In CVPR.
  • C. et al. (2020a) Yunjey C., Youngjung U., Jaejun Y., and JungWoo H. 2020a. StarGAN v2: Diverse Image Synthesis for Multiple Domains. In CVPR.
  • chen et al. (2019) Y chen, Y Pan, T Yao, X Tian, and T Mei. 2019. Mocycle-GAN: Unpaired Video-to-Video Translation. In ACM MM.
  • D. et al. (2019) Naser D., Fadi B., Khawla M., Florian K., Jean-Luc D., and Arjan K. 2019. Cascaded Generation of High-quality Color Visible Face Images from Thermal Captures. In CoRR, Vol. abs/1910.09524. arXiv:1910.09524
  • Davis and Sharma (2007) J. Davis and V. Sharma. 2007. Background-Subtraction using Contour-based Fusion of Thermal and Visible Imagery. In CVIU.
  • Dosovitskiy et al. (2015) A. Dosovitskiy, P. Fischer, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov, P. v.d. Smagt, D. Cremers, and T. Brox. 2015. FlowNet: Learning Optical Flow with Convolutional Networks. In ICCV.
  • E. et al. (2002) Reinhard E., Ashikhmin M., Gooch B., and Shirley P. 2002. Color transfer between images. In IEEE CGA.
  • FLIR (2018) FLIR. 2018. FREE FLIR Thermal Dataset for Algorithm Training. https://www.flir.com/oem/adas/adas-dataset-form/
  • G. and Z. (2015) Bhatnagar G. and Liu Z. 2015. A novel image fusion framework for night-vision navigation and surveillance. In SIVP.
  • G. et al. (2018) Chang G., Derun G., Fangjun Z., and Yizhou Y. 2018. ReCoNet: Real-time Coherent Video Style Transfer Network. In ACCV.
  • G. et al. (2012) Raj K. G., Alex Y. S. C., Deepu R., Ee S. N., and Zhiyong H. 2012. Image colorization using similar images. In ACM MM.
  • Hogervorst and Toet (2007) M. A. Hogervorst and A. Toet. 2007. Fast and true-to-life application of daytime colours to night-time imagery. In ICIF.
  • I. et al. (2017) Phillip I., Jun Y. Z., Tinghui Z., and Alexei A. 2017. Image-to-image translation with conditional adversarial networks. In CVPR.
  • Ilg et al. (2017) E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. 2017. FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks. In CVPR.
  • J. et al. (2014) Goodfellow I. J., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., and Bengio Y. 2014. Generative Adversarial Networks. In NIPS.
  • J. et al. (2020) Li J., Chen E., Ding Z., Zhu L., Lu K., and Shen H. 2020. Maximum density divergence for domain adaptation. In IEEE TPAMI.
  • J. et al. (2021) Li J., Jing M., Su H., Lu K., Zhu L., and Shen H. 2021. Faster Domain Adaptation Networks. In IEEE TKDE.
  • J. et al. (2016) Zhang J., Cao Y., and Wang Z. 2016. Nighttime Haze Removal with Illumination Correction, Vol. abs/1606.01460. arXiv:1606.01460
  • K. et al. (2016) He K., Zhang X., Ren S., and Sun J. 2016. Deep residual learning for image recognition. In CVPR.
  • K. et al. (2017) Yash K., Karishma S., and Vandit G. 2017. Human Detection for Night Surveillance using Adaptive Background Subtracted Image, Vol. abs/1709.09389. arXiv:1709.09389
  • L. et al. (2020) Kangning L., Shuhang G., Andres R., and Radu T. 2020. Unsupervised Multimodal Video-to-Video Translation via Self-Supervised Learning. In CoRR, Vol. abs/2004.06502. arXiv:2004.06502
  • M. et al. (2017) Heusel M., Ramsauer H., Unterthiner T., Nessler B., and Hochreiter S. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS.
  • M. et al. (2019) Kristan M., Matas J., Leonardis A., Felsberg M., Pflugfelder R., Kamarainen J., Cehovin Z., Drbohlav O., Lukezic A., Berg A., et al. 2019. The seventh visual object tracking vot2019 challenge results. In ICCV Workshops.
  • N. et al. (2019) Adam N., Abdelrahman E., David B., and David G. 2019. Unpaired Thermal to Visible Spectrum Transfer using Adversarial Training. In CoRR, Vol. abs/1904.02242. arXiv:1904.02242
  • P. et al. (2020) Taesung P., Alexei A., Zhang R., and Zhu J. 2020. Contrastive Learning for Unpaired Image-to-Image Translation. In ECCV.
  • S. et al. (2015) Hwang S., Park J., Kim N., Choi Y., and Kweon I. S. 2015. Multispectral Pedestrian Detection: Benchmark Dataset and Baselines. In CVPR.
  • S. et al. (2020) Li S., Xie B., Wu J., Zhao Y., Liu C., and Ding Z. 2020. Simultaneous Semantic Alignment Network for Heterogeneous Domain Adaptation. In ACM MM.
  • S. et al. (2019) Li S., Liu C., Xie B., Su L., Ding Z., and Huang G. 2019. Joint Adversarial Domain Adaptation. In ACM MM.
  • S. et al. (2021) Li S., Liu C., Lin Q., Wen Q., Su L., Huang G., and Ding Z. 2021. Deep Residual Correction Network for Partial Domain Adaptation. IEEE TPAMI.
  • S. et al. (2018a) Li S., Song S., Gao H., Ding Z., and Cheng W. 2018a. Domain Invariant and Class Discriminative Feature Learning for Visual Domain Adaptation. In IEEE TIP.
  • S. et al. (2018b) Liu S., John V., Blasch E., Liu Z., and Huang Y. 2018b. IR2VI: Enhanced Night Environmental Perception by Unsupervised Thermal Image Translation. In CVPR Workshops.
  • S. et al. (2017a) Patricia L. S., Angel D. S., and Boris X. V. 2017a. Infrared Image Colorization based on a Triplet DCGAN Architecture. In CVPR.
  • S. et al. (2017b) Patricia L. S., Angel D. S., and Boris X. V. 2017b. Learning to Colorize Infrared Images. In PAAMS.
  • T. et al. (2020) Chen T., Kornblith S., Norouzi M., and Hinton G. 2020. A Simple Framework for Contrastive Learning of Visual Representations. In ICML.
  • T. et al. (2018a) Sergey T., Ming Y. L., Xiao D. Y., and Jan K. 2018a. MoCoGAN: Decomposing Motion and Content for Video Generation. In CVPR.
  • T. et al. (2002) Welsh T., Ashikhmin M., and Mueller K. 2002. Transferring Color to Greyscale Images. In TOG.
  • T. et al. (2018b) Wang T., Liu M., Zhu J., Liu G., Andrew T., Jan K., and Bryan C. 2018b. Video-to-Video Synthesis. In NeurIPS.
  • T. et al. (2019) Wang T., Liu M., Andrew T., Liu G., Jan K., and Bryan C. 2019. Few-shot Video-to-Video Synthesis. In NeurIPS.
  • Toet (2003) Alexander Toet. 2003. Natural colour mapping for multiband nightvision imagery. In Information Fusion.
  • W. et al. (2018) Ting C. W., Ming Y. L., Jun Y. Z., Andrew T., Jan K., and Bryan C. 2018. High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs. In CVPR.
  • Y. et al. (2006) Qu Y., Wong T., and Heng P. 2006. Manga Colorization. In TOG.
  • Z. et al. (2018) He Z., Benjamin S. R., Shuowen H., Nathaniel J. S., and Vishal M. P. 2018. Synthesis of High-Quality Visible Faces from Polarimetric Thermal Faces using Generative Adversarial Networks. In CoRR, Vol. abs/1812.05155. arXiv:1812.05155
  • Z. et al. (2017) Jun Y. Z., Taesung P., Phillip I., and Alexei A. E. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV.
  • Z. et al. (2016) Zhou Z., Dong M., Xie X., and Gao Z. 2016. Fusion of infrared and visible images for night-vision context enhancement. In Appl Opt.