Autonomous driving is becoming a reality after almost four decades of incubation, and object detection using deep neural networks is a key element in this success. The autonomous vehicle has to offer broader access to mobility, and while doing so, the safety of the vehicle and its surroundings is the primary concern. SOTIF (Safety of the Intended Functionality) addresses safety violations that occur without a technical system failure, for example, failure to perceive an object in the environment, or fog occluding the vision. The autonomous vehicle should be capable of operating safely in such situations. The perception of the environment plays a vital role in the safety of the autonomous vehicle. Environmental perception is generally defined as awareness of, or knowledge about, the surroundings and the understanding of the situation through visual perception .
The sensors commonly used for perception in autonomous vehicles include lidar, RGB cameras, and radar. One of the essential aspects of perception is object detection, and all the aforementioned sensors are employed for it, yet each has its own drawbacks. Lidar gives a sparse 3D map of the environment, in which small objects like pedestrians and cyclists are hard to detect at a distance. The RGB camera performs poorly in unfavorable illumination conditions such as low lighting, sun glare, and glare from vehicle headlights. Radar has too low a spatial resolution to detect pedestrians accurately. There thus exists a gap in object detection under adverse lighting conditions . The inclusion of a thermal camera in the sensor suite fills these blind spots in environmental perception: a thermal camera is robust against illumination variation and can be deployed during both day and night. Object detection and classification are indispensable for visual perception, which provides a basis for perception in an autonomous vehicle.
However, the accuracy of object detection in thermal images has not yet attained state-of-the-art results compared to the visible spectrum. The aforementioned object detection algorithms depend on networks that have been trained on sizable RGB datasets such as ImageNet , PASCAL-VOC , and MS-COCO . There exists a comparative scarcity of such large-scale public datasets in the thermal domain. The two primary publicly available urban thermal imagery datasets are the FLIR ADAS dataset  and the KAIST Multi-Spectral dataset . The KAIST Multi-Spectral dataset only provides annotations for persons, while the FLIR ADAS dataset provides annotations for four classes. In order to overcome the absence of a large-scale labeled dataset, a domain adaptation technique for object detection in the thermal domain is presented here.
Numerous approaches for domain adaptation have been introduced, which aim to narrow the gap between a source and a target domain. Among many, generative adversarial networks (GANs)  and domain confusion  for feature adaptation are noteworthy. The prospects of domain adaptation in the data-starved thermal image domain motivate this study, which explores closing the gap between the visible and infrared spectra in the context of object detection. Domain adaptation is influenced by generative models, for instance, CycleGAN , which translates a single instance of the source domain to the target domain without transferring the style attributes to the target domain. Low-level visual cues have an implicit impact on the performance of object detection . Transferring these visual cues from the source domain to the target domain can therefore be beneficial for robust object detection in the target domain.
This work explores the translation of low-level features from a source domain (RGB) to a target domain (thermal) using domain adaptation to improve object detection in the target domain. Multi-style transfer is applied to transfer low-level features such as curvatures and edges from the source domain to the target domain. Deep learning-based object detection architectures that rely on classical backbones like VGG  and ResNet  are trained from scratch on the multi-style transferred images for robust object detection in the infrared spectrum (target domain). Moreover, we propose a cross-domain model transfer method for object detection in thermal images that supplements the domain adaptation: the object detection networks are first trained in the source domain (visible spectrum), and the trained models, referred to as cross-domain models, are then evaluated in the target domain (infrared spectrum) both with and without multi-style transferred images. The proposed techniques are evaluated on FLIR ADAS  and KAIST Multi-Spectral , and the PASCAL-VOC evaluation is used to determine the mean average precision of the detected objects.
Major contributions in this paper are highlighted below:
Improved object detection in the infrared spectrum (thermal images) by exploring the low-level features using style consistency. The proposed object detection framework outperformed existing benchmarks in terms of mean average precision.
Cross-domain model transfer paradigm not only enhances the object detection in the infrared spectrum (thermal images) but also provides an alternative yet effective method for labeling the unlabelled dataset.
The rest of the paper is organized as follows: Section II discusses the related literature. In Section III, the proposed methodology is discussed. Section IV focuses on experimentation and results. Section V shows the comparison and discussion about the proposed method. Section VI concludes the study.
II Related Work
II-A Object Detection
Human vision is robust in identifying objects under countless challenging conditions, but this is not a trivial task for an autonomous vehicle. The ultimate goal of object detection in images is to localize and identify all instances of the same or different objects present in the image. Significant work has been done on person detection in thermal images by taking into account the temperature sensed in the surroundings. Classical image processing techniques, like the thresholding used in , can also serve for detection. In , HOG features and local binary patterns are used to extract features from thermal images, and the features are used to train SVM classifiers. Deep neural networks have gained repute in object detection tasks on RGB images and are also used for object detection in thermal images : feature maps from multispectral images are extracted and fed to an object detector, i.e., Faster-RCNN or YOLO. In , multispectral images are augmented with their saliency maps to focus attention on pedestrians during the daytime; Faster-RCNN is trained for pedestrian detection and fine-tuned on the extracted feature maps. In , CycleGAN is used to generate thermal images from RGB images, removing the dependency on paired RGB and thermal images in the dataset; a variant of Faster-RCNN that uses both thermal and RGB images is employed to detect objects.
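A minimal illustration of the classical thresholding idea mentioned above (not any cited method's implementation; the threshold value and the flood-fill grouping are illustrative assumptions for a bright-is-hot thermal image):

```python
import numpy as np

def detect_hot_regions(thermal, threshold=200):
    """Return bounding boxes (x1, y1, x2, y2) of connected bright regions
    in a single-channel 8-bit thermal image, via simple thresholding and
    a flood fill. Illustrative only; real pipelines combine background
    subtraction and adaptive thresholds."""
    mask = thermal >= threshold
    visited = np.zeros_like(mask, dtype=bool)
    boxes = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                # depth-first flood fill to collect one connected component
                stack = [(sy, sx)]
                visited[sy, sx] = True
                ys, xs = [], []
                while stack:
                    y, x = stack.pop()
                    ys.append(y); xs.append(x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

# A toy 8x8 "thermal image" with one warm blob
img = np.zeros((8, 8), dtype=np.uint8)
img[2:5, 3:6] = 255
print(detect_hot_regions(img))  # [(3, 2, 5, 4)]
```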
II-B Domain Adaptation
Neural networks typically encounter performance degradation when they are tested on a different dataset, owing to environmental changes. In some cases, the dataset is not large enough to train and optimize a network. Techniques like domain adaptation therefore provide a crucial tool for the research community.
Domain adaptation for object detection includes techniques like the generation of synthetic data or the augmentation of real data to train the network. In , publicly available labeled object detection datasets from various domains and with multiple classes are merged; for example, the fashion dataset ModaNet is merged with the MS-COCO dataset by leveraging Faster-RCNN with domain adaptation. In , Faster-RCNN is used to perform image-level and instance-level adaptation. In , a two-step method is introduced: a detector is first optimized on low-level features and then developed into a robust classifier for high-level features by enforcing distance minimization between content and style images. In , a cross-domain semi-supervised learning structure is proposed that takes advantage of pseudo annotations to learn optimal representations of the target domain, using fine-grained domain transfer, progressive confidence-based annotation augmentation, and an annotation sampling strategy.
II-C Style Transfer
Image style transfer is a process that renders the content of an image from one domain with the style of an image from another domain. In , the use of feature representations from a convolutional neural network for style transfer between two images is demonstrated: the features obtained from a CNN are shown to be separable, and the feature representations of style and content images are manipulated to generate new, visually meaningful images. In , style transfer based on a single object is proposed, using patch permutation to train a GAN to learn a style and apply it to the content image. In , XGAN is introduced, consisting of an auto-encoder that captures the shared features of style and content images in an unsupervised way while learning the translation of the style onto the content image. In , the CoMatch layer is proposed, which learns second-order feature statistics and matches them with the style image; using the CoMatch layer, the authors develop the Multi-style Generative Network, which delivers real-time performance.
The dawn of deep learning has significantly improved the object detection paradigm by training neural network models on large datasets of the visible spectrum (RGB images). This study introduces a novel approach to improve object detection in thermal images via domain adaptation through style transfer. The scarcity or absence of labeled data poses a challenge to the research community, and labeling is not an easy task. The proposed approach can also be used to perform domain adaptation for other datasets, such as introducing foggy weather into the KITTI dataset or converting day images to night images.
III Proposed Method
This section presents the two proposed methods: object detection in thermal images through style consistency, and cross-domain model transfer for object detection in thermal images.
III-A Object Detection in Thermal Images through Style Consistency (ODSC)
The recent advances in deep learning have revolutionized object detection in the RGB image domain; in the infrared image domain, however, it still lacks accuracy. Deep neural networks for object detection compute features at both low and high levels . In this part of the proposed work, we argue that transferring the low-level features from the source domain (RGB) using domain adaptation increases object detection performance in the target domain (thermal).
For the domain adaptation between thermal images (content images) and visible spectrum (RGB) images (style images), we adopt the multi-style generative network (MSGNet) for style transfer . The ability to translate a specific style from the source to the target domain through the multi-style generative network provides an edge over CycleGAN , which generates one translated image of a specific style from a source image. MSGNet can translate multiple styles from the source domain to the target domain while closing the gap between the two domains. The network extracts low-level features such as texture and edges from the source domain while keeping the high-level features consistent in the target domain. Fig. 2(a) shows the framework for transferring the style from visible spectrum (RGB) images to thermal images.
The architecture of the MSGNet is shown in Fig. 2(a). The MSGNet takes both a content image and a style image as input, unlike earlier architectures such as Neural Style , which take only the content image and then generate the transferred image. The generator network is composed of an encoder based on a siamese network , which shares its weights with the transformation network through the CoMatch layer. The CoMatch layer matches the second-order feature statistics of the content image to those of the style image . For a given content image $x_c$ and style image $x_s$, the activation of the descriptive network at scale $i$, $\mathcal{F}^i(x_c) \in \mathbb{R}^{C_i \times H_i \times W_i}$, represents the content image, where $C_i$, $H_i$, and $W_i$ are the number of feature-map channels, the height, and the width of the feature map, respectively. The distribution of features in the style image is represented by the Gram matrix

$$\mathcal{G}\left(\mathcal{F}^i(x_s)\right) = \Phi\left(\mathcal{F}^i(x_s)\right)\,\Phi\left(\mathcal{F}^i(x_s)\right)^{\top}, \tag{1}$$

where $\Phi(\cdot)$ is a reshaping function that flattens a zero-centered feature map to $C_i \times (H_i W_i)$. In order to find a solution in the CoMatch layer that preserves the semantic content of the source image while matching the feature statistics of the target style, an iterative approximation approach is adopted by incorporating the computational cost into the training stage, as shown in Eq. 2:

$$\hat{\mathcal{Y}}^i = \Phi^{-1}\!\left[\Phi\!\left(\mathcal{F}^i(x_c)\right)^{\top} W\, \mathcal{G}\!\left(\mathcal{F}^i(x_s)\right)\right]^{\top}, \tag{2}$$

where $W$ is a learnable weight matrix.
The network is trained by minimizing a weighted combination of the content and style differences between the generator output and the targets for a given pre-trained loss network $\mathcal{F}$. The generator network is given by $G(x_c, x_s)$ and is parameterized by the weights $w_G$. Learning is done by sampling content images $x_c$ and style images $x_s$, and estimating the weights $\hat{w}_G$ of the generator that minimize the loss:

$$\hat{w}_G = \operatorname*{arg\,min}_{w_G} \; \mathbb{E}_{x_c, x_s}\Big[ \lambda_c \left\| \mathcal{F}^c(G(x_c, x_s)) - \mathcal{F}^c(x_c) \right\|_F^2 + \lambda_s \sum_{i=1}^{K} \left\| \mathcal{G}\left(\mathcal{F}^i(G(x_c, x_s))\right) - \mathcal{G}\left(\mathcal{F}^i(x_s)\right) \right\|_F^2 + \lambda_{TV}\, \ell_{TV}(G(x_c, x_s)) \Big], \tag{3}$$

where $\lambda_c$ and $\lambda_s$ are the regularization parameters for the content and style losses. The content image is considered at scale $c$, and the style image is considered at scales $i = 1, \dots, K$. The total variation regularization $\ell_{TV}$ is used for the smoothness of the generated image .
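The Gram-matrix statistics of Eq. 1 and the weighted content/style loss of Eq. 3 can be sketched in NumPy. This is a simplified illustration, not the MSGNet implementation: the normalization factor, the toy feature shapes, and the omission of the total-variation term are assumptions made for brevity:

```python
import numpy as np

def gram_matrix(feat):
    """Eq. 1: Gram matrix of a C x H x W feature map. Phi reshapes the
    map to C x (H*W); dividing by C*H*W keeps the second-order statistics
    comparable across scales (a common convention, assumed here)."""
    c, h, w = feat.shape
    phi = feat.reshape(c, h * w)
    return phi @ phi.T / (c * h * w)

def style_transfer_loss(f_out, f_content, feats_out, feats_style,
                        lam_c=1.0, lam_s=5.0):
    """Weighted content + style loss of Eq. 3 (total-variation term omitted).
    f_out / f_content: feature maps at the content scale;
    feats_out / feats_style: lists of feature maps at the style scales."""
    content = lam_c * np.sum((f_out - f_content) ** 2)
    style = lam_s * sum(
        np.sum((gram_matrix(a) - gram_matrix(b)) ** 2)
        for a, b in zip(feats_out, feats_style))
    return content + style

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 8, 8))
g = gram_matrix(f)
print(g.shape)  # (4, 4): one second-order statistic per channel pair
print(style_transfer_loss(f, f, [f], [f]))  # 0.0 when output matches both targets
```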
The proposed framework for object detection through style consistency is presented in Fig. 2. The network consists of two modules. The first is the multi-style network: it generates styled images by adapting the low-level feature transformation between the content image (a thermal image) and the style image (an RGB image). Compared with the original thermal images, the transferred style images carry the low-level features of the RGB domain, while the semantic shapes are preserved, keeping the high-level semantic features consistent. The second module comprises state-of-the-art detection architectures: Faster-RCNN  with a ResNet-101 backbone , and SSD-300 and SSD-512  with VGG16 , MobileNet , and EfficientNet  backbones. The networks are trained on the styled images, which bridge the gap between the visible spectrum and thermal images. The backbones in Faster-RCNN and SSD are initialized with pre-trained weights obtained from training on the ImageNet dataset . The trained detection networks are evaluated on styled images and on thermal images; the accuracy on thermal images shows the efficacy of object detection.
III-B Cross-Domain Model Transfer for Object Detection in Thermal Images (CDMT)
This part of the study exploits domain adaptation through style consistency by transferring low-level features from the thermal images (source domain) to the visible spectrum (RGB) target domain. For cross-domain model transfer, the source and target domains are swapped compared to the first part of the proposed work. Fig. 3 shows the overall framework for cross-domain model transfer for object detection in thermal images. The detection networks (Faster-RCNN with a ResNet-101 backbone; SSD-300 with VGG16, MobileNet, and EfficientNet backbones; SSD-512 with a VGG16 backbone) are trained on the visible spectrum (RGB images), and the trained models are then tested on thermal images. As the detection networks are trained on a different domain, in this case visible spectrum (RGB) images, their performance on thermal images will be marginal. The efficacy of thermal object detection can be increased by using style consistency: the MSGNet is trained with RGB images as the content images, and the style is borrowed from the thermal images. The style-transferred images are then passed to the same detection networks that were trained earlier on the visible spectrum (RGB) images, which improves object detection in thermal images. This cross-domain model transfer can be applied as a weak object detection module for an unlabeled dataset, as in our case for thermal images.
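The weak-annotation use of a cross-domain model amounts to keeping only confident, non-duplicate detections as pseudo-labels. A minimal sketch under assumed conventions (corner-format boxes, illustrative confidence and IoU thresholds; this is not the paper's exact procedure):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def pseudo_labels(detections, score_thresh=0.7, iou_thresh=0.5):
    """Keep confident detections and suppress duplicates (greedy NMS).
    detections: list of (box, score, class_name) from the cross-domain
    model run on style-transferred images; thresholds are illustrative."""
    kept = []
    for box, score, cls in sorted(detections, key=lambda d: -d[1]):
        if score < score_thresh:
            continue  # too uncertain to use as a pseudo-label
        if all(iou(box, k[0]) < iou_thresh for k in kept if k[2] == cls):
            kept.append((box, score, cls))
    return kept

dets = [((10, 10, 50, 50), 0.9, "person"),
        ((12, 11, 52, 49), 0.8, "person"),   # near-duplicate of the first
        ((60, 60, 90, 90), 0.4, "car")]      # below the confidence threshold
print(pseudo_labels(dets))  # only the first person box survives
```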
IV Experimentation and Results
We have used two thermal image datasets in this study: the FLIR ADAS dataset  and the KAIST Multi-Spectral dataset . The FLIR dataset consists of 9214 images with objects annotated with bounding boxes. The objects are classified into four categories, i.e., car, person, bicycle, and dog; however, the dog category has very few annotations, so it is not considered in this study. The images have a resolution of  and were obtained with a FLIR Tau2 camera. The dataset consists of day and night images: approximately  images were captured during the daytime and  during nighttime. The dataset contains both visible spectrum (RGB) and thermal images, but annotations are only available for thermal images. The visible spectrum (RGB) and thermal images are not paired, so the thermal annotations cannot be used with the visible spectrum (RGB) images; only thermal images with annotations are considered in this study. A standard split of the dataset into training and validation data is used during experimentation: the training set consists of  images, and the validation set contains  images.
The KAIST Multi-Spectral dataset contains images from both the visible spectrum (RGB) and the thermal spectrum, and each category includes both daytime and nighttime images. Annotations are only provided for the person class, with a bounding box. The visible spectrum (RGB) and thermal images are paired, which means the annotations for the thermal and the visible spectrum (RGB) images are the same. Images were captured using a FLIR A35 camera with a resolution of . We apply a standard split of the dataset, using  of the images for training and  for validation.
IV-B Object Detection in Thermal Images through Style Consistency
The evaluation of the proposed method is demonstrated using state-of-the-art object detection networks: Faster-RCNN, SSD-300, and SSD-512. These detection networks are implemented with different backbone architectures; for instance, ResNet-101 is used as the backbone in Faster-RCNN; VGG16, MobileNet, and EfficientNet are used with SSD-300; and SSD-512 uses VGG16 as its backbone. The data comprise the FLIR ADAS and KAIST Multi-Spectral datasets. The FLIR ADAS dataset is partitioned into training and testing sets using the standard split, while the KAIST dataset is used only for testing the detection networks. All the networks are implemented in PyTorch, with the data formatted in PASCAL-VOC format. The standard PASCAL-VOC evaluation criteria are used in this study.
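For reference, the PASCAL-VOC 2007 criterion averages interpolated precision over eleven recall levels. A compact sketch of that metric, assuming detections have already been matched against ground truth (the IoU matching step itself is omitted):

```python
import numpy as np

def voc_ap_11point(scores, is_true_positive, num_gt):
    """PASCAL-VOC 2007 11-point average precision.
    scores: detection confidences; is_true_positive: 1 if the detection
    matched an unclaimed ground-truth box (conventionally IoU >= 0.5),
    else 0; num_gt: number of ground-truth objects for this class."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / num_gt
    precision = tp_cum / (tp_cum + fp_cum)
    ap = 0.0
    for t in np.linspace(0.0, 1.0, 11):  # recall levels 0.0, 0.1, ..., 1.0
        prec_at_t = precision[recall >= t]
        ap += (prec_at_t.max() if prec_at_t.size else 0.0) / 11.0
    return ap

# Perfect detector: every detection is a TP and all objects are found
print(voc_ap_11point([0.9, 0.8, 0.7], [1, 1, 1], num_gt=3))  # 1.0
```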
A baseline approach is evaluated first for competitive analysis with the proposed methodology. The object detection networks are trained with their specific training configurations. For Faster-RCNN, a pre-trained ResNet-101 model is adapted and fine-tuned on the thermal image dataset. The network is trained using the Adam optimizer with a learning rate of  and a momentum of  for a total of  epochs.
The experimental evaluation with the SSD object detection network covers two architectures, i.e., SSD-300 and SSD-512. For SSD-300, the pre-trained models of the backbone networks are fine-tuned on the training data; the learning rates for the VGG16, MobileNet, and EfficientNet backbones are , , and , respectively. For the SSD-512 experiments, only a pre-trained VGG16 is used as the backbone, trained with a learning rate of . All the networks use a batch size of  on an Nvidia GTX  with  GB of memory.
IV-B2 Experimental Configuration
In the proposed methodology, the MSGNet is trained with thermal images serving as the content images, whereas the RGB images correspond to the style images. In training the MSGNet, VGG16 is used as the loss network, with pre-trained ImageNet weights. In the loss network, the balancing weights referred to in Eq. 3 are  and , respectively, while the total variation regularization for content and style is . In the experimental configuration, the size of the style image is iteratively updated, taking sizes of , respectively, and the content images are resized to . The Adam optimizer is used with a learning rate of . The MSGNet is trained for a total of  epochs with a batch size of  on an Nvidia GTX .
The trained MSGNet model generates the styled images shown in Fig. 1(a). These styled images are used to train the object detection networks, which are then evaluated on test data comprising thermal images. The training configuration of these detection networks is kept the same as the baseline configuration to allow a comparative analysis.
IV-B3 Experimental Results
For the evaluation of our experimental configuration, we tested the baseline and the proposed method on both thermal datasets (FLIR ADAS and KAIST Multi-Spectral). Table-I shows the mean average precision (mAP) scores of the baseline configuration for each detection network, i.e., the networks trained on thermal images and evaluated on thermal images. Table-II shows the quantitative results of the proposed method; the best model configuration is (SSD512+VGG16), and its mAP score is higher than that of the baseline configuration. On the contrary, detection networks trained on thermal images and tested on styled images show marginal efficacy, as shown in Table-III. Fig. 1(a) shows a qualitative result of object detection in thermal images through style consistency, and qualitative results of the best model configuration (SSD512+VGG16) are shown in Fig. 4.
[Tables I–III: per-class mAP (car, bicycle, person) and average mAP on the FLIR ADAS dataset, and person mAP on the KAIST Multi-Spectral dataset, for each network architecture and backbone.]
IV-C Cross-Domain Model Transfer for Object Detection in Thermal Images (CDMT)
The cross-domain model evaluation involves training the object detectors on the visible spectrum (RGB images). The KAIST dataset is used in this experiment, considering that labels are available for both domains. The object detection networks incorporated in this study include Faster-RCNN, SSD-300, and SSD-512, with model configurations similar to ODSC: Faster-RCNN uses a ResNet-101 backbone, SSD-300 is evaluated with VGG16, MobileNet, and EfficientNet backbones, and SSD-512 uses the VGG16 architecture. The learning rate for training all detection networks is , except for SSD-300 with the EfficientNet backbone, which is tested with . The batch size is  for all the aforementioned detection networks.
Similar to ODSC, MSGNet is used to generate styled images, as shown in Fig. 1(b). In this case, the content images consist of the visible domain (RGB images), and the style is transferred from the thermal images; this style transfer between the content images (RGB) and style images (thermal) increases the object detection efficacy. The hyper-parameters for the MSGNet are kept the same as in the experimental configuration for object detection in thermal images through style consistency. The detection networks are then tested on these generated styled images.
IV-C1 Experimental Results
The method's assessment is performed by evaluating the trained networks on styled and non-styled images. Table-IV shows the quantitative results of cross-domain model transfer: using cross-domain model transfer with style transfer increases the object detection efficacy compared to cross-domain model transfer without style transfer. In addition, cross-domain model transfer helps close the gap of annotating unlabelled datasets and serves as a weak detector for them. The qualitative evaluation of using style transfer for CDMT is shown in Fig. 1(b), and Fig. 4 shows the qualitative results of object detection using CDMT with style transfer.
[Table IV: person mAP on the KAIST Multi-Spectral dataset for CDMT with and without style transfer, per domain. Table V: comparison with state-of-the-art methods on the FLIR ADAS and KAIST Multi-Spectral datasets. Residual rows for SSD300+MobileNet V2: 0.5434, 0.2798, 0.3638, 0.3957, 0.7465; and 0.7012 (KAIST).]
To establish the efficacy of the proposed methodology, an extensive analysis of the proposed methods against state-of-the-art methods is conducted. Table-V shows a comparison between the proposed methods (ODSC and CDMT) and state-of-the-art methods. In our analysis, we considered methods in which the standard PASCAL-VOC evaluation is used on both the FLIR ADAS and KAIST Multi-Spectral datasets.
In addition to the overall mAP scores, per-class mAP scores of the proposed approach are also compared with state-of-the-art methods. Further, the comparison is not limited to methods that include domain adaptation; the object detection results are also compared with general object detection methods like PiCA-Net  and RNet , which use saliency maps for object detection. It is apparent from Table-V that in most categories, our proposed strategies perform better than the existing benchmarks.
In future work, we aim to improve the perception of autonomous vehicles under low lighting conditions. Lane detection and segmentation are essential aspects, which are challenging to do in the visible domain. Achieving these tasks in the thermal domain will contribute to the enhanced visual perception of autonomous vehicles.
This study focuses on improving object detection in low lighting conditions for autonomous vehicles. A new approach is introduced to perform domain adaptation from the visible domain to the thermal domain through style consistency. We utilized MSGNet to transfer low-level features from the source domain to the target domain while keeping high-level semantic features consistent. The proposed method outperforms the existing benchmark for object detection in the thermal domain. Moreover, the effectiveness of style transfer is strengthened by a cross-domain model transfer between the visible and thermal domains. The proposed approach applies to autonomous vehicles under low lighting conditions and to robots in general. Object detection is an integral aspect of perception, and failure to detect an object compromises the safety of the autonomous vehicle. Thermal images provide additional insight into the surroundings by exploring the infrared spectrum, and the proposed techniques improve object detection results in thermal images, with a positive impact on the safety of autonomous driving.
-  https://newsroom.intel.com/wp-content/uploads/sites/11/2019/07/Intel-Safety-First-for-Automated-Driving.pdf
-  Zube, E. H. (1999). Environmental perception. Encyclopedia of Earth science. Springer, New York, NY, 214-216.
-  Van Brummelen, J., O’Brien, M., Gruyer, D., & Najjaran, H. (2018). Autonomous vehicle perception: The technology of today and tomorrow. Transportation research part C: emerging technologies, 89, 384-406.
-  Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems (pp. 91-99).
-  Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016, October). Ssd: Single shot multibox detector. In European conference on computer vision (pp. 21-37). Springer, Cham.
-  Redmon, J., & Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.
-  Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009, June). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (pp. 248-255). IEEE.
-  Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., & Berg, A. C. (2015). Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3), 211-252.
-  Chen, X., Fang, H., Lin, T. Y., Vedantam, R., Gupta, S., Dollár, P., & Zitnick, C. L. (2015). Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
-  F. A. Group. Flir thermal dataset for algorithm training. https://www.flir.in/oem/adas/adas-dataset-form/, 2018.
-  Hwang, S., Park, J., Kim, N., Choi, Y., & So Kweon, I. (2015). Multispectral pedestrian detection: Benchmark dataset and baseline. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1037-1045).
-  Goodfellow, I. (2016). NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160.
-  Tzeng, E., Hoffman, J., Zhang, N., Saenko, K., & Darrell, T. (2014). Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474.
-  Chu, C., Zhmoginov, A., & Sandler, M. (2017). Cyclegan, a master of steganography. arXiv preprint arXiv:1712.02950.
-  Zhao, Z. Q., Zheng, P., Xu, S. T., & Wu, X. (2019). Object detection with deep learning: A review. IEEE transactions on neural networks and learning systems, 30(11), 3212-3232.
-  Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
-  He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
-  Soundrapandiyan, R., & Mouli, P. C. (2015). Adaptive pedestrian detection in infrared images using background subtraction and local thresholding. Procedia Computer Science, 58(1), 706-713.
-  Baek, J., Hong, S., Kim, J., & Kim, E. (2017). Efficient pedestrian detection at nighttime using a thermal camera. Sensors, 17(8), 1850.
-  Li, W., Zheng, D., Zhao, T., & Yang, M. (2012, May). An effective approach to pedestrian detection in thermal imagery. In 2012 8th International Conference on Natural Computation (pp. 325-329). IEEE.
-  Wang, W., Zhang, J., & Shen, C. (2010, September). Improved human detection and classification in thermal images. In 2010 IEEE International Conference on Image Processing (pp. 2313-2316). IEEE.
-  Liu, J., Zhang, S., Wang, S., & Metaxas, D. N. (2016). Multispectral deep neural networks for pedestrian detection. arXiv preprint arXiv:1611.02644.
-  Wagner, J., Fischer, V., Herman, M., & Behnke, S. (2016, April). Multispectral Pedestrian Detection using Deep Fusion Convolutional Neural Networks. In ESANN.
-  Vandersteegen, M., Van Beeck, K., & Goedemé, T. (2018, June). Real-time multispectral pedestrian detection with a single-pass deep neural network. In International Conference Image Analysis and Recognition (pp. 419-426). Springer, Cham.
-  Konig, D., Adam, M., Jarvers, C., Layher, G., Neumann, H., & Teutsch, M. (2017). Fully convolutional region proposal networks for multispectral person detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 49-56).
-  Ghose, D., Desai, S. M., Bhattacharya, S., Chakraborty, D., Fiterau, M., & Rahman, T. (2019). Pedestrian Detection in Thermal Images using Saliency Maps. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 0-0).
-  Devaguptapu, C., Akolekar, N., M Sharma, M., & N Balasubramanian, V. (2019). Borrow from Anywhere: Pseudo Multi-modal Object Detection in Thermal Imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 0-0).
-  Rame, A., Garreau, E., Ben-Younes, H., & Ollion, C. (2018). OMNIA Faster R-CNN: Detection in the wild through dataset merging and soft distillation. arXiv preprint arXiv:1812.02611.
-  Chen, Y., Li, W., Sakaridis, C., Dai, D., & Van Gool, L. (2018). Domain adaptive faster r-cnn for object detection in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3339-3348).
-  Rodriguez, A. L., & Mikolajczyk, K. (2019). Domain Adaptation for Object Detection via Style Consistency. arXiv preprint arXiv:1911.10033.
-  Yu, F., Wang, D., Chen, Y., Karianakis, N., Yu, P., Lymberopoulos, D., & Chen, X. (2019). Unsupervised Domain Adaptation for Object Detection via Cross-Domain Semi-Supervised Learning. arXiv preprint arXiv:1911.07158.
-  Gatys, L. A., Ecker, A. S., & Bethge, M. (2016). Image style transfer using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2414-2423).
-  Zheng, Z., & Liu, J. (2020). P²-GAN: Efficient Style Transfer Using Single Style Image. arXiv preprint arXiv:2001.07466.
-  Royer, A., Bousmalis, K., Gouws, S., Bertsch, F., Mosseri, I., Cole, F., & Murphy, K. (2020). Xgan: Unsupervised image-to-image translation for many-to-many mappings. In Domain Adaptation for Visual Understanding (pp. 33-49). Springer, Cham.
-  Zhang, H., & Dana, K. (2018). Multi-style generative network for real-time transfer. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 0-0).
-  Guo, Q., Feng, W., Zhou, C., Huang, R., Wan, L., & Wang, S. (2017). Learning dynamic siamese network for visual object tracking. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1763-1771).
-  Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018). Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4510-4520).
-  Tan, M., & Le, Q. V. (2019). Efficientnet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946.
-  Roy, P., Ghosh, S., Bhattacharya, S., & Pal, U. (2018). Effects of degradations on deep neural network architectures. arXiv preprint arXiv:1807.10108.
-  Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134).
-  Agrawal, K., & Subramanian, A. (2019). Enhancing Object Detection in Adverse Conditions using Thermal Imaging. arXiv preprint arXiv:1909.13551.