Recurrent Exposure Generation for Low-Light Face Detection

07/21/2020 ∙ by Jinxiu Liang, et al. ∙ South China University of Technology ∙ Stony Brook University ∙ Peking University

Face detection in low-light images is challenging due to limited photons and inevitable noise, which, to make the task even harder, are often spatially unevenly distributed. A natural solution is to borrow the idea of multi-exposure, which captures multiple shots to obtain well-exposed images under challenging conditions. High-quality implementation/approximation of multi-exposure from a single image is, however, nontrivial. Fortunately, as shown in this paper, such high quality is not necessary, since our task is face detection rather than image enhancement. Specifically, we propose a novel Recurrent Exposure Generation (REG) module, couple it seamlessly with a Multi-Exposure Detection (MED) module, and thus significantly improve face detection performance by effectively inhibiting non-uniform illumination and noise issues. REG progressively and efficiently produces intermediate images corresponding to various exposure settings, and these pseudo-exposures are then fused by MED to detect faces across different lighting conditions. The proposed method, named REGDet, is the first ‘detection-with-enhancement’ framework for low-light face detection. It not only encourages rich interaction and feature fusion across different illumination levels, but also enables effective end-to-end learning of the REG component to be better tailored for face detection. Moreover, as clearly shown in our experiments, REG can be flexibly coupled with different face detectors without extra low/normal-light image pairs for training. We tested REGDet on the DARK FACE low-light face benchmark with a thorough ablation study, where REGDet outperforms previous state-of-the-arts by a significant margin, with only negligible extra parameters.


I Introduction

As the cornerstone of many face-related systems, face detection has been attracting long-lasting research attention [52, 43, 21, 24, 54]. It has extensive applications in human-centric analysis such as person re-identification [8, 20] and human parsing [13]. Despite great progress in the recent decade, face detection remains challenging, particularly for images under bad illumination conditions. Images captured in low-light conditions typically have reduced brightness and compressed intensity contrast, which confuses feature extraction and hurts face detection performance. Poor illumination also causes annoying noise that further damages the structural information needed for face detection. To make things even worse, the illumination may vary greatly across different spatial regions of a single image. For systematic evaluation of face detection algorithms under adverse lighting conditions, a challenging benchmark named DARK FACE [55] has recently been constructed, which reveals clear performance degradation of state-of-the-art face detectors. For example, DSFD [27] produces an mAP of 15.3%, in sharp contrast to above 90% on the hard subset of the popular WIDER FACE [54] benchmark. The dramatic performance degeneration of modern face detectors on the DARK FACE dataset clearly shows that detecting faces under low-light conditions remains extremely challenging, which is the main focus of this paper.

(a) Low-light image (b) KinD [58] (c) LIME [15] (d) Ours
Fig. 1: Detection results of DSFD [27] on a low-light image (a) and its enhanced versions using KinD [58] (b) and LIME [15] (c). Green and red boxes indicate true positives and missed targets, respectively. It can be seen that the improvement brought by lighting enhancement is very limited. By contrast, our result in (d) (plotted on the same image as (c) for better visibility) shows clear advantages.

Naturally, one may seek help from low-light image enhancement as a preprocessing step, as evidenced by the experiments in [55]. However, as illustrated in Fig. 1 (b-c), there is still large room for improvement. One reason is that image enhancement aims to improve the visual/perceptual quality of the entire image, which is not fully aligned with the goal of face detection. For example, the smoothing operations used to enhance noisy images could compromise the feature discriminability that is critical for detection. This suggests a close integration between the enhancement and detection components, and points to an end-to-end ‘detection-with-enhancement’ solution.

Another reason is that the illumination in the original image may vary greatly across regions. Consequently, a single light-enhanced image can hardly handle facial regions under different lighting conditions well in terms of detection. This suggests a multiple-enhancement strategy and brings our attention to the multi-exposure technique: when it is difficult to obtain a well-exposed image with a single shot, the technique takes multiple shots with varying camera settings, and the resulting multi-exposure images are then fused for light enhancement. Intuitively, we may likewise generate multi-exposure images and then detect faces from them to cover different exposure conditions. However, automatically deriving high-quality multi-exposure images from a single image is nontrivial [47], let alone from a low-light one. Fortunately, such high quality is not required for face detection; it is the mechanism for capturing information at different exposures that matters.

Driven by the above motivations, we propose a novel end-to-end low-light face detection algorithm named REGDet. REGDet contains two sequentially connected modules, a Recurrent Exposure Generation (REG) module and a Multi-Exposure Detection (MED) module. From an input image, REG generates a sequence of pseudo-exposures to loosely mimic the effect of the highly non-linear in-camera multi-exposure process. This is done by assembling a set of ConvGRUs marching in two directions: one direction progresses recurrently through the degrees of exposure, while the other guides encoder-decoder structures to produce exposure-compensated images. These pseudo-exposures are then fed into MED, which adapts generic face detectors so as to smoothly fuse ‘multi-exposure’ information across the different pseudo-exposures. With the two modules working in collaboration, REGDet not only encourages rich interaction and feature fusion across different illumination levels, but also enables end-to-end learning of effective low-light processing tailored for face detection. Moreover, as shown in our experiments, REG can be flexibly coupled with different face detectors without extra low/normal-light image pairs. We tested REGDet on the DARK FACE low-light face benchmark with a thorough ablation study, in which REGDet outperforms previous state-of-the-arts by a significant margin, with only negligible extra parameters.

To summarize, we make the following contributions:

  • The first end-to-end ‘detection-with-enhancement’ solution, REGDet, for face detection under poor lighting conditions,

  • A novel and lightweight recurrent exposure generation module to tackle the non-uniform darkness issue,

  • A flexible framework compatible to existing face detectors,

  • New state-of-the-art performance on the publicly available benchmark.

II Related Work

The focus in this paper is on developing a learning solution for low-light face detection. In the following we describe previous studies from three aspects: low-light image enhancement, low-light face detection, and gated recurrent networks.

II-A Low-Light Image Enhancement

Low-light image enhancement has recently been a popular topic for improving the perceptual quality of images. Early solutions often rely on local statistics or intensity mapping, e.g., histogram equalization [2] and gamma correction [9]. Later solutions are often based on the Retinex theory [25], which models an image as a combination of a reflectance map, reflecting the physical characteristics of scene objects, and a spatially smooth illumination map. Solutions thus developed focus on resolving the ambiguity between illumination and reflectance by imposing certain priors on a variational model based on empirical observations (e.g., [46, 10, 15, 11, 28]). More recently, deep learning-based solutions have further boosted image enhancement quality and often produce impressive results for enhancing low-light images (e.g., [45, 48, 50, 58]). However, the performance gain, when applied to low-light face detection, is still far from saturated [55]. As discussed in the previous section, this is partly due to their goal differing from that of face detection, the difficulty of dealing with uneven illumination inside a single image, and weak collaboration with the face detection module.

The most related work to ours in low-light image enhancement is the multi-exposure fusion-based method BIMEF [56]. BIMEF first synthesizes a brighter image by a Brightness Transform Function (BTF) with fixed camera parameters, and then blends it with the original low-light image into a better one. Our method shares the idea of generating multi-exposure images, but is driven by a very different goal, i.e., face detection. Consequently, our model is learned end-to-end for that goal. Moreover, BIMEF does not consider the inevitable noise in low-light images and does not leverage the powerful data-driven modeling capacity of deep learning.

II-B Low-Light Face Detection

With the advent of large-scale face detection datasets [21, 24, 54] and the proliferation of deep learning technologies [12, 37, 31, 30], face detection in unconstrained environments (a.k.a. ‘in the wild’) has made remarkable progress [35, 17, 19, 57, 40, 42, 36, 27]. Most recent technological developments have focused on robustness to geometric variance. Typical geometric distortions include scale variation, deformation, occlusion and so on. For scale variation, researchers have proposed many effective strategies based on the idea of multi-scale analysis: designing image pyramids with different image scales [19], designing a pre-defined set of anchor boxes with different sizes and aspect ratios [22, ming2019group, 36], detecting at different layers of the network [35, 57], and so on. The deformable part-based model improves deformation invariance by decomposing the task of face detection into detecting different facial parts [53]. The idea of face calibration is explored to obtain deformation invariance in [40]. Spatial context aggregation is a modern strategy for obtaining invariant features. Existing context aggregation techniques include enlarging the receptive field by dilated convolution [6], multi-layer fusion [41] and top-down feature fusion [27, 42].

Low-light face detection has long been attracting research attention. In the era of hand-crafted features, enduring efforts were made to understand and handle the non-uniform illumination issue [16, 51, 26]. Recently, there has been increasing interest in data-driven approaches for face detection on low-quality images such as low-resolution and low-light images [59, 34, 55]. Illumination variation is known to be a major challenge for modern face detection algorithms [1, 59]. Pioneering approaches preprocess images by intensity mapping such as the logarithmic transform [1] and the gamma transform [39]. Photometric normalization is another commonly adopted method that counteracts varying lighting conditions in hand-crafted feature [51, 5] and deep learning-based methods [59, 31]. Hand-crafted feature based methods derive illumination invariance from various priors such as image differences or gradients [1, 16], while deep learning-based methods use random photometric distortions as augmentation to implicitly enhance illumination invariance [57, 42, 27]. Despite these studies, face detection in extremely adverse lighting conditions remains under-explored, due partly to the lack of high-quality labeled data. Addressing this issue, Yang et al. present a large manually labeled low-light face detection dataset, DARK FACE, and show that existing face detectors perform poorly on the task [55]. Their baseline experiments further show that, despite the outstanding success achieved to date, even the best well-trained face detectors remain far from ideal when the images are simply pre-processed using existing low-light enhancement methods [55]. Our work is thus motivated by and evaluated on this benchmark, and clearly outperforms previous arts.

II-C Gated Recurrent Networks

Gated recurrent networks are the most related to ours from the learning aspect. The gated recurrent unit (GRU) is a gating mechanism that adaptively controls how much each unit remembers or forgets for sequence modeling [7]. It was first proposed for and applied to the task of machine translation. ConvGRU [3] replaces the fully-connected layers in the GRU with convolution operations to model correlations in image sequences. The design of the REG module is greatly inspired by [29]. However, the learning of the REG module is performed with the proposed pseudo-supervised pre-training strategy and the implicit guidance of a follow-up detection module instead of ground-truth data. Moreover, instead of predicting a rain streak layer by residual learning, REG directly learns to generate various pseudo-exposures.

III The Proposed Method

Fig. 2: The main framework of the proposed REGDet for low-light face detection.

As shown in Fig. 2, the proposed REGDet involves two main modules, the Recurrent Exposure Generation module (REG) and the Multi-Exposure Detection module (MED). To loosely mimic the complex and highly non-linear in-camera multi-exposure process, REG generates progressively brighter images while encoding historical regional information. These pseudo-exposures are then fed into MED to produce face bounding boxes. The two modules are coupled together to form an end-to-end framework.

III-A The Recurrent Exposure Generation Module

To progressively generate pseudo-exposures from a low-light input image, a natural solution is to generate the next image with a neural network conditioned on the previous one. However, as there exists non-uniform darkness in low-light images, such a strategy could lead to locally over-smoothed or over-exposed regions, and consequently hurt the face detection task that relies heavily on discriminative details.

To address the above issue, the proposed Recurrent Exposure Generation (REG) module leverages historically generated images to maintain critical region details in a Recurrent Neural Network (RNN) framework. Starting from the input image x_0 and an initial hidden state h_0, REG recurrently generates intermediate pseudo-exposures x_1, …, x_T, formulated as

(1)  x_t = D(E(x_{t−1}, h_{t−1}; θ_E); θ_D),  t = 1, …, T

where E and D denote the encoder and the decoder of the proposed module, respectively, with corresponding parameters θ_E and θ_D. The encoder, consisting of four cascaded convolutional recurrent layers, is responsible for transforming the input image into feature maps of multiple scales (layers), while the decoder, consisting of two convolutional layers, learns to decode the feature maps back to images, as shown in Fig. 2.

At stage t, the hidden state h_t = {h_t^(l)} collects the feature maps of the encoder layers, where h_t^(l) denotes the feature map from the l-th layer. Initialized by h_0^(l) = 0, the feature maps are produced by our recurrent exposure generation unit (REGU) as

(2)  h_t^(l) = REGU(h_t^(l−1), h_{t−1}^(l))

In particular, REGU is designed based on the Convolutional Gated Recurrent Unit (ConvGRU) [3] for performance and memory considerations, as shown in the right part of Fig. 2. An REGU in the l-th layer can be described by the following equations:

(3)  z_t^(l) = σ(CA(W_z ⊛ h_t^(l−1) + U_z ∗ h_{t−1}^(l)))
(4)  r_t^(l) = σ(CA(W_r ⊛ h_t^(l−1) + U_r ∗ h_{t−1}^(l)))
(5)  n_t^(l) = CA(W_n ⊛ h_t^(l−1) + U_n ∗ (r_t^(l) ⊙ h_{t−1}^(l)))
(6)  h̃_t^(l) = φ(n_t^(l))
(7)  h_t^(l) = (1 − z_t^(l)) ⊙ h_{t−1}^(l) + z_t^(l) ⊙ h̃_t^(l)

where z_t^(l) and r_t^(l) are the update and reset gates, respectively, which decide the degree to which the unit updates or resets its historical encoding information, σ is the sigmoid function, ⊙ denotes the Hadamard product, ∗ denotes a regular convolution operator and ⊛ a dilated one, with filters W and U for dilated and regular convolution respectively. φ denotes the leaky ReLU [32] activation function

(8)  φ(x) = max(x, 0) + α · min(x, 0)

where α denotes the negative slope. Given a feature map F, the channel-wise attention (CA) [44] can be computed as

(9)  CA(F) = F ⊗ σ(w ∗ g(F))

where g is channel-wise global average pooling, w denotes a 1D convolution kernel with kernel size 3, and ⊗ denotes channel-wise multiplication between the feature map and the obtained channel weighting vector.

REGU has several extensions compared with the standard ConvGRU. First, an important component in our REGU is the channel-wise attention, which is integrated in each unit before the activation except for the last one. As in other vision tasks [44], this efficient mechanism enables appropriate cross-channel interaction inside a feature map and therefore helps aggregate spatially global information and recalibrate the feature map at each step. Second, REGU uses leaky ReLU [32] as the activation function to alleviate the ‘dying ReLU’ issue, i.e., neurons that fall on the flat zero-slope side stop being updated. Third, to tackle the issue of unevenly distributed darkness, increasing dilation rates are used in the successive convolutional layers of the encoder to obtain progressively larger receptive fields while maintaining a small parameter cost.
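To make the unit concrete, the REGU described by Eqs. (3)-(7) can be sketched as a ConvGRU cell with dilated input convolutions and channel-wise attention. The following is a minimal PyTorch sketch under stated assumptions: channel sizes, the dilation rate, and the leaky-ReLU slope are illustrative, and the reset gate is applied after the hidden-state convolution, a common ConvGRU formulation that may differ in detail from the paper's exact design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel-wise attention: global average pooling, a 1D conv of kernel size 3,
    then channel-wise re-weighting of the feature map (Eq. (9) style)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=3, padding=1, bias=False)

    def forward(self, x):                         # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                    # global average pooling -> (B, C)
        w = torch.sigmoid(self.conv(w.unsqueeze(1))).squeeze(1)
        return x * w.unsqueeze(-1).unsqueeze(-1)  # recalibrate each channel

class REGU(nn.Module):
    """ConvGRU-style cell: dilated convs (W) on the layer input, regular convs (U)
    on the previous hidden state, channel attention before each activation."""
    def __init__(self, in_ch, hid_ch, dilation=2):
        super().__init__()
        self.w = nn.Conv2d(in_ch, 3 * hid_ch, 3, padding=dilation, dilation=dilation)
        self.u = nn.Conv2d(hid_ch, 3 * hid_ch, 3, padding=1)
        self.ca = ChannelAttention()
        self.act = nn.LeakyReLU(0.2)              # negative slope is an assumption

    def forward(self, x, h_prev):
        wx = self.w(x).chunk(3, dim=1)            # W_z, W_r, W_n branches
        uh = self.u(h_prev).chunk(3, dim=1)       # U_z, U_r, U_n branches
        z = torch.sigmoid(self.ca(wx[0] + uh[0]))         # update gate
        r = torch.sigmoid(self.ca(wx[1] + uh[1]))         # reset gate
        n = self.act(self.ca(wx[2] + r * uh[2]))          # candidate state
        return (1 - z) * h_prev + z * n                   # new hidden state
```

Stacking four such cells with increasing dilation, plus a small convolutional decoder, would give one step of the recurrence in Eq. (2).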

III-B Pseudo-Supervised Pre-Training of the REG Module

To enable good diversity and complementarity of the generated sequence, we adopt a pseudo-supervised pre-training strategy that leverages pseudo ground-truth images corresponding to different exposures. The pseudo ground-truth images are generated from the input low-light image by a camera response model [56] that characterises the relationship between pixel values and exposure ratios. A camera response model contains a camera response function (CRF), i.e., the nonlinear function relating camera sensor irradiance to image pixel values, and a brightness transform function (BTF), i.e., the mapping function between two images of the same scene captured with different exposures [38]. Once the parameters of the CRF corresponding to a specific camera are known, the parameters of the BTF can be estimated by solving the comparametric equation [33]. However, the camera information needed to estimate accurate camera response models is rarely available in the publicly available low-light face detection dataset. Therefore, we adopt the camera response model proposed in [56], which characterizes a general relationship between pixel values and exposure ratios when no camera information is available. Its BTF takes the form of Beta-Gamma Correction

(10)  g(P, k) = βP^γ = e^{b(1−k^a)} · P^{(k^a)}

where P and k denote the pixel value and the exposure ratio respectively, and the camera parameters a and b are estimated by fitting the 201 real-world camera response curves in the DoRF database [14]. Specifically, the exposure ratios used for the pseudo ground truths are successive powers of an empirically chosen base ratio.
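The Beta-Gamma BTF of Eq. (10) is straightforward to apply directly. The sketch below uses the fitted values a = −0.3293 and b = 1.1258 commonly quoted for the model of [56]; treat them, the base ratio, and the number of steps as assumptions, with pixel values normalized to [0, 1].

```python
import numpy as np

def btf(p, k, a=-0.3293, b=1.1258):
    """Beta-Gamma brightness transform: g(P, k) = exp(b * (1 - k^a)) * P^(k^a).

    p: pixel values in [0, 1]; k: exposure ratio (k > 1 brightens).
    a, b: camera parameters fitted on the DoRF response curves (assumed values).
    """
    gamma = k ** a
    beta = np.exp(b * (1.0 - gamma))
    return np.clip(beta * np.power(p, gamma), 0.0, 1.0)

def pseudo_exposures(img, base_ratio=2.0, steps=4):
    """Pseudo ground truths: successive powers of a base exposure ratio."""
    return [btf(img, base_ratio ** t) for t in range(1, steps + 1)]
```

Each call brightens dark pixels more aggressively than bright ones, which is why a sequence of such images covers different lighting conditions.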

The REG module is then guided to generate images corresponding to diversified exposures. To measure the distance between a generated image x_t and the pseudo ground truth y_t produced with the corresponding exposure ratio, we use a combination of the ℓ1 norm and the Structural Similarity (SSIM) index [49], which reflects the difference in luminance and contrast, formulated as

(11)  L(x_t, y_t) = ‖x_t − y_t‖_1 + (1 − SSIM(x_t, y_t))

and the SSIM measure is defined as

(12)  SSIM(x, y) = (1/N) Σ_p [(2 μ_x(p) μ_y(p) + c_1)(2 σ_xy(p) + c_2)] / [(μ_x(p)² + μ_y(p)² + c_1)(σ_x(p)² + σ_y(p)² + c_2)]

where the means μ and (co)deviations σ are computed by applying a Gaussian filter at pixel p of images x and y, c_1 and c_2 are small stabilizing constants, and N denotes the number of pixels in the image. Following common practice in image enhancement, we randomly crop patches followed by random mirroring, resizing and rotation for data augmentation.
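A simplified version of the pre-training loss in Eqs. (11)-(12) can be sketched as follows. For brevity this uses global image statistics rather than the Gaussian-windowed local statistics of the full SSIM, so it is a rough stand-in, not the exact measure; the constants c_1, c_2 follow the conventional SSIM defaults.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM using global means/variances; the full index instead
    averages Gaussian-windowed local statistics over all N pixels."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def pretrain_loss(generated, pseudo_gt):
    """Mean L1 distance plus structural dissimilarity to the pseudo ground truth."""
    l1 = np.abs(generated - pseudo_gt).mean()
    return l1 + (1.0 - ssim_global(generated, pseudo_gt))
```

The loss is zero when the generated image matches the pseudo ground truth exactly and grows with both pixel-wise error and structural distortion.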

As the pseudo ground-truth images contain inevitable noise and artifacts, we adopt an early stopping strategy to prevent over-fitting to them. Specifically, the pre-training stops when the average PSNR of the generated images compared to the pseudo ground truths reaches around 25 dB. We use the training split of the DARK FACE dataset to perform the pseudo-supervised pre-training. As our method does not rely on any external low/normal-light image pairs, it enjoys good scalability and can be fairly compared with other approaches. This pre-training practice can be expected to speed up the joint training process and boost the final detection performance; the performance comparison can be found in Table III.

To understand and verify the complementarity of the sequence generated by the REG module, we visualize it in Fig. 3. The detection results on the generated images using the pre-trained DSFD detector, shown in the left four images, exhibit good complementarity between the different generated images, indicating that REGDet learns to generate a complementary, detection-oriented image sequence that benefits subsequent face detection.

(a)–(d) Detection on individual generated images (e) REGDet
Fig. 3: The left four panels are detection results on intermediate images generated by the REG module, which show complementarity among the generated images, supporting the effectiveness of our proposed REG module. Note these ‘images’ are linearly normalized for visualization so that the minimum (maximum) value corresponds to 0 (255). The rightmost column shows our final detection result, where more faces (14 out of 15) are successfully localized, showing the superiority of the proposed MED module. Green and red boxes indicate true positives and missed targets, respectively. The zoomed-in versions on the second row are enhanced by LIME [15] for better visibility.

III-C The Multi-Exposure Detection Module

Once the multiple pseudo-exposures have been created by the REG module, a straightforward strategy is to separately feed them into a face detector and fuse the resulting detected bounding boxes, i.e., late fusion. This is however computationally expensive as it requires multiple runs of the detection process. Instead, we introduce a resource-efficient strategy that fuses the low-level features extracted from the pseudo-exposures at an early stage of detection. Such a strategy not only takes advantage of available pre-trained face detectors, but also allows collaboration among the different pseudo-exposures.

Specifically, the proposed Multi-Exposure Detection (MED) module integrates a generic pre-trained CNN-based face detection algorithm, termed the base detector, with early fusion. We tailor its first convolutional layer using the filter inflation technique [4] in the channel dimension so that the detector can simultaneously process multiple images and perform adaptive integration, as shown in Fig. 2. The weights of this convolutional layer are bootstrapped from the first layer of the pre-trained base detector by duplicating the pre-trained filter weights once per pseudo-exposure and normalizing them, which helps maintain discriminative and complementary regional clues across the different pseudo-exposures. Formally, MED simultaneously predicts the confidences and the bounding box coordinates of the anchor boxes indexed by i as

(13)

where denotes the number of anchors, measures how confident the -th anchor is a face and is a vector representing the parameterized coordinates of the predicted face boxes. Following [31], we use weighted sum of the confidence loss and the localization loss:

(14)

where denotes the number of positive anchors, is used to balance the two loss terms, the ground-truth label represents whether the -th anchor is positive (a.k.a., is a face), and is the ground-truth bounding box assigned to the anchor. The confidence (classification) loss is a two-class (face or background) softmax loss,

(15)

where the in the second term means that the localization loss is only calculated for those positive anchors. Following [12], the localization loss is defined as the smooth loss, i.e., the distance between the predicted box and the ground-truth measured by Huber norm

(16)

where the Huber norm is defined as

(17)

The Huber norm is less sensitive to outliers than the

norm.
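The filter-inflation bootstrap of the MED module described above can be sketched as follows: the pre-trained first-layer weights are tiled along the input-channel axis once per pseudo-exposure and normalized, so that feeding identical copies of an image initially reproduces the base detector's first-layer response. The function name and shapes are illustrative, not taken from any particular implementation.

```python
import numpy as np

def inflate_first_conv(weight, t):
    """Inflate conv weights (out_ch, in_ch, k, k) to accept t stacked images:
    duplicate along the input-channel axis and divide by t to preserve scale."""
    return np.tile(weight, (1, t, 1, 1)) / t

# Sanity check of the scale-preserving property at one spatial position:
# a conv output at a single location is just a dot product of kernel and patch.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3, 3, 3))      # pretrained: 4 filters over 3 channels
patch = rng.standard_normal((3, 3, 3))     # one input patch
w_inf = inflate_first_conv(w, t=5)
patch_stacked = np.tile(patch, (5, 1, 1))  # the same image repeated 5 times
orig = (w * patch).sum(axis=(1, 2, 3))
inflated = (w_inf * patch_stacked).sum(axis=(1, 2, 3))
assert np.allclose(orig, inflated)
```

During joint training, the duplicated filter groups are free to diverge, which is what lets the layer weight the pseudo-exposures adaptively.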

Being an end-to-end system, REGDet allows joint optimization of the REG and MED modules during learning. Intuitively, MED provides facial location information to guide REG such that the facial regions can be specially enhanced for the purpose of detection. An example detection result is shown in the rightmost column of Fig. 3, where REGDet successfully localizes far more faces than simply applying the base detector to the different intermediate images.

It is worth noting that MED is flexible in choosing the base detector. In our experiments, several state-of-the-art algorithms such as DSFD [27], PyramidBox [42] and S3FD [57] all demonstrate clear performance improvement when embedded in REGDet.

(a) DSFD [27]
(b) PyramidBox [42]
(c) S3FD [57]
Fig. 4: Quantitative results of different approaches. All approaches other than ours have both a pre-trained version (marked with subscript ‘P’) and a finetuned version (marked with subscript ‘F’).

Ours | KinD [58] | DeepUPE [45] | RRM [28] | RetinexNet [50] | GLADNet [48] | LIME [15] | BIMEF [56] | SRIE [11] | MF [10]
Fig. 5: Qualitative comparison of different methods. For better visualization, we draw the results of REGDet on images enhanced by LIME [15]. Red arrows indicate faces that are challenging for the other methods to detect. Please zoom in for a better view.

IV Experiments

IV-A Setup

IV-A1 Dataset and metric

We adopt the recently constructed DARK FACE dataset [55] as our testbed. It contains 6,000 real-world images captured under extremely low-light environments, with a total of 43,849 manually annotated faces. The annotated faces exhibit large scale variance, and each image typically contains multiple faces. Since the original test split [55] is withheld, we randomly leave out 1,000 images as our test set. Following prior work [27, 42, 57], face detection performance is measured by mean Average Precision (mAP), calculated as the area under the precision-recall curve.
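Since mAP here is the area under the precision-recall curve, a minimal sketch of the per-class computation might look like the following; the greedy IoU matching of detections to ground-truth boxes is assumed to have been done already, and the function name is illustrative.

```python
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    """Area under the precision-recall curve for one class.

    scores: confidence of each detection; is_true_positive: whether each
    detection matched an unclaimed ground-truth box; num_gt: total faces.
    """
    order = np.argsort(-np.asarray(scores))          # rank by confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    recall = cum_tp / num_gt
    precision = cum_tp / (np.arange(len(tp)) + 1)
    # step-wise integration of precision over recall
    recall_prev = np.concatenate(([0.0], recall[:-1]))
    return float(np.sum((recall - recall_prev) * precision))
```

A perfect detector that finds every face with no false positives reaches an AP of 1; missed faces cap the achievable recall and thus the area.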

IV-A2 Base detectors

To benefit from publicly available pre-trained models, we build REGDet upon base detectors pre-trained on the largest existing dataset for face detection in the wild, i.e., the WIDER FACE [54] dataset. DSFD [27], PyramidBox [42] and S3FD [57], state-of-the-art methods that achieve remarkable performance on WIDER FACE, are chosen as the base detectors. The weights of REGDet are initialized and bootstrapped as described in Sections III-B and III-C. For reproducibility, we adopt public implementations of the base detectors with the VGG-16 backbone network. As photometric augmentation is common practice in modern detectors, data augmentation related to exposure levels is used in the compared baselines.

IV-A3 Implementation details

Following [31, 27, 42], the batch size is 16 and multiple GPUs are used for speedup. The initial learning rate is 0.001, which is decreased by a factor of 0.1 at the 64-th and 96-th epochs. We adopt Adam [23] to train the REG module and SGD with a momentum of 0.9 to train the MED module. Face anchor boxes whose IoU with a ground-truth annotated face exceeds a fixed threshold are labeled as positive anchors, and the ratio between sampled negative and positive anchors is fixed at each training iteration. For our proposed REGDet, we remove random photometric distortion from the data augmentation as the model already involves an enhancement module. Note that we keep the photometric augmentation for the baselines following [27, 42, 57] for a fair comparison. During inference, the image is first rescaled to a fixed resolution. Non-maximum suppression is applied with a Jaccard overlap of 0.3 and the top 750 bounding boxes are kept.

IV-A4 Compared methods

We compare REGDet against various face detectors with illumination pre-processing by state-of-the-art low-light image enhancement approaches, including MF [10], SRIE [11], LIME [15], BIMEF [56], GLADNet [48], RetinexNet [50], RRM [28], DeepUPE [45], and KinD [58]. Baseline denotes the plain detector fed with the original low-light images as input. We evaluate all the aforementioned approaches in both pre-trained and finetuned versions. The pre-trained version directly uses the weights pre-trained on WIDER FACE and performs inference on DARK FACE images pre-processed using the aforementioned methods. The finetuned version further finetunes the model using the pre-processed DARK FACE images as input. As the performances reported in [55] are for the withheld test split with only the pre-trained versions, we re-train the aforementioned methods on our train split and fairly compare them on our 1000-image test split.

(a) BEG (b) CEG (c) SEG (d) REG
Fig. 6: Alternative pseudo-exposure generation modules.
Method               DSFD [27]            PyramidBox [42]      S3FD [57]
                     #Params   mAP (%)    #Params   mAP (%)    #Params   mAP (%)
Finetuned Baseline   47.49M    71.42      54.53M    72.48      21.42M    54.99
Ours-BEG             +0.09M    75.60      +0.09M    76.11      +0.09M    56.78
Ours-CEG             +0.09M    74.07      +0.09M    73.16      +0.09M    54.30
Ours-SEG             +0.03M    73.52      +0.03M    74.19      +0.03M    52.82
Ours-REG             +0.12M    76.94      +0.12M    77.69      +0.12M    57.95
TABLE I: Results of ablation study on the proposed REG module.

IV-B Result Analysis

The quantitative comparison of different approaches is shown in Fig. 4. The three pre-trained baseline detectors achieve 32.69%, 31.00%, and 26.58% mAP respectively. The relative performance disparity among the three detectors is consistent with their performance on WIDER FACE; the former two detectors perform much better as they apply modern context aggregation techniques such as feature enhancement using two shots [27] or context-assisted pyramid anchors [42]. Compared with the pre-trained detectors, all finetuned ones achieve much higher performance, indicating that the existing large-scale WIDER FACE dataset, dominated by normal-light images, carries a very different lighting distribution from the DARK FACE dataset. Compared with the original image input, many of the image enhancement approaches improve face detection performance. Specifically, the pre-trained detectors equipped with pre-processing using MF, LIME, BIMEF, DeepUPE, GLADNet, and SRIE outperform the baseline with respectively 4.87%, 5.08%, 5.33%, 4.60%, and 0.45% performance gain when using DSFD as the base detector. In the finetuned setting, MF, LIME, BIMEF, and DeepUPE improve the baseline with respectively 1.12%, 0.94%, 1.75%, and 1.05% performance gain when using DSFD as the base detector. While these image enhancement methods show clear advantages over the baseline in the pre-trained setting, they achieve smaller performance gains in the finetuned setting, as finetuning already greatly reduces the data distribution discrepancy between normal-light and low-light images. However, it is noticeable that KinD, RetinexNet, and RRM cause performance degeneration to different extents, due probably to severe over-smoothing (KinD, RRM) or artifacts (RetinexNet) on regions containing faces (also evidenced by Fig. 5). Among the enhancement methods, the multi-exposure fusion method BIMEF performs best. The relatively good performance of BIMEF also implies that it is promising to adaptively generate pseudo-exposures for different light conditions, which is consistent with what we explore in this paper. In particular, compared with the finetuned baseline on original images equipped with photometric data augmentation [18], the proposed REGDet achieves much higher detection mAP, with about 5.5%, 5.2%, and 3.0% performance gain using the three base detectors respectively, at negligible extra parameter cost (as shown in Table I). The consistently higher detection rates of REGDet demonstrate its superiority over existing state-of-the-arts.

The qualitative results of different approaches on sampled images from DARK FACE are shown in Fig. 5. While large and clear faces can also be detected by the other methods, our method successfully finds many more dark and tiny faces, as pointed out by the red arrows in the presented images. Although such faces are hard to detect even for human eyes, the proposed method is able to localize most of them and clearly outperforms the other approaches.

IV-C Ablation Studies

IV-C1 Effectiveness of the recurrent architecture

To examine the effectiveness of the proposed recurrent component, we design several variant generation modules, as illustrated in Fig. 6:

  • Branched Exposure Generation (BEG)  This module generates the different exposures in parallel from the original image through a multi-branch module;

  • Chained Exposure Generation (CEG)  Each image is generated at one stage of the module with non-shared weights, conditioned on the image generated at the previous stage;

  • RecurSive Exposure Generation (SEG)  Similar to CEG, except that the module shares parameters across stages;

  • Recurrent Exposure Generation (REG)  The module used in our proposed method. Different from the aforementioned modules, REG encodes historical feature maps to alleviate the potentially unrecoverable information loss caused by over-exposure and over-smoothing at the middle stages. A detailed description of the REG module is provided in Sec. III-A.
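The distinguishing feature of REG among these variants is the hidden state that carries history across stages. The sketch below illustrates this with a GRU-style recurrence over pseudo-exposures; it is a deliberately simplified, element-wise stand-in (the actual REG extends ConvGRU with learned convolutional gates), and the scalar gate weights and residual output head are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reg_generate(image, T=4, seed=0):
    """Sketch of Recurrent Exposure Generation: a GRU-style recurrence
    emits T pseudo-exposures while a hidden state encodes the history
    of earlier stages (unlike CEG/SEG, which only chain the images)."""
    rng = np.random.default_rng(seed)
    h = np.zeros_like(image)                 # memory of past stages
    wz, wr, wh = rng.normal(size=3) * 0.1    # toy scalar "gate weights"
    exposures = []
    for _ in range(T):
        # Each stage re-reads the original image plus the hidden state,
        # rather than consuming only the previous stage's output.
        z = sigmoid(wz * (image + h))        # update gate
        r = sigmoid(wr * (image + h))        # reset gate
        h_tilde = np.tanh(wh * (image + r * h))
        h = (1 - z) * h + z * h_tilde
        exposures.append(np.clip(image + h, 0.0, 1.0))
    return exposures
```

Because the state h is never overwritten wholesale, information lost to over-exposure in one intermediate image can still survive in the memory, which is the intuition behind REG's advantage over CEG and SEG.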

We replace REG with BEG, CEG, and SEG respectively and conduct experiments on DARK FACE. As shown in Table I, all the designed lightweight modules introduce only a few extra parameters, and almost all of them achieve improved detection results. BEG constructs multiple branches from the original image to generate different pseudo-exposures in parallel and clearly boosts performance, indicating that the MED module does provide important guidance to the enhancement module for generating complementary information across pseudo-exposures, as illustrated in Sec. III-C. In contrast, CEG and SEG, which generate each pseudo-exposure conditioned on the previous one with non-shared and shared weights, respectively, produce less stable performance gains, probably due to unrecoverable information loss caused by over-exposure and over-smoothing at the middle stages. This suggests that proper modeling of the multi-exposure generation is key to good face detection performance. When using S3FD as the base detector, Ours-CEG and Ours-SEG achieve only comparable or even decreased detection rates. We conjecture that this inferior performance arises because S3FD has far fewer parameters, and consequently much smaller model capacity, than DSFD and PyramidBox, resulting in insufficient guidance for the generation modules. By encoding historical feature maps, the proposed REG alleviates this issue and performs the best, indicating that the relationship between adjacent pseudo-exposures can be well modeled by the memory maintained in the recurrent structure of REG. The consistent performance boost also demonstrates the scalability of REG across different base face detectors.

IV-C2 Comparison of different numbers of stages

We provide an experimental comparison of different numbers of stages (defined in Section III-A), using PyramidBox as the base detector, to support our choice. The results are shown in Table II. Using a single stage is equivalent to a special case of REGDet, namely a single-exposure ‘detection-with-enhancement’ model. It achieves much higher detection performance (mAP) than the finetuned baseline (72.48%), but an inferior result to the multi-exposure variants. On one hand, this supports the claim that jointly performing enhancement and detection is superior to plain detection for low-light face detection. On the other hand, it verifies the superiority of the proposed multi-exposure framework over the single-exposure one. An intermediate number of stages achieves the best performance, indicating that it is a good practice. Setting the number of stages higher (e.g., 6) does not bring further performance gain, meaning that generating too many pseudo-exposures is unnecessary; a large number of stages also incurs heavier computational cost.

Numbers of stages mAP (%)
76.73
77.16
77.69
77.63
TABLE II: Results of the ablation study on the number of stages.

IV-C3 Effectiveness of the pseudo-supervised pre-training

We compare the performance of REGDet with and without the proposed pseudo-supervised pre-training of the REG module, using PyramidBox as the base detector, in Table III. When the REG module is randomly initialized (w/o pre-training), the resulting REGDet still attains good performance, with an mAP of 76.36%. Equipped with the proposed pseudo-supervised pre-training technique, our method achieves the best performance, with a 1.33% absolute gain. As illustrated in Sec. III-B, with the designed pseudo-supervised pre-training the REG module is supervised and guided to generate images corresponding to diversified exposures. The collaborative and complementary information across different pseudo-exposures can potentially be learnt through such pre-training, which we believe is the key to the improved performance.

Setting mAP (%)
Ours w/o pre-training 76.36
Ours w/ pre-training 77.69
TABLE III: Ablation of the pseudo-supervised pre-training process for the REG module.
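The idea behind pseudo-supervised pre-training is that diverse exposure targets can be synthesized from a single image, so no low/normal-light pairs are needed. The sketch below illustrates one simple way to do this with gamma curves and an L1 reconstruction loss; the specific gamma values, function names, and loss choice are illustrative assumptions, not the paper's exact recipe (see Sec. III-B for that).

```python
import numpy as np

def pseudo_exposure_targets(image, gammas=(0.4, 0.7, 1.0, 1.5)):
    """Synthesize a bracket of pseudo-'exposure' targets from a single
    image via gamma curves: gamma < 1 brightens, gamma > 1 darkens.
    The generation module can then be pre-trained to reproduce them."""
    image = np.clip(image, 0.0, 1.0)
    return [image ** g for g in gammas]

def l1_pretrain_loss(generated, targets):
    """Mean absolute error between each generated stage and its
    pseudo-target, averaged over stages."""
    return float(np.mean([np.abs(g - t).mean()
                          for g, t in zip(generated, targets)]))
```

Minimizing such a loss before end-to-end training pushes the generation module toward producing a spread of exposure levels rather than collapsing all stages onto one brightness.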

V Conclusion

In this work, we proposed an end-to-end face detection framework, named REGDet, for low-light input images. The key component of REGDet is a novel Recurrent Exposure Generation (REG) module that extends ConvGRU to mimic the multi-exposure technique used in photography. The REG module is sequentially connected with a Multi-Exposure Detection (MED) module for detecting faces in images captured under poor lighting conditions. The proposed method significantly outperforms previous algorithms on a public low-light face dataset, with detailed ablation studies further validating the effectiveness of the proposed learning components.

References

  • [1] Y. Adini, Y. Moses, and S. Ullman. Face recognition: The problem of compensating for changes in illumination direction. IEEE Trans. Pattern Anal. Mach. Intell., 19(7):721–732, July 1997.
  • [2] T. Arici, S. Dikbas, and Y. Altunbasak. A Histogram Modification Framework and Its Application for Image Contrast Enhancement. IEEE Trans. Image Process., 18(9):1921–1935, Sept. 2009.
  • [3] N. Ballas, L. Yao, C. Pal, and A. Courville. Delving Deeper into Convolutional Networks for Learning Video Representations. In Proc. Int. Conf. Learn. Represent. (ICLR), Mar. 2016.
  • [4] J. Carreira and A. Zisserman. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 6299–6308, 2017.
  • [5] W. Chen, M. J. Er, and S. Wu. Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain. IEEE Trans. Syst. Man Cybern. B Cybern., 36(2):458–466, 2006.
  • [6] C. Chi, S. Zhang, J. Xing, Z. Lei, S. Z. Li, and X. Zou. Selective Refinement Network for High Performance Face Detection. In Proc. AAAI Conf. Artif. Intell. (AAAI), Sept. 2019.
  • [7] K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN Encoder–Decoder for statistical machine translation. In Proc. Conf. Empirical Methods Natural Lang. Process. (EMNLP), pages 1724–1734, 2014.
  • [8] G. Ding, S. Zhang, S. Khan, Z. Tang, J. Zhang, and F. Porikli. Feature Affinity-Based Pseudo Labeling for Semi-Supervised Person Re-Identification. IEEE Trans. Multimedia, 21(11):2891–2902, Nov. 2019.
  • [9] H. Farid. Blind inverse gamma correction. IEEE Trans. Image Process., 10(10):1428–1433, Oct. 2001.
  • [10] X. Fu, D. Zeng, Y. Huang, Y. Liao, X. Ding, and J. Paisley. A fusion-based enhancing method for weakly illuminated images. Signal Process., 129:82–96, Dec. 2016.
  • [11] X. Fu, D. Zeng, Y. Huang, X.-P. Zhang, and X. Ding. A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 2782–2790, 2016.
  • [12] R. Girshick. Fast R-CNN. In Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pages 1440–1448, 2015.
  • [13] K. Gong, X. Liang, Y. Li, Y. Chen, M. Yang, and L. Lin. Instance-level Human Parsing via Part Grouping Network. In Proc. IEEE Eur. Conf. Comput. Vis. (ECCV), pages 770–785, 2018.
  • [14] M. Grossberg and S. Nayar. Modeling the space of camera response functions. IEEE Trans. Pattern Anal. Mach. Intell., 26(10):1272–1282, Oct. 2004.
  • [15] X. Guo, Y. Li, and H. Ling. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process., 26(2):982–993, Feb. 2017.
  • [16] H. Han, S. Shan, X. Chen, and W. Gao. A comparative study on illumination preprocessing in face recognition. Pattern Recognit., 46(6):1691–1699, June 2013.
  • [17] Z. Hao, Y. Liu, H. Qin, J. Yan, X. Li, and X. Hu. Scale-Aware Face Detection. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 6186–6195, 2017.
  • [18] A. G. Howard. Some Improvements on Deep Convolutional Neural Network Based Image Classification. arXiv:1312.5402 [cs], Dec. 2013.
  • [19] P. Hu and D. Ramanan. Finding Tiny Faces. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 1522–1530, July 2017.
  • [20] Y. Huang, J. Xu, Q. Wu, Z. Zheng, Z. Zhang, and J. Zhang. Multi-Pseudo Regularized Label for Generated Data in Person Re-Identification. IEEE Trans. Image Process., 28(3):1391–1403, Mar. 2019.
  • [21] V. Jain and E. Learned-Miller. FDDB: A Benchmark for Face Detection in Unconstrained Settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst, 2010.
  • [22] H. Jiang and E. Learned-Miller. Face Detection with the Faster R-CNN. In Proc. IEEE Int. Conf. Automat. Face Gesture Recognit. (FG), pages 650–657, May 2017.
  • [23] D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR), 2015.
  • [24] B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, M. Burge, and A. K. Jain. Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 1931–1939, June 2015.
  • [25] E. H. Land. The Retinex Theory of Color Vision. Sci. Amer., 237(6):108–129, 1977.
  • [26] K. Levi and Y. Weiss. Learning object detection from a small number of examples: The importance of good features. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), volume 2, pages II–II, June 2004.
  • [27] J. Li, Y. Wang, C. Wang, Y. Tai, J. Qian, J. Yang, C. Wang, J. Li, and F. Huang. DSFD: Dual Shot Face Detector. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), 2019.
  • [28] M. Li, J. Liu, W. Yang, X. Sun, and Z. Guo. Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model. IEEE Trans. Image Process., 27(6):2828–2841, June 2018.
  • [29] X. Li, J. Wu, Z. Lin, H. Liu, and H. Zha. Recurrent Squeeze-and-Excitation Context Aggregation Net for Single Image Deraining. In Proc. IEEE Eur. Conf. Comput. Vis. (ECCV), pages 262–277, 2018.
  • [30] T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature Pyramid Networks for Object Detection. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 2117–2125, 2017.
  • [31] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single Shot MultiBox Detector. In Proc. IEEE Eur. Conf. Comput. Vis. (ECCV), pages 21–37, 2016.
  • [32] A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In Proc. Int. Conf. Mac. Learn. (ICML), volume 30, page 3, 2013.
  • [33] S. Mann. Comparametric equations with practical applications in quantigraphic image processing. IEEE Trans. Image Process., 9(8):1389–1406, Aug. 2000.
  • [34] H. Nada, V. A. Sindagi, H. Zhang, and V. M. Patel. Pushing the Limits of Unconstrained Face Detection: A Challenge Dataset and Baseline Results. In Proc. IEEE Int. Conf. Biometrics Theory Appl. Syst. (BTAS), pages 1–10, Oct. 2018.
  • [35] M. Najibi, P. Samangouei, R. Chellappa, and L. S. Davis. SSH: Single Stage Headless Face Detector. In Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pages 4875–4884, 2017.
  • [36] M. Najibi, B. Singh, and L. S. Davis. FA-RPN: Floating Region Proposals for Face Detection. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 7723–7732, 2019.
  • [37] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Proc. Annu. Conf. Neural Inf. Process. Syst. (NeurIPS), pages 91–99, 2015.
  • [38] Y. Ren, Z. Ying, T. H. Li, and G. Li. LECARM: Low-Light Image Enhancement Using the Camera Response Model. IEEE Trans. Circuits Syst. Video Technol., 29(4):968–981, Apr. 2019.
  • [39] S. Shan, W. Gao, B. Cao, and D. Zhao. Illumination normalization for robust face recognition against varying lighting conditions. In Proc. IEEE Int. Conf. Comput. Vis. Workshop (ICCVW), AMFG ’03, page 157, USA, 2003. IEEE Computer Society.
  • [40] X. Shi, S. Shan, M. Kan, S. Wu, and X. Chen. Real-Time Rotation-Invariant Face Detection With Progressive Calibration Networks. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 2295–2303, 2018.
  • [41] X. Sun, P. Wu, and S. C. H. Hoi. Face detection using deep learning: An improved faster RCNN approach. Neurocomputing, 299:42–50, July 2018.
  • [42] X. Tang, D. K. Du, Z. He, and J. Liu. PyramidBox: A Context-Assisted Single Shot Face Detector. In Proc. IEEE Eur. Conf. Comput. Vis. (ECCV), pages 812–828, 2018.
  • [43] P. Viola and M. J. Jones. Robust real-time face detection. Int. J. Comput. Vision, 57(2):137–154, 2004.
  • [44] Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, and Q. Hu. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. arXiv:1910.03151 [cs], Oct. 2019.
  • [45] R. Wang, Q. Zhang, C.-W. Fu, X. Shen, W.-S. Zheng, and J. Jia. Underexposed Photo Enhancement using Deep Illumination Estimation. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), page 9, 2019.
  • [46] S. Wang, J. Zheng, H.-M. Hu, and B. Li. Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images. IEEE Trans. Image Process., 22(9):3538–3548, Sept. 2013.
  • [47] T.-H. Wang, C.-W. Chiu, W.-C. Wu, J.-W. Wang, C.-Y. Lin, C.-T. Chiu, and J.-J. Liou. Pseudo-Multiple-Exposure-Based Tone Fusion With Local Region Adjustment. IEEE Trans. Multimedia, 17(4):470–484, Apr. 2015.
  • [48] W. Wang, C. Wei, W. Yang, and J. Liu. GLADNet: Low-Light Enhancement Network with Global Awareness. In Proc. IEEE Int. Conf. Automat. Face Gesture Recognit. (FG), pages 751–755, May 2018.
  • [49] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process., 13(4):600–612, Apr. 2004.
  • [50] C. Wei, W. Wang, W. Yang, and J. Liu. Deep Retinex Decomposition for Low-Light Enhancement. In Br. Mac. Vis. Conf. (BMVC), Aug. 2018.
  • [51] S. Yan, S. Shan, X. Chen, and W. Gao. Locally Assembled Binary (LAB) feature with feature-centric cascade for fast and accurate face detection. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 1–7, June 2008.
  • [52] M.-H. Yang, D. Kriegman, and N. Ahuja. Detecting faces in images: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 24(1):34–58, Jan. 2002.
  • [53] S. Yang, P. Luo, C.-C. Loy, and X. Tang. From Facial Parts Responses to Face Detection: A Deep Learning Approach. In Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pages 3676–3684, 2015.
  • [54] S. Yang, P. Luo, C.-C. Loy, and X. Tang. WIDER FACE: A face detection benchmark. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), June 2016.
  • [55] W. Yang, Y. Yuan, W. Ren, J. Liu, W. J. Scheirer, and Z. Wang. UG2+ Track 2: A Collective Benchmark Effort for Evaluating and Advancing Image Understanding in Poor Visibility Environments. arXiv:1904.04474 [cs], Apr. 2019.
  • [56] Z. Ying, G. Li, and W. Gao. A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement. arXiv:1711.00591 [cs], Nov. 2017.
  • [57] S. Zhang, X. Zhu, Z. Lei, H. Shi, X. Wang, and S. Z. Li. S3FD: Single Shot Scale-Invariant Face Detector. In Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pages 192–201, Oct. 2017.
  • [58] Y. Zhang, J. Zhang, and X. Guo. Kindling the Darkness: A Practical Low-light Image Enhancer. In Proc. ACM Int. Conf. Multimedia (ACM MM), May 2019.
  • [59] Y. Zhou, D. Liu, and T. Huang. Survey of Face Detection on Low-Quality Images. In Proc. IEEE Int. Conf. Automat. Face Gesture Recognit. (FG), pages 769–773, May 2018.