Image brightness is determined by the irradiance of the scene and the camera settings. Images captured under insufficient irradiance usually suffer from multiple degradations, such as poor visibility, low contrast, and unexpected noise. Unfortunately, such images are unavoidable in daily life, especially at night or indoors. They have poor visual quality and are difficult to use as input for other vision tasks such as object detection and recognition. Although exposure controls (e.g. ISO, shutter speed, flash) can raise image brightness, they often introduce other unwanted artifacts (e.g. noise, blurring, over-saturation). Hence, restoring normally exposed, high-quality images from low-light images plays an important role in practical applications.
In recent years, a number of methods [1, 2, 3] have been proposed for restoring low-light images, but there is still much room for improvement. Fig. 1 illustrates the limitations of existing methods. Methods such as [2, 3] fail in extreme low-light environments because they focus on increasing contrast and brightness while ignoring the influence of severe noise, which results in noise amplification. Although the networks proposed in [4, 5] can generate high-quality images by suppressing noise and increasing brightness simultaneously, color artifacts remain that seriously degrade visual quality.
To solve the problems of noise amplification and color artifacts in previous works, we propose an end-to-end attention-based network for processing low-light images. We observe that a larger receptive field is key to reducing color artifacts in low-light images, since a wider range of information can guide the network toward what the content should be in the presence of severe noise. Instead of simply stacking residual layers to enlarge the receptive field, we design a new block, called the mixed attention block, to effectively fuse local and global features in our network. The proposed mixed attention block (i.e. channel attention and spatial attention modules) effectively suppresses undesired chromatic aberration and noise. The channel attention module guides the network to refine redundant color features. The spatial attention module focuses on denoising by exploiting the non-local correlation in the image. In addition, considering that the max pooling layer often causes information loss, we employ a new pooling strategy, called the Inverted Shuffle Layer (ISL), to adaptively select important information from feature maps. Overall, our contributions are threefold:
We propose an end-to-end network based on the mixed attention block to obtain normally exposed, high-quality, noise-free images. The mixed attention block includes spatial attention and channel attention, taking both local and global information into account.
To reduce information loss and select useful features flexibly, we employ the ISL in place of the max pooling layer.
We evaluate our method on the SID dataset, and the experimental results demonstrate that our method achieves state-of-the-art performance.
2 Related Work
Obtaining visually friendly color images from raw images usually requires denoising, enhancement, etc. Therefore, we review the literature on these two tasks here.
2.1 Image Denoising
Image denoising is a fundamental task in computer vision. To recover clean images from noisy ones, a variety of image priors have been proposed over the years, including sparsity, low rank, and self-similarity. Many prior-based methods have made great progress in image denoising, such as BM3D and WNNM. With the development of deep learning, researchers have applied deep neural networks to image denoising in recent years. For example, DnCNN trained a deep residual network and used batch normalization layers to speed up the training process. CBDNet considered the noise introduced throughout the imaging pipeline and adopted a U-net architecture with a sub-network that estimates noise levels to improve denoising performance.
2.2 Low-light Image Enhancement
Image enhancement has a long history in low-level vision. Histogram equalization and gamma correction are simple but classical methods that are usually applied to increase image contrast. However, these methods only adjust the contrast of the whole image globally, ignoring local brightness differences.
With the rapid development of deep learning [10, 11, 12], many methods have been built on the Retinex theory, which assumes an image can be decomposed into illumination and reflectance components. Shen et al. regarded multi-scale Retinex as a feedforward convolutional neural network and proposed MSR-net to learn a mapping between dark and bright images. RetinexNet is another method inspired by the Retinex theory; it first decomposes the image into illumination and reflectance components with a decomposition subnetwork and then performs image enhancement. Wang et al. proposed using a network to estimate the illumination component of the image, incorporating an illumination constraint and a prior into the loss function. Chen et al. considered low-light image enhancement directly from raw data; they created the SID dataset and obtained enhanced images in sRGB space with a trained U-net. Maharjan et al. proposed residual learning to improve enhancement performance while decreasing the number of parameters and alleviating chromatic aberration.
3 Proposed Method
Low-light image enhancement from the camera sensor is a complicated problem. The traditional Image Signal Processing (ISP) pipeline consists of a series of subtasks (e.g. white balance, demosaicking, denoising, etc.); however, it results in high noise levels and less vivid colors. To mitigate these problems, we propose a novel Attention-based Low-light image Enhancement Network (ALEN) that directly converts a raw image to a color image.
For a given low-light raw image x, the estimated color image ŷ can be defined as:

ŷ = F(x; θ),   (1)

where F denotes the proposed network and θ represents the parameters of the network. We present the details of the architecture and loss function in the following.
3.1 Network Architecture
As shown in Fig. 2, our network takes the form of a U-net, which has demonstrated its advantages in many tasks. The proposed network consists of an encoder, a decoder, and skip connections. In the raw-data preprocessing layer, inspired by multi-exposure, the image is multiplied by different amplification factors to form the input. In the encoder, we employ several mixed attention blocks and ISLs to obtain semantic features. The mixed attention block, which contains channel attention and spatial attention, is beneficial for removing the color artifacts caused by multiplying the amplification ratio. The decoder adopts multiple convolutional layers and transposed convolutions to restore high-resolution features from the semantic features. Finally, the estimated image is obtained by applying a pixel shuffle operation to a 12-channel feature map.
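As a concrete illustration of the final step, the depth-to-space (pixel shuffle) rearrangement can be sketched in numpy. The shapes below are hypothetical, but they match the description above: a 12-channel feature map becomes a 3-channel image at twice the resolution.

```python
import numpy as np

def pixel_shuffle(feat, r=2):
    """Depth-to-space: (C*r*r, H, W) -> (C, H*r, W*r)."""
    c2, h, w = feat.shape
    c = c2 // (r * r)
    # Split the channel axis into (C, r, r), then interleave into space.
    x = feat.reshape(c, r, r, h, w)       # (C, r, r, H, W)
    x = x.transpose(0, 3, 1, 4, 2)        # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)     # (C, H*r, W*r)

feat = np.arange(12 * 4 * 4, dtype=np.float32).reshape(12, 4, 4)
img = pixel_shuffle(feat, r=2)
print(img.shape)  # (3, 8, 8)
```

Each group of four channels contributes one 2×2 spatial neighborhood, so no information is discarded by the rearrangement itself.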
Channel Attention Block Since the feature maps of different channels contribute differently to the subsequent network, we introduce a channel attention strategy to extract more useful information for low-light image enhancement. The structure of the channel attention block is illustrated in Fig. 3(c). Like the SE block, we first use a global average pooling layer to obtain a representative value for each channel. We then use two fully connected layers with activation functions to learn the relative importance of the channels: the first fully connected layer is followed by a ReLU activation, and the second by a sigmoid. The proposed channel attention block is well motivated; it not only removes harmful input features but also highlights favorable color information.
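The squeeze-and-excitation computation described above can be sketched in numpy. The random matrices stand in for the learned fully connected layers, and the reduction ratio of 4 is an illustrative assumption, not a detail from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map.

    w1: (C//r, C) first FC layer (followed by ReLU)
    w2: (C, C//r) second FC layer (followed by sigmoid)
    """
    squeeze = feat.mean(axis=(1, 2))         # global average pooling -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)   # FC + ReLU
    scale = sigmoid(w2 @ hidden)             # FC + sigmoid -> per-channel weights
    return feat * scale[:, None, None]       # reweight each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8, 8))
w1 = rng.standard_normal((4, 16)) * 0.1      # hypothetical reduction ratio r = 4
w2 = rng.standard_normal((16, 4)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (16, 8, 8)
```

Because the sigmoid output lies in (0, 1), each channel is attenuated according to its learned importance rather than hard-selected.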
Non-local Operation A large receptive field is critical in many computer vision tasks, but a convolution operation only processes a local spatial neighborhood, so capturing long-range information from a feature map requires repeated local operations, which is computationally inefficient. The non-local operation is one way to tackle this issue. It can be expressed as

y_i = (1 / C(x)) ∑_j f(x_i, x_j) g(x_j),   (2)

where i is the query position and j enumerates all possible positions in the feature map; g(x_j) denotes a transform of x_j; f(x_i, x_j) represents the relationship between x_i and x_j; and C(x) is a normalization factor given by the sum of all f(x_i, x_j), i.e. C(x) = ∑_j f(x_i, x_j).
The non-local operation aims to strengthen the feature representation capability of the network. Equation 2 shows that its result is a weighted sum of the features at all positions; thus, the non-local operation gives the network a global receptive field by aggregating information from different positions in the feature map. This is significant for correcting color and suppressing noise, especially in low-light environments, since a wider range of information can guide the network toward what the content should be in a severely degraded scene. The structure is illustrated in Fig. 3(a). In practice, the non-local operation occupies a large amount of memory and computation; we therefore downsample the features to reduce the computational complexity.
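A minimal numpy sketch of Equation 2 follows. It uses the embedded-Gaussian form with identity embeddings in place of the learned transforms f and g, purely for illustration: each output position is a softmax-weighted sum of the features at all positions.

```python
import numpy as np

def non_local(feat):
    """Non-local operation on a (C, H, W) feature map (Eq. 2),
    with identity embeddings for simplicity."""
    c, h, w = feat.shape
    x = feat.reshape(c, h * w).T              # (N, C), N = H*W positions
    sim = x @ x.T                             # f(x_i, x_j): pairwise similarity
    sim -= sim.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)   # C(x): per-row normalization
    y = attn @ x                              # weighted sum over all positions j
    return y.T.reshape(c, h, w)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
out = non_local(feat)
print(out.shape)  # (8, 4, 4)
```

Since each output is a convex combination of features over the whole map, every position sees the entire image in a single operation, which is exactly the global receptive field the text argues for.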
Mixed Attention Block As discussed above, the channel attention block models the interdependence between channels, while the non-local operation aggregates information from different positions in a feature map. To obtain a better feature representation, we combine the two into a mixed attention block. Fig. 3(b) illustrates its structure. In this block, we first employ the non-local operation to obtain features with a wider range of spatial information. We then concatenate these features and feed them to the channel attention block to generate the final feature representation. With the mixed attention block, the network can make full use of information from different channels and positions in the feature map, producing a more flexible structure.
Inverted Shuffle Layer Pooling layers commonly appear in neural networks to reduce computation by shrinking feature sizes. However, pooling usually discards useful information in the forward pass, whether it is max pooling or average pooling. Inspired by the pixel shuffle operation, we propose a new pooling operation, named ISL, which consists of an inverted shuffle followed by a convolution. After the inverted shuffle, the spatial size of the feature map is halved and the number of channels quadruples. A convolution layer applied after the inverted shuffle selects useful information while compressing the number of channels. Overall, ISL not only reduces computation like a pooling layer but also allows the network to select features more flexibly.
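A numpy sketch of the ISL, assuming a 1×1 convolution for the channel-compression step (the paper does not state the kernel size, so that choice is an assumption here):

```python
import numpy as np

def inverted_shuffle(feat, r=2):
    """Space-to-depth: (C, H, W) -> (C*r*r, H//r, W//r), losslessly."""
    c, h, w = feat.shape
    x = feat.reshape(c, h // r, r, w // r, r)  # (C, H/r, r, W/r, r)
    x = x.transpose(0, 2, 4, 1, 3)             # (C, r, r, H/r, W/r)
    return x.reshape(c * r * r, h // r, w // r)

def isl(feat, w, r=2):
    """Inverted Shuffle Layer: inverted shuffle + (assumed) 1x1 conv that
    compresses the quadrupled channels, halving the spatial size like a
    pooling layer but without discarding values up front."""
    x = inverted_shuffle(feat, r)              # (C*4, H/2, W/2)
    c4, h2, w2 = x.shape
    # A 1x1 convolution is a matrix product over the channel axis.
    return (w @ x.reshape(c4, -1)).reshape(-1, h2, w2)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
w = rng.standard_normal((16, 32)) * 0.1        # compress 32 -> 16 channels
out = isl(feat, w)
print(out.shape)  # (16, 8, 8)
```

Unlike max pooling, the rearrangement step keeps every input value, and the learned convolution decides what to keep when the channels are compressed.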
3.2 Loss Function
In our network, we combine the L1 loss and the SSIM loss with a weight, as is common in image restoration methods. The loss function of our method can be expressed as

L = L_1 + λ L_SSIM,

where L_1 is the pixel-wise L1 loss, L_SSIM denotes the SSIM loss, and λ is the weight that balances the two terms. We fix λ during training.
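A small numpy sketch of this combined loss, under two stated simplifications: SSIM is computed globally over the whole image rather than with the usual sliding window, and λ = 0.5 is a placeholder (the value used in the paper is not given here).

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Global (single-window) SSIM between two images in [0, 1].
    A simplification of the usual windowed SSIM."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def loss(pred, target, lam=0.5):
    """L = L1 + lam * (1 - SSIM); lam is a placeholder weight."""
    l1 = np.abs(pred - target).mean()
    return l1 + lam * (1.0 - ssim_global(pred, target))

rng = np.random.default_rng(0)
target = rng.random((3, 32, 32))
assert loss(target, target) < 1e-9  # identical images -> zero loss
```

The L1 term penalizes per-pixel error while the SSIM term rewards structural similarity, which is why the pair is popular in restoration.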
4 Experiments

4.1 Dataset and Evaluation Metrics
We adopt the See-in-the-Dark (SID) dataset to evaluate the performance of our method. The SID dataset contains 5094 short-exposure images and 424 long-exposure images, all raw sensor data captured by a Sony α7S II and a Fujifilm X-T2 in extreme low-light environments. Each scene has a sequence of images with different short exposure times and one long-exposure image as a reference. The short exposure times range from 0.033 s to 0.1 s, and the long exposure times of the corresponding reference images range from 10 s to 30 s. In our experiments, we train and test our network on the images captured by the Sony camera, and employ PSNR and SSIM to evaluate low-light enhancement performance.
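For reference, PSNR, the first of the two metrics, reduces to a few lines of numpy (SSIM was sketched alongside the loss function above; this block is a standalone illustration, not the exact evaluation code):

```python
import numpy as np

def psnr(pred, target, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

target = np.zeros((3, 8, 8))
pred = np.full((3, 8, 8), 0.1)       # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(pred, target), 1))  # 20.0
```

Higher is better: halving the RMS error raises PSNR by about 6 dB.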
4.2 Training and Testing
We implemented our network in PyTorch and trained it for 4000 epochs on the SID dataset. We used the Adam optimizer with an initial learning rate of 1e-4; the learning rate decreased to 2e-5 after 2000 epochs and to 1e-5 after 3000 epochs. Before feeding a patch to the network, we multiply it by four amplification ratios, providing inputs at multiple brightness levels together. The amplification ratios are set according to the exposure difference between the input and reference images. In each training iteration, we randomly crop a patch from the raw image and randomly flip, rotate, or transpose it for data augmentation. Full images are taken as input at test time to avoid obvious boundary artifacts. All experiments were conducted on a PC with an NVIDIA Tesla V100 GPU with 32 GB of memory.
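The step schedule above can be written as a small helper, directly reflecting the stated epochs and rates (a sketch of the schedule only, not the training loop):

```python
def learning_rate(epoch):
    """Step schedule from the paper: 1e-4 initially, 2e-5 after
    epoch 2000, and 1e-5 after epoch 3000 (out of 4000 total)."""
    if epoch < 2000:
        return 1e-4
    if epoch < 3000:
        return 2e-5
    return 1e-5

assert learning_rate(0) == 1e-4
assert learning_rate(2500) == 2e-5
assert learning_rate(3999) == 1e-5
```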
4.3 Comparison with Other Methods
Comparing the results in Fig. 4, the quality of the images enhanced by our network is clearly higher than that of the others. The SID and Residual methods generate incorrect colors when removing noise from low-light images. By visual comparison, our method improves on the others in two respects. First, it recovers more details and texture from low-light images with severe noise: as shown in the red rectangles of Fig. 4, the images generated by our method look smoother and more satisfactory. Second, it restores correct, natural colors and avoids color spreading, making the enhanced images more realistic and closer to the ground truth. For quantitative comparison, we evaluate these methods using PSNR and SSIM; Table 1 gives the detailed comparison. Our method achieves better performance on all subsets with different amplification ratios while keeping a small number of parameters, indicating that the gains come from the proposed design rather than from extra network parameters.
4.4 Ablation Study
To validate the effectiveness of each component of our network, we performed several experiments, adding the blocks step by step and comparing the results. The training hyper-parameters were kept the same for every model, and all networks were trained for 4000 epochs to reach convergence.
We first used a simple U-net structure as our backbone, and then added CAB, MAB, and ISL one by one. We chose PSNR as the indicator of each module's impact on network performance. The results are shown in Table 2; the comparison shows that CAB significantly improves the PSNR of the enhanced images. Figure 5 illustrates the effect of adding the different blocks: by visual comparison, color artifacts and noise are greatly reduced, indicating that the blocks have a positive impact on image quality.
5 Conclusion and Future Work
In this paper, we propose an attention-based network that enhances raw images to obtain color images with high contrast and little noise. Our method uses the mixed attention block, which combines spatial and channel attention, to extract features, making the network more efficient. In addition, we use inverted shuffle layers instead of max pooling layers to retain more information. Experiments demonstrate that our method generates enhanced images with less noise and fewer color artifacts, achieving the best performance on the SID dataset. In future work, we will explore more effective attention modules to decrease computational cost and improve generalization.
-  Shi Guo, Zifei Yan, Kai Zhang, Wangmeng Zuo, and Lei Zhang, “Toward convolutional blind denoising of real photographs,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 1712–1722.
-  Liang Shen, Zihan Yue, Fan Feng, Quan Chen, Shihao Liu, and Jie Ma, “Msr-net: Low-light image enhancement using deep convolutional network,” arXiv preprint arXiv:1711.02488, 2017.
-  Jingwen Chen, Jiawei Chen, Hongyang Chao, and Ming Yang, “Image blind denoising with generative adversarial network based noise modeling,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3155–3164.
-  Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun, “Learning to see in the dark,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3291–3300.
-  Paras Maharjan, Li Li, Zhu Li, Ning Xu, Chongyang Ma, and Yue Li, “Improving extreme low-light image denoising via residual learning,” in 2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019, pp. 916–921.
-  Xiaojie Guo, Yu Li, and Haibin Ling, “Lime: Low-light image enhancement via illumination map estimation,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 982–993, 2016.
-  Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
-  Shuhang Gu, Lei Zhang, Wangmeng Zuo, and Xiangchu Feng, “Weighted nuclear norm minimization with application to image denoising,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 2862–2869.
-  Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, 2017.
-  Qingsen Yan, Lei Zhang, Yu Liu, Yu Zhu, Jinqiu Sun, Qinfeng Shi, and Yanning Zhang, “Deep hdr imaging via a non-local network,” IEEE Transactions on Image Processing, vol. 29, pp. 4308–4322, 2020.
-  Qingsen Yan, Dong Gong, Qinfeng Shi, Anton van den Hengel, Chunhua Shen, Ian Reid, and Yanning Zhang, “Attention-guided network for ghost-free high dynamic range imaging,” arXiv preprint arXiv:1904.10293, 2019.
-  Dong Gong, Zhen Zhang, Qinfeng Shi, Anton van den Hengel, Chunhua Shen, and Yanning Zhang, “Learning an optimizer for image deconvolution,” arXiv preprint arXiv:1804.03368, 2018.
-  Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu, “Deep retinex decomposition for low-light enhancement,” arXiv preprint arXiv:1808.04560, 2018.
-  Ruixing Wang, Qing Zhang, Chi-Wing Fu, Xiaoyong Shen, Wei-Shi Zheng, and Jiaya Jia, “Underexposed photo enhancement using deep illumination estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 6849–6857.
-  Zhetong Liang, Jianrui Cai, Zisheng Cao, and Lei Zhang, “Cameranet: A two-stage framework for effective camera isp learning,” arXiv preprint arXiv:1908.01481, 2019.
-  Jie Hu, Li Shen, and Gang Sun, “Squeeze-and-excitation networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7132–7141.
-  Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He, “Non-local neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7794–7803.
-  Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1874–1883.