IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 27, NO. 6, JUNE 2018

Structure-Revealing Low-Light Image Enhancement
Abstract— Low-light image enhancement methods based on the classic Retinex model attempt to manipulate the estimated illumination and to project it back to the corresponding reflectance. However, the model does not consider the noise, which inevitably exists in images captured in low-light conditions. In this paper, we propose the robust Retinex model, which additionally considers a noise map compared with the conventional Retinex model, to improve the performance of enhancing low-light images accompanied by intensive noise. Based on the robust Retinex model, we present an optimization function that includes novel regularization terms for the illumination and reflectance. Specifically, we use the ℓ1 norm to constrain the piece-wise smoothness of the illumination, adopt a fidelity term for gradients of the reflectance to reveal the structure details in low-light images, and make the first attempt to estimate a noise map out of the robust Retinex model. To effectively solve the optimization problem, we provide an augmented Lagrange multiplier based alternating direction minimization algorithm without logarithmic transformation. Experimental results demonstrate the effectiveness of the proposed method in low-light image enhancement. In addition, the proposed method can be generalized to handle a series of similar problems, such as image enhancement for underwater or remote sensing images and in hazy or dusty conditions.

Index Terms— Low-light image enhancement, Retinex model, structure-revealing, noise suppression.

Manuscript received June 17, 2017; revised November 17, 2017, January 13, 2018, and February 13, 2018; accepted February 14, 2018. Date of publication February 28, 2018; date of current version March 21, 2018. This work was supported in part by the National Natural Science Foundation of China under Contract 61772043 and in part by Microsoft Research Asia under Project ID FY17-RES-THEME-013. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Alin M. Achim. (Corresponding author: Jiaying Liu.)
M. Li, J. Liu, W. Yang, and Z. Guo are with the Institute of Computer Science and Technology, Peking University, Beijing 100871, China (e-mail: martinli0822@pku.edu.cn; liujiaying@pku.edu.cn; yangwenhan@pku.edu.cn; guozongming@pku.edu.cn).
X. Sun is with the Internet Media Group, Microsoft Research Asia, Beijing 100080, China (e-mail: xysun@microsoft.com).
Digital Object Identifier 10.1109/TIP.2018.2810539

I. INTRODUCTION

IMAGES captured under low-light conditions suffer from many degradations, such as low visibility, low contrast, and high-level noise. Although these degradations can be somewhat alleviated by professional devices and advanced photographic skills, the inherent cause of the noise is inevitable and cannot be addressed at the hardware level. Without a sufficient amount of light, the output of camera sensors is often buried in the intrinsic noise of the system. Longer exposure time can effectively increase the signal-to-noise ratio (SNR) and generate a noise-free image; however, it breeds new problems such as motion blur. Thus, low-light image enhancement techniques at the software level are highly desired in consumer photography. Moreover, such techniques can also benefit many computer vision algorithms (object detection, tracking, etc.), since their performance highly relies on the visibility of the target scene.

However, this is not a trivial task, because images captured under low-light conditions have rather low SNRs, which means the noise is highly intensive and may dominate over the image signals. Thus, low-light image enhancement algorithms need to tackle not only the low visibility and low contrast, but also the high-level noise.

An intuitive way to enhance low-light images is to directly amplify the illumination. However, relatively bright areas may be saturated and some details might be lost through the operation. Histogram equalization (HE) based methods [1], [2], which aim to stretch the dynamic range of the observed image, can mitigate the problem to some extent. Nevertheless, their purpose is to enhance the contrast rather than to adjust the illumination. Thus, results of these methods may be over- or under-enhanced. Furthermore, HE based methods neglect the intensive noise hidden in low-light images.

Some researchers [3], [4] noticed that inverted low-light images look like haze images. Dehazing methods are therefore applied, and the dehazing result is inverted once more as the enhancement result. A joint bilateral filter is applied in [4] to suppress the noise after the enhancement. Li et al. [3] attempted to further improve the visual quality by segmenting the observed image into superpixels and adaptively denoising different segments via BM3D [5]. Although these methods can generate reasonable results, a convincing physical explanation of their basic model has not been provided. Moreover, the order of enhancing and denoising has always been a problem. Performing enhancement before denoising may result in noise amplification, which increases the difficulty of denoising. On the other hand, enhancement results may be somewhat blurred after denoising.

In recent years, learning based image enhancement methods have also been studied. Yang et al. [6] presented a low-light image enhancement method using coupled dictionary learning. Lore et al. [7] proposed a Low-Light Net (LLNet) using deep autoencoders to simultaneously (or sequentially) perform contrast enhancement and denoising. In both works, the low-light data used for training is synthesized by applying gamma correction on natural image patches, since real data paired with low-light and normal illumination is hard to collect. However,
LI et al.: STRUCTURE-REVEALING LOW-LIGHT IMAGE ENHANCEMENT 2829
such measurement may not fully characterize the formation of natural low-light images, which may lead to unnatural results.

Retinex theory [8] has been studied extensively in the past few decades; it assumes that images can be decomposed into two components, namely reflectance and illumination. Single-scale Retinex [9] and multiscale Retinex [10] are the pioneering studies in this field, which treat the reflectance component as the final output. Wang et al. [11] proposed a bright-pass filter to decompose the observed image into reflectance and illumination, and attempted to preserve the naturalness while enhancing the image details. Based on the bright-pass filter proposed in [11], Fu et al. [12] fused multiple derivatives of the estimated illumination to combine different merits into a single output. The method proposed in [13] refines the initial illumination map by imposing a structure-aware prior. Nevertheless, due to the lack of constraint on the reflectance, these methods often amplify the latent intensive noise that exists in low-light images.

Although the logarithmic transformation is widely adopted for the ease of modeling by most Retinex based algorithms, a recent work [14] argues that the logarithmic transformation is not appropriate in the regularization terms, since pixels with low magnitude dominate over the variation term in the high magnitude areas. Thus, a weighted variational model is proposed in [14] in order to impose better prior representation in the regularization terms. Even though this method shows rather impressive results in the decomposition of reflectance and illumination, it is not suitable for the enhancement of low-light images, as the noise often appears in low magnitude regions.

In this paper, we follow the conventional methods that manipulate the illumination component after the decomposition in order to re-light the input low-light image. In the following sections, we first point out that existing Retinex-based methods using logarithmic transformation are not suitable for handling the intensive noise hidden in low-light images. Then, based on the robust Retinex model with an additional noise term, we present the proposed structure-revealing low-light image enhancement method. The method simultaneously estimates a structure-revealed reflectance and a smoothed illumination component (and a noise map if the alternative optimization function is used). An augmented Lagrange multiplier based algorithm is provided to solve the optimization problem. Without sophisticated patch-based techniques such as nonlocal means and dictionary learning, the proposed method presents remarkable results by simply using the refined Retinex model without logarithmic transformation, regularized by a few common terms. In summary, the contributions of this paper lie in three aspects:
• We consider the noise term in the classic Retinex model in order to better formulate images captured under low-light conditions. Based on the model, we make the first attempt to explicitly predict the noise map out of the robust Retinex model, while simultaneously estimating a structure-revealed reflectance map and a piece-wise smoothed illumination map.
• An augmented Lagrange multiplier based alternating direction minimization algorithm without logarithmic transformation is provided to optimize the objective function.
• The proposed method can also be applied to other practical applications in addition to low-light image enhancement, such as underwater image enhancement, remote sensing image enhancement, image dehazing, and dusty weather image enhancement.

The rest of this paper is organized as follows. In Sec. II, we briefly review the conventional Retinex model, discuss its drawback for low-light image enhancement, and present the robust Retinex model. Sec. III presents the proposed method based on the robust Retinex model. Experimental results are demonstrated in Sec. IV. Sec. V concludes the paper.

II. BACKGROUND

The classic Retinex model decomposes images into reflectance and illumination:

I = R ◦ L,   (1)

where I is the observed image, and R and L represent the reflectance and the illumination of the image, respectively. The operator ◦ denotes element-wise multiplication. Most of the existing Retinex-based methods utilize the logarithmic transformation to reduce computational complexity [15].

Image intrinsic decomposition based methods are also able to estimate illumination and reflectance [16]–[20]. However, these methods are mostly based on the assumptions that light sources are distant from the examined scene and that the scene does not have multiple dominant illuminating colors, which do not hold in most low-light images (as can be observed in Figs. 9 and 10). Thus, in this paper, we focus on Retinex-based decomposition, and we argue that the classic Retinex model in (1) is not suitable for the low-light image enhancement problem, because intensive noise inevitably exists in low-light images.

We present the robust Retinex model and point out that the model for this particular task should have a noise term N as follows:

I = R ◦ L + N.   (2)

This image formulation is similar to that of intrinsic image decomposition, which originally involves three factors including Lambertian shading (L), reflectance (R), and specularities (C). The specular term C is often used in computer graphics and it accounts for light rays that reflect directly off the surface, which creates visible highlights in the image [16]. For simplicity, many works [16], [21], [22] often neglect the specular component C. In our work, we still follow this simplification, but a noise term N is added to the model. Different from the discretely distributed specular term C, the noise term N is distributed more uniformly in natural images.

Once the noise term is added as in (2), the logarithmic transformation of the classic model becomes questionable. First, since log(R) + log(L) ≠ log(R ◦ L + N), the fidelity term in the log-transformed domain, ‖(log(R) + log(L)) − log(I)‖²_F, will deviate from the ideal value. Second, the existence of N may significantly affect the gradient variation in the log-transformed domain. Specifically, taking the reflectance R
2830 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 27, NO. 6, JUNE 2018
Fig. 1. Comparisons of decomposition results and corresponding enhancement results. From top to bottom: results of SRIE [14], PIE [23], and the proposed
method. (a) Input. (b) Illumination. (c) Reflectance. (d) Enhancement.
as an example, its gradient variation in the log-transformed domain, ∇(log(R)) = (1/R) · ∇R, is highly affected by 1/R when R is very small, which inevitably affects the overall variation term. If R contains intensive noise, 1/R may become extremely unstable, and the enhancement result may be very noisy, which significantly affects the subjective visual quality. Based on the above analysis, we argue that directly using the log-transformed Retinex model for low-light image enhancement is inappropriate. Thus, in this paper we do not apply the logarithmic transformation to the Retinex model.

For the particular task of enhancing low-light images, the noise term N is quite essential. Without it, the intensive noise hidden in the observed image I will eventually be assigned to either L or R. As introduced in the previous section, most methods focus on the illumination component L and regard the reflectance R = I/L as the final output, which inevitably leads to a noisy result. This is the reason why a denoising process is often required after the enhancement [12], [13].

Among Retinex based image enhancement methods, some pioneering works have considered the noise. Elad [24] proposed to constrain the smoothness of both the illumination and the reflectance by two bilateral filters in the log-transformed domain. The model handles the proximity of the illumination to the observation and requires the reflectance to be close to the residual image, assuming the noise to be multiplicative. The algorithms proposed in [25] and [26] both consider directly applying denoising procedures to the estimated reflectance. Li et al. [25] employed edge-preserving smoothing [27], while Yu et al. [26] used the guided filter [28] to suppress the noise in the reflectance map. In this paper, we attempt to enhance the visibility of low-light images and mitigate the effect of noise simultaneously in a joint optimization function, without using the logarithmic transformation. The details of our method will be elaborated in the next section.

III. STRUCTURE-REVEALING LOW-LIGHT IMAGE ENHANCEMENT

The proposed structure-revealing low-light image enhancement based on the robust Retinex model will be presented in this section. We first give the framework of the proposed method. Then, we introduce two alternative decompositions to simultaneously estimate the illumination and the reflectance (and the noise), and their solutions are given subsequently.

A. Overview

Following [14] and [23], we perform the proposed method on the V channel in HSV color space. Given the input low-light color image S, we first convert it into HSV space. Then, the proposed decomposition is applied on the normalized V channel image I, and the illumination component L and the reflectance component R are obtained. After that, in order to light up the dark regions, we adjust the illumination L and generate an adjusted illumination L̂. The adjusted illumination L̂ is then integrated with the reflectance component R, producing the enhanced V channel image Î. Finally, the enhanced HSV image is converted back to RGB color space, and the final enhancement result Ŝ is obtained. The details of the proposed structure-revealing low-light image enhancement method will be elaborated in the following subsections.

B. Baseline Decomposition

In this subsection, a new decomposition model that simultaneously estimates the reflectance R and the illumination L of the input image I is formulated as follows:

argmin_{R,L}  ‖R ◦ L − I‖²_F + β ‖∇L‖₁ + ω ‖∇R − G‖²_F,   (3)

where β and ω are coefficients that control the importance of the different terms. ‖·‖_F and ‖·‖₁ represent the Frobenius norm and the ℓ1 norm, respectively. In addition, ∇ is the first order differential operator, and G is the adjusted gradient of I, which will be discussed in Eq. (4). The role of each term in the objective (3) is interpreted below:
• ‖R ◦ L − I‖²_F constrains the fidelity between the observed image I and the recomposed one R ◦ L;
• ‖∇L‖₁ corresponds to the total variation sparsity and enforces the piece-wise smoothness of the illumination map L;
• ‖∇R − G‖²_F minimizes the distance between the gradient of the reflectance R and G (an adjusted version of the gradient of the input I), so that the structural information of the reflectance can be strengthened.
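To make the roles of the three terms concrete, the objective in (3) can be evaluated in a few lines of NumPy. This is only an illustrative sketch: the forward-difference `grad` operator below is one common discretization of ∇, and `G` is passed in as a precomputed pair of gradient maps (its actual construction, Eq. (4), is not reproduced in this excerpt).

```python
import numpy as np

def grad(x):
    """Forward-difference gradients along both axes
    (one common discrete choice for the operator ∇)."""
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    return gx, gy

def objective(R, L, I, G, beta=0.05, omega=0.01):
    """Value of the baseline model (3):
    ||R∘L - I||_F^2 + beta*||∇L||_1 + omega*||∇R - G||_F^2."""
    fidelity = np.sum((R * L - I) ** 2)            # ||R∘L - I||_F^2
    Lx, Ly = grad(L)
    tv = np.sum(np.abs(Lx)) + np.sum(np.abs(Ly))   # ||∇L||_1
    Rx, Ry = grad(R)
    Gx, Gy = G
    structure = np.sum((Rx - Gx) ** 2) + np.sum((Ry - Gy) ** 2)  # ||∇R - G||_F^2
    return fidelity + beta * tv + omega * structure
```

With a constant illumination and an exact recomposition (I = R ◦ L and G = ∇R), all three terms vanish, which is a quick sanity check for any implementation of the decomposition.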
Algorithm 1: The Solution of Problem (10)

CPU. In our experiments, if not specifically stated, the parameters β, ω, and δ are set as 0.05, 0.01, and 1, respectively. Parameters λ and σ in Eq. (4) are both set to 10. In most cases, these empirical settings generate decent results.
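The body of Algorithm 1 is not reproduced in this excerpt, so the following is only a structural sketch of the alternating direction idea it is built on: split the problem, solve one easy subproblem per variable in closed form, and update a (scaled) Lagrange multiplier. The toy objective below (a quadratic plus an ℓ1 penalty under the constraint x = z) is a hypothetical stand-in, not the paper's actual problem (10).

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(a, beta=1.0, rho=1.0, iters=200):
    """ADMM/ALM sketch for the toy split problem
        min_{x,z} ||x - a||_2^2 + beta*||z||_1   s.t.  x = z,
    illustrating the alternating-direction structure:
    one subproblem per variable plus a multiplier update."""
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)                          # scaled multiplier
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)   # quadratic subproblem, closed form
        z = soft_threshold(x + u, beta / rho)     # l1 subproblem, shrinkage
        u = u + x - z                             # multiplier (dual) ascent
    return z
```

The shrinkage step is the generic way ℓ1 terms such as ‖∇L‖₁ are handled in alternating-direction solvers; in the paper's algorithm, the closed-form updates are derived for the actual variables R, L (and N) instead.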
Fig. 5. Comparisons of low-light image enhancement results for test image #18. Red arrows indicate artifacts or degradation. (a) Input. (b) HE. (c) LIME [13].
(d) NPE [11]. (e) PIE [23]. (f) SRIE [14]. (g) Proposed with model (3).
Fig. 6. Comparisons of low-light image enhancement results for test image #13. Red arrows indicate artifacts or degradation. (a) Input. (b) HE. (c) LIME [13]. (d) NPE [11]. (e) PIE [23]. (f) SRIE [14]. (g) Proposed with model (3).
i.e., the colorfulness-based patch-based contrast quality index (CPCQI) [37], to evaluate the enhancement results comprehensively. Fig. 8 shows the average NFERM, BTMQI, NIQMC, and CPCQI results of the input low-light images and of the enhancement results generated by the aforementioned low-light image enhancement methods.
Fig. 7. Comparisons of low-light image enhancement results for test image #10. (a) Input. (b) HE. (c) LIME [13]. (d) NPE [11]. (e) PIE [23]. (f) SRIE [14].
(g) Proposed with model (3).
Fig. 8. Average NFERM, BTMQI, NIQMC, and CPCQI results for different methods on all 18 test images with model (3). (a) NFERM. (b) BTMQI.
(c) NIQMC. (d) CPCQI.
For NFERM and BTMQI, smaller values represent better image quality. NFERM extracts features inspired by the free energy based brain theory and the classical human visual system to measure the distortion of the input image. BTMQI assesses image quality by measuring the average intensity, contrast, and structure information of tone-mapped images. From the figure, we notice that the proposed method achieves the lowest NFERM score, which means that our results are more similar to natural images and have less distortion. The average BTMQI value of the proposed method ranks 3rd among the compared methods. Although NPE and SRIE have lower BTMQI scores, their NFERM values are much larger than that of our method. As can be observed in the visual comparisons, some of the results produced by NPE do not look natural, e.g., image #18 in Fig. 5 and image #13 in Fig. 6; SRIE cannot fully light up the whole scene (Figs. 5, 9) and generates halo artifacts (Fig. 6).

For CPCQI and NIQMC, larger values indicate better image contrast quality. CPCQI evaluates the perceptual distortions between the enhanced image and the input image from three aspects: mean intensity, signal strength, and signal structure components. A CPCQI value < 1 means that the quality of the enhanced image is degraded rather than enhanced. As can be observed in the figure, the proposed method achieves the highest CPCQI score, which indicates that our method successfully enhances the overall quality of the image without introducing many artifacts. As for NIQMC, it assesses image quality by measuring local details and the global histogram of the given image, and it particularly favors images with higher contrast. It can be observed that HE and LIME have higher NIQMC scores. The reason is that HE and LIME over-enhance the input image, for example, the reflectance on the window in image #10 (shown in Fig. 7) and the lighthouse in image #13 (observed in Fig. 6).

2) Noise Suppression: We evaluate the performance of our low-light image enhancement method under noisy cases using the alternative decomposition described in (6). In this case, noise also exists in channels other than the V channel. Thus, the input image is processed in RGB color space and the proposed method is applied to each channel. Parameters β and ω are both set as 0.01 for this task.

Fig. 9 shows some enhancement results of low-light images with intense noise. As can be observed in the figure, the noise hidden in very low-light conditions is really intense. Although HE, LIME, and NPE can sufficiently enhance the visibility of low-light images, they also amplify the intensive noise. PIE cannot light up the input images, and its results also contain
Fig. 9. Comparison of noise suppression. For each case, from left to right, they are the input images, results generated by HE, LIME, NPE, PIE, SRIE, and
the proposed method with model (6). (a) Input. (b) HE. (c) LIME [13]. (d) NPE [11]. (e) PIE [23]. (f) SRIE [14]. (g) Proposed.
Fig. 10. Comparison of denoising results with the proposed method. (a) is the input image; (b)–(f) are enhancement results of HE, LIME [13], NPE [11],
PIE [23], and SRIE [14] with a post-processing performed by BM3D with the denoising parameter σ = 30; (g) is the result obtained by the proposed method
with model (6).
Fig. 12. Average NFERM, BTMQI, NIQMC, and CPCQI results on all
18 test images using the proposed method (the baseline model) with different
regularization parameters.
Fig. 11. Comparisons of enhancement results generated by our baseline model (3) and the alternative model (6). (a) Input. (b) Model (3). (c) Model (6).

(18 images shown in Fig. 4), results generated by model (6) obtain averages of 15.60, 3.84, 5.14, and 0.98, which are inferior to those of the baseline model (10.70, 3.87, 5.14, and 1.13). For 200 synthesized noisy images, the average PSNR and SSIM of the baseline model (followed by BM3D) are 18.14 and 0.4632, which are also inferior to those of the model in (6) (18.53 and 0.5097).

To summarize, for images with less noise, our baseline model (3) works fine; for low-light images with noise,

C. Parameter Study

In this section, we evaluate the effect of the regularization parameters. We first evaluate the impact of the parameters β and ω in the basic model (3). In Fig. 12, we give objective results obtained with different (β, ω) pairs on all the test images, where β ranges over 0.5, 0.05, and 0.005, and ω is selected from 0.1, 0.01, and 0.001. Please note again that lower NFERM, BTMQI and higher NIQMC, CPCQI values represent better visual quality. As can be observed, results with (0.5, 0.01), (0.5, 0.001), and (0.05, 0.01) have rather low NFERM values. Among them, (0.05, 0.01) has the lowest BTMQI value. From the NIQMC and CPCQI values, we can discover a certain pattern with respect to ω.

Fig. 13 plots nine curves, representing different convergence speeds using different (β, ω) pairs on image #10. From the
Fig. 17. Convergence curves of the 18 test images with model (3).
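Convergence curves such as those in Fig. 17 are generally produced by logging a per-iteration stopping quantity. The paper's exact stopping criterion is not reproduced in this excerpt; the helper below is a generic, hedged sketch that records the relative change of an iterate, with a hypothetical `update_step` standing in for one sweep of the alternating minimization.

```python
import numpy as np

def relative_change(x_new, x_old, eps=1e-8):
    """Relative update magnitude ||x_new - x_old||_F / ||x_old||_F,
    a common per-iteration convergence measure."""
    return np.linalg.norm(x_new - x_old) / (np.linalg.norm(x_old) + eps)

def run_with_logging(update_step, x0, iters=50):
    """Run an iterative solver and record its convergence curve.
    `update_step` is any map x -> x_next (e.g., one alternating sweep)."""
    curve, x = [], x0
    for _ in range(iters):
        x_next = update_step(x)
        curve.append(relative_change(x_next, x))
        x = x_next
    return x, curve
```

For a contraction such as x ↦ 0.5·x + 1 (fixed point 2), the logged curve decays geometrically, which is the shape of the curves shown in Fig. 17.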
Fig. 20. Comparison of enhancement results of images taken in dusty weather. From left to right: observed images, results by a specialized method [44], and the proposed method with model (6). (a) Input. (b) Results by [44]. (c) Proposed.

are downloaded from the author’s website. The specialized method takes different derivatives (generated by gamma correction with different γ) of the original image as input. Three weight maps (sharpness, chromatic, and prominence maps) calculated from each derivative are summed and normalized to obtain the final weight map, which is then used to fuse the corresponding derivatives into the final result. As shown in the figure, results by [44] still look like images with haze, while our method produces images with better visibility.

V. CONCLUSION

Low-light enhancement methods using the classic Retinex model often fail to deal with the noise, which inevitably exists in such conditions. In this paper, we present the robust Retinex model, which adds a noise term, to handle low-light image enhancement in the case of intensive noise. Moreover, we impose novel regularization terms in our optimization problem for both illumination and reflectance to jointly estimate a piece-wise smoothed illumination and a structure-revealed reflectance. An ADM-based algorithm is provided to solve the optimization problem. In addition to low-light image enhancement, our method is also suitable for other similar tasks, such as image enhancement for underwater or remote sensing images, and in hazy or dusty conditions. Future works include accelerating our method and generalizing it to video enhancement. Automatically deciding which model would be optimal for an input image is also an appealing topic.

REFERENCES

[1] S. M. Pizer, R. E. Johnston, J. P. Ericksen, B. C. Yankaskas, and K. E. Muller, “Contrast-limited adaptive histogram equalization: Speed and effectiveness,” in Proc. 1st Conf. Vis. Biomed. Comput., May 1990, pp. 337–345.
[2] M. Abdullah-Al-Wadud, M. H. Kabir, M. A. A. Dewan, and O. Chae, “A dynamic histogram equalization for image contrast enhancement,” IEEE Trans. Consum. Electron., vol. 53, no. 2, pp. 593–600, May 2007.
[3] L. Li, R. Wang, W. Wang, and W. Gao, “A low-light image enhancement method for both denoising and contrast enlarging,” in Proc. IEEE Int. Conf. Image Process., Sep. 2015, pp. 3730–3734.
[4] X. Zhang, P. Shen, L. Luo, L. Zhang, and J. Song, “Enhancement and noise reduction of very low light level images,” in Proc. 21st Int. Conf. Pattern Recognit. (ICPR), Nov. 2012, pp. 2034–2037.
[5] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process., vol. 16, no. 8, pp. 2080–2095, Aug. 2007.
[6] J. Yang, X. Jiang, C. Pan, and C.-L. Liu, “Enhancement of low light level images with coupled dictionary learning,” in Proc. IEEE 23rd Int. Conf. Pattern Recognit., Dec. 2016, pp. 751–756.
[7] K. G. Lore, A. Akintayo, and S. Sarkar, “LLNet: A deep autoencoder approach to natural low-light image enhancement,” Pattern Recognit., vol. 61, pp. 650–662, Jan. 2017.
[8] E. H. Land, “The retinex theory of color vision,” Sci. Amer., vol. 237, no. 6, pp. 108–129, 1977.
[9] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, “Properties and performance of a center/surround retinex,” IEEE Trans. Image Process., vol. 6, no. 3, pp. 451–462, Mar. 1997.
[10] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, “A multiscale retinex for bridging the gap between color images and the human observation of scenes,” IEEE Trans. Image Process., vol. 6, no. 7, pp. 965–976, Jul. 1997.
[11] S. Wang, J. Zheng, H.-M. Hu, and B. Li, “Naturalness preserved enhancement algorithm for non-uniform illumination images,” IEEE Trans. Image Process., vol. 22, no. 9, pp. 3538–3548, Sep. 2013.
[12] X. Fu, D. Zeng, Y. Huang, Y. Liao, X. Ding, and J. Paisley, “A fusion-based enhancing method for weakly illuminated images,” Signal Process., vol. 129, pp. 82–96, Dec. 2016.
[13] X. Guo, Y. Li, and H. Ling, “LIME: Low-light image enhancement via illumination map estimation,” IEEE Trans. Image Process., vol. 26, no. 2, pp. 982–993, Feb. 2017.
[14] X. Fu, D. Zeng, Y. Huang, X.-P. Zhang, and X. Ding, “A weighted variational model for simultaneous reflectance and illumination estimation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 2782–2790.
[15] E. Provenzi, L. De Carli, A. Rizzi, and D. Marini, “Mathematical definition and analysis of the retinex algorithm,” J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 22, no. 12, pp. 2613–2621, 2005.
[16] R. Grosse, M. K. Johnson, E. H. Adelson, and W. T. Freeman, “Ground truth dataset and baseline evaluations for intrinsic image algorithms,” in Proc. IEEE Int. Conf. Comput. Vis., Sep./Oct. 2009, pp. 2335–2342.
[17] Q. Chen and V. Koltun, “A simple model for intrinsic image decomposition with depth cues,” in Proc. IEEE Int. Conf. Comput. Vis., Dec. 2013, pp. 241–248.
[18] P.-Y. Laffont, A. Bousseau, and G. Drettakis, “Rich intrinsic image decomposition of outdoor scenes from multiple views,” IEEE Trans. Vis. Comput. Graph., vol. 19, no. 2, pp. 210–224, Feb. 2013.
[19] S. Bell, K. Bala, and N. Snavely, “Intrinsic images in the wild,” ACM Trans. Graph., vol. 33, no. 4, p. 159, 2014.
[20] A. Meka, M. Zollhöfer, C. Richardt, and C. Theobalt, “Live intrinsic video,” ACM Trans. Graph., vol. 35, no. 4, p. 109, 2016.
[21] J. T. Barron and J. Malik, “Color constancy, intrinsic images, and shape estimation,” in Proc. 12th Eur. Conf. Comput. Vis., 2012, pp. 57–70.
[22] Y. Li and M. S. Brown, “Single image layer separation using relative smoothness,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2014, pp. 2752–2759.
[23] X. Fu, Y. Liao, D. Zeng, Y. Huang, X. Zhang, and X. Ding, “A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation,” IEEE Trans. Image Process., vol. 24, no. 12, pp. 4965–4977, Dec. 2015.
[24] M. Elad, “Retinex by two bilateral filters,” in Proc. 5th Int. Conf. Scale Space PDE Methods Comput. Vis., 2005, pp. 217–229.
[25] W.-J. Li, B. Gu, J.-T. Huang, and M.-H. Wang, “Novel retinex algorithm by interpolation and adaptive noise suppression,” J. Central South Univ., vol. 19, no. 9, pp. 2541–2547, 2012.
[26] X. Yu, X. Luo, G. Lyu, and S. Luo, “A novel retinex based enhancement algorithm considering noise,” in Proc. 16th Int. Conf. Comput. Inf. Sci. (ICIS), May 2017, pp. 649–654.
[27] Z. Farbman, D. Lischinski, and R. Szeliski, “Edge-preserving decompositions for multi-scale tone and detail manipulation,” ACM Trans. Graph., vol. 27, no. 3, p. 67, Aug. 2008.
[28] K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1397–1409, Jun. 2013.
[29] W. Chao and Y. Zhong-Fu, “Variational enhancement for infrared images,” J. Infr. Millim. Waves, vol. 25, no. 4, pp. 306–310, 2006.
[30] Y. Wang, W. Yin, and J. Zeng, “Global convergence of ADMM in nonconvex nonsmooth optimization,” Dept. Comput. Appl. Math., Univ. California, Los Angeles, CA, USA, Tech. Rep. 62, 2015, vol. 15.
[31] Y. Xu, W. Yin, Z. Wen, and Y. Zhang, “An alternating direction algorithm for matrix completion with nonnegative factors,” Frontiers Math. China, vol. 7, no. 2, pp. 365–384, 2012.
[32] Image Datasets of the Computational Vision Group at CALTECH. [Online]. Available: http://www.vision.caltech.edu/archive.html
[33] NASA. (2001). Retinex Image Processing. [Online]. Available: https://dragon.larc.nasa.gov/retinex/pao/news
[34] K. Gu, W. Lin, G. Zhai, X. Yang, W. Zhang, and C. W. Chen, “No-reference quality metric of contrast-distorted images based on information maximization,” IEEE Trans. Cybern., vol. 47, no. 12, pp. 4559–4565, Dec. 2017.
[35] K. Gu et al., “Blind quality assessment of tone-mapped images via analysis of information, naturalness, and structure,” IEEE Trans. Multimedia, vol. 18, no. 3, pp. 432–443, Mar. 2016.
[36] K. Gu, G. Zhai, X. Yang, and W. Zhang, “Using free energy principle for blind image quality assessment,” IEEE Trans. Multimedia, vol. 17, no. 1, pp. 50–63, Jan. 2015.
[37] K. Gu, D. Tao, J.-F. Qiao, and W. Lin, “Learning a no-reference quality assessment model of enhanced images with big data,” IEEE Trans. Neural Netw. Learn. Syst., to be published. [Online]. Available: http://ieeexplore.ieee.org/document/7872424/
[38] D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proc. 8th IEEE Int. Conf. Comput. Vis. (ICCV), vol. 2, Jul. 2001, pp. 416–423.
[39] L. Kratz and K. Nishino, “Factorizing scene albedo and depth from a single foggy image,” in Proc. IEEE 12th Int. Conf. Comput. Vis., Sep./Oct. 2009, pp. 1701–1708.
[40] G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, “Efficient image dehazing with boundary constraint and contextual regularization,” in Proc. IEEE Int. Conf. Comput. Vis., Dec. 2013, pp. 617–624.
[41] J.-P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proc. IEEE 12th Int. Conf. Comput. Vis. (ICCV), Sep. 2009, pp. 2201–2208.
[42] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit.,

Wenhan Yang received the B.S. degree in computer science from Peking University, Beijing, China, in 2012, where he is currently pursuing the Ph.D. degree with the Institute of Computer Science and Technology. He was a Visiting Scholar with the National University of Singapore from 2015 to 2016. His current research interests include image processing, sparse representation, image restoration, and deep-learning-based image processing.

Xiaoyan Sun is currently a Lead Researcher with Microsoft Research Asia. She is currently focusing on video analysis, image restoration, and image/video coding. She has authored or co-authored over 100 papers in journals and conferences, holds ten proposals to standards with one accepted, and holds over ten granted U.S. patents. Her current research interests include computer vision, image/video processing, and machine learning. She is a TC Member of the IEEE Multimedia Systems and Applications. She received the Best Paper Award from the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS
Miami, FL, USA, Jun. 2009, pp. 1956–1963. FOR V IDEO T ECHNOLOGY in 2009 and the Best Student Paper Award from
[43] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, VCIP in 2016. She is an AE of the Signal Processing: Image Communication
“A retinex-based enhancing approach for single underwater image,” in Journal and an AE of the IEEE J OURNAL ON E MERGING AND S ELECTED
Proc. IEEE Int. Conf. Image Process., Oct. 2014, pp. 4572–4576. T OPICS IN C IRCUITS AND S YSTEMS . She also served as the session chair,
[44] X. Fu, Y. Huang, D. Zeng, X.-P. Zhang, and X. Ding, “A fusion-based the area chair, and the TC co-chair for several international conferences.
enhancing approach for single sandstorm image,” in Proc. IEEE Int. She received the B.S., M.S., and Ph.D. degrees in computer science from
Workshop Multimedia Signal Process., Sep. 2014, pp. 1–5. the Harbin Institute of Technology, Harbin, China, in 1997, 1999, and 2003,
respectively. She was an Intern with Microsoft Research Asia from 2000 to
2003, and joined Microsoft Research Asia in 2003. She is currently an Adjunct
Mading Li received the B.S. degree in computer Professor (a Ph.D. Supervisor) with the University of Science and Technology
science from Peking University in 2013, where he is of China.
currently pursuing the Ph.D. degree with the Institute
of Computer Science and Technology, being advised
by Z. Guo and J. Liu. He was a Visiting Scholar with
McMaster University in 2016. His current research
interests include image and video processing, image
interpolation, image restoration, and low-light image
enhancement.
Jiaying Liu (S’08–M’10–SM’17) received the B.E. Zongming Guo (M’09) received the B.S. degree
degree in computer science from Northwestern Poly- in mathematics and the M.S. and Ph.D. degrees in
technic University, Xian, China, in 2005, and the computer science from Peking University, Beijing,
Ph.D. degree (Hons.) in computer science from China, in 1987, 1990, and 1994, respectively.
Peking University, Beijing, China, in 2010. He is currently a Professor with the Institute of
She is currently an Associate Professor with Computer Science and Technology, Peking Univer-
the Institute of Computer Science and Technology, sity. His current research interests include video
Peking University. She has authored over 100 tech- coding, processing, and communication.
nical articles in refereed journals and proceedings Dr. Guo is an Executive Member of the China
and holds 24 granted patents. Her current research Society of Motion Picture and Television Engineers.
interests include image/video processing, compres- He was a recipient of the First Prize of the State
sion, and computer vision. Administration of Radio Film and Television Award in 2004, the First Prize
Dr. Liu was a Visiting Scholar with the University of Southern California, of the Ministry of Education Science and Technology Progress Award in 2006,
Los Angeles, CA, USA, from 2007 to 2008. In 2015, she was a Visiting the Second Prize of the National Science and Technology Award in 2007, and
Researcher with Microsoft Research Asia, supported by Star Track for Young the Wang Xuan News Technology Award and the Chia Tai Teaching Award
Faculties. She served as a TC Member in the IEEE CAS MSA and APSIPA in 2008. He received the Government Allowance granted by the State Council
IVM and a APSIPA Distinguished Lecturer from 2016 to 2017. She is a Senior in 2009. He received the Distinguished Doctoral Dissertation Advisor Award
Member of CCF. from Peking University in 2012 and 2013.