
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 27, NO. 6, JUNE 2018

Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model

Mading Li, Jiaying Liu, Senior Member, IEEE, Wenhan Yang, Xiaoyan Sun, Senior Member, IEEE, and Zongming Guo, Member, IEEE

Abstract— Low-light image enhancement methods based on the classic Retinex model attempt to manipulate the estimated illumination and to project it back to the corresponding reflectance. However, the model does not consider the noise, which inevitably exists in images captured in low-light conditions. In this paper, we propose the robust Retinex model, which additionally considers a noise map compared with the conventional Retinex model, to improve the performance of enhancing low-light images accompanied by intensive noise. Based on the robust Retinex model, we present an optimization function that includes novel regularization terms for the illumination and reflectance. Specifically, we use an ℓ1 norm to constrain the piece-wise smoothness of the illumination, adopt a fidelity term for the gradients of the reflectance to reveal the structure details in low-light images, and make the first attempt to estimate a noise map out of the robust Retinex model. To effectively solve the optimization problem, we provide an augmented Lagrange multiplier based alternating direction minimization algorithm without logarithmic transformation. Experimental results demonstrate the effectiveness of the proposed method in low-light image enhancement. In addition, the proposed method can be generalized to handle a series of similar problems, such as image enhancement for underwater or remote sensing images and in hazy or dusty conditions.

Index Terms— Low-light image enhancement, Retinex model, structure-revealing, noise suppression.

Manuscript received June 17, 2017; revised November 17, 2017, January 13, 2018, and February 13, 2018; accepted February 14, 2018. Date of publication February 28, 2018; date of current version March 21, 2018. This work was supported in part by the National Natural Science Foundation of China under Contract 61772043 and in part by Microsoft Research Asia under Project ID FY17-RES-THEME-013. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Alin M. Achim. (Corresponding author: Jiaying Liu.)
M. Li, J. Liu, W. Yang, and Z. Guo are with the Institute of Computer Science and Technology, Peking University, Beijing 100871, China (e-mail: martinli0822@pku.edu.cn; liujiaying@pku.edu.cn; yangwenhan@pku.edu.cn; guozongming@pku.edu.cn).
X. Sun is with the Internet Media Group, Microsoft Research Asia, Beijing 100080, China (e-mail: xysun@microsoft.com).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2018.2810539

I. INTRODUCTION

IMAGES captured under low-light conditions suffer from many degradations, such as low visibility, low contrast, and high-level noise. Although these degradations can be somewhat alleviated by professional devices and advanced photographic skills, the inherent cause of the noise is inevitable and cannot be addressed at the hardware level. Without a sufficient amount of light, the output of camera sensors is often buried in the intrinsic noise of the system. A longer exposure time can effectively increase the signal-to-noise ratio (SNR) and generate a noise-free image; however, it breeds new problems such as motion blur. Thus, low-light image enhancement techniques at the software level are highly desired in consumer photography. Moreover, such techniques can also benefit many computer vision algorithms (object detection, tracking, etc.), since their performance highly relies on the visibility of the target scene.

However, this is not a trivial task, since images captured under low-light conditions have rather low SNRs, which means the noise is highly intensive and may dominate over the image signal. Thus, low-light image enhancement algorithms need to tackle not only the low visibility, but also the high-level noise, in addition to low contrast.

An intuitive way to enhance low-light images is to directly amplify the illumination. However, relatively bright areas may be saturated and some details might be lost through the operation. Histogram equalization (HE) based methods [1], [2], which aim to stretch the dynamic range of the observed image, can mitigate the problem to some extent. Nevertheless, their purpose is to enhance the contrast rather than to adjust the illumination. Thus, results of these methods may be over- or under-enhanced. Furthermore, HE based methods neglect the intensive noise hidden in low-light images.

Some researchers [3], [4] noticed that inverted low-light images look like haze images. Dehazing methods are therefore applied and the dehazing result is inverted once more as the enhancement result. A joint-bilateral filter is applied in [4] to suppress the noise after the enhancement. Li et al. [3] attempted to further improve the visual quality by segmenting the observed image into superpixels and adaptively denoising different segments via BM3D [5]. Although these methods can generate reasonable results, a convincing physical explanation of their basic model has not been provided. Moreover, the order of enhancing and denoising has always been a problem. Performing enhancement before denoising may result in noise amplification, which increases the difficulty of denoising. On the other hand, enhancement results may be somewhat blurred after denoising.

In recent years, learning based image enhancement methods have also been studied. Yang et al. [6] presented a low-light image enhancement method using coupled dictionary learning. Lore et al. [7] proposed a Low-Light Net (LLNet) using deep autoencoders to simultaneously (or sequentially) perform contrast enhancement and denoising. In both works, the low-light data used for training is synthesized by applying gamma correction on natural image patches, since real data paired with low-light and normal illumination is hard to collect.
However, such measurement may not fully characterize the formation of natural low-light images, which may lead to unnatural results.

Retinex theory [8] has been studied extensively in the past few decades. It assumes that images can be decomposed into two components, namely reflectance and illumination. Single-scale Retinex [9] and multiscale Retinex [10] are the pioneering studies in this field, which treat the reflectance component as the final output. Wang et al. [11] proposed a bright-pass filter to decompose the observed image into reflectance and illumination, and attempted to preserve the naturalness while enhancing the image details. Based on the bright-pass filter proposed in [11], Fu et al. [12] fused multiple derivatives of the estimated illumination to combine different merits into a single output. The method proposed in [13] refines the initial illumination map by imposing a structure-aware prior. Nevertheless, due to the lack of a constraint on the reflectance, these methods often amplify the latent intensive noise that exists in low-light images.

Although the logarithmic transformation is widely adopted for the ease of modeling by most Retinex based algorithms, a recent work [14] argues that the logarithmic transformation is not appropriate in the regularization terms, since pixels with low magnitude dominate over the variation term in the high magnitude areas. Thus, a weighted variational model is proposed in [14] in order to impose a better prior representation in the regularization terms. Even though this method shows rather impressive results in the decomposition of reflectance and illumination, it is not suitable for the enhancement of low-light images, as the noise often appears in low magnitude regions.

In this paper, we follow the conventional methods that manipulate the illumination component after the decomposition in order to re-light the input low-light image. In the following sections, we first point out that existing Retinex-based methods using logarithmic transformation are not suitable for handling the intensive noise hidden in low-light images. Then, based on the robust Retinex model with an additional noise term, we present the proposed structure-revealing low-light image enhancement method. The method simultaneously estimates a structure-revealed reflectance and a smoothed illumination component (and a noise map if the alternative optimization function is used). An augmented Lagrange multiplier based algorithm is provided to solve the optimization problem. Without sophisticated patch-based techniques such as nonlocal means and dictionary learning, the proposed method presents remarkable results by simply using the refined Retinex model without logarithmic transformation, regularized by a few common terms. In summary, the contributions of this paper lie in three aspects:

• We consider the noise term in the classic Retinex model in order to better formulate images captured under low-light conditions. Based on the model, we make the first attempt to explicitly predict the noise map out of the robust Retinex model, while simultaneously estimating a structure-revealed reflectance map and a piece-wise smoothed illumination map.
• An augmented Lagrange multiplier based alternating direction minimization algorithm without logarithmic transformation is provided to optimize the objective function.
• The proposed method can also be applied to other practical applications in addition to low-light image enhancement, such as underwater image enhancement, remote sensing image enhancement, image dehazing, and dusty weather image enhancement.

The rest of this paper is organized as follows. In Sec. II, we briefly review the conventional Retinex model, discuss its drawback for low-light image enhancement, and present the robust Retinex model. Sec. III presents the proposed method based on the robust Retinex model. Experimental results are demonstrated in Sec. IV. Sec. V concludes the paper.

II. BACKGROUND

The classic Retinex model decomposes images into reflectance and illumination:

I = R ◦ L,    (1)

where I is the observed image, and R and L represent the reflectance and the illumination of the image, respectively. The operator ◦ denotes element-wise multiplication. Most of the existing Retinex-based methods utilize the logarithmic transformation to reduce computational complexity [15].

Image intrinsic decomposition based methods are also able to estimate illumination and reflectance [16]–[20]. However, these methods are mostly based on the assumptions that light sources are distant from the examined scene and that the scene does not have multiple dominant illuminating colors, which do not hold in most low-light images (as can be observed in Figs. 9 and 10). Thus, in this paper, we focus on Retinex-based decomposition, and we argue that the classic Retinex model in (1) is not suitable for the low-light image enhancement problem, since intensive noise inevitably exists in low-light images.

We present the robust Retinex model and point out that the model for this particular task should have a noise term N as follows:

I = R ◦ L + N.    (2)

This image formulation is similar to that of intrinsic image decomposition, which originally involves three factors including Lambertian shading (L), reflectance (R), and specularities (C). The specular term C is often used in computer graphics and accounts for light rays that reflect directly off the surface, which creates visible highlights in the image [16]. For simplicity, many works [16], [21], [22] neglect the specular component C. In our work, we still follow this simplification, but a noise term N is added to the model. Different from the discretely distributed specular term C, the noise term N is distributed more uniformly in natural images.

Once the noise term is added as in (2), the logarithmic transformation of the classic model becomes questionable. First, since log(R) + log(L) ≠ log(R ◦ L + N), the fidelity term in the log-transformed domain, ||(log(R) + log(L)) − log(I)||_F^2, will deviate from the ideal value. Second, the existence of N may significantly affect the gradient variation in the log-transformed domain. Specifically, taking the reflectance R as an example, its gradient variation in the log-transformed domain, ∇(log(R)) = (1/R) · ∇R, is highly affected by 1/R when R is very small, which inevitably affects the overall variation term. If R contains intensive noise, 1/R may become extremely unstable, and the enhancement result may be very noisy, which significantly affects the subjective visual quality. Based on the above analysis, we argue that directly using the log-transformed Retinex model for low-light image enhancement is inappropriate. Thus, in this paper we do not apply the logarithmic transformation to the Retinex model.

For the particular task of enhancing low-light images, the noise term N is quite essential. Without it, the intensive noise hidden in the observed image I will eventually be assigned to either L or R. As introduced in the previous section, most methods focus on the illumination component L and regard the reflectance R = I/L as the final output, which inevitably leads to a noisy result. This is the reason why a denoising process is often required after the enhancement [12], [13].

In Retinex based image enhancement methods, some pioneering works have been proposed considering the noise. Elad [24] proposed to constrain the smoothness of both the illumination and the reflectance by two bilateral filters in the log-transformed domain. The model handles the proximity of the illumination to the observation and requires the reflectance to be close to the residual image, assuming the noise to be multiplicative. The algorithms proposed in [25] and [26] both consider directly applying denoising procedures to the estimated reflectance. Li et al. [25] employed edge-preserving smoothing [27], while Yu et al. [26] used the guided filter [28] to suppress the noise in the reflectance map. In this paper, we attempt to enhance the visibility of low-light images and mitigate the effect of noise simultaneously in a joint optimization function, without using the logarithmic transformation. The details of our method are elaborated in the next section.

Fig. 1. Comparisons of decomposition results and corresponding enhancement results. From top to bottom: results of SRIE [14], PIE [23], and the proposed method. (a) Input. (b) Illumination. (c) Reflectance. (d) Enhancement.

III. STRUCTURE-REVEALING LOW-LIGHT IMAGE ENHANCEMENT

The proposed structure-revealing low-light image enhancement method based on the robust Retinex model is presented in this section. We first give the framework of the proposed method. Then, we introduce two alternative decompositions to simultaneously estimate the illumination and the reflectance (and the noise), and their solutions are given subsequently.

A. Overview

Following [14] and [23], we perform the proposed method on the V channel in HSV color space. Given the input low-light color image S, we first convert it into HSV space. Then, the proposed decomposition is applied on the normalized V channel image I, and the illumination component L and the reflectance component R are obtained. After that, in order to light up the dark regions, we adjust the illumination L and generate an adjusted illumination L̂. The adjusted illumination L̂ is then integrated with the reflectance component R, producing the enhanced V channel image Î. Finally, the enhanced HSV image is converted back to RGB color space, and the final enhancement result Ŝ is obtained. The details of the proposed structure-revealing low-light image enhancement method are elaborated in the following subsections.

B. Baseline Decomposition

In this subsection, a new decomposition model that simultaneously estimates the reflectance R and the illumination L of the input image I is formulated as follows:

argmin_{R,L} ||R ◦ L − I||_F^2 + β||∇L||_1 + ω||∇R − G||_F^2,    (3)

where β and ω are coefficients that control the importance of the different terms. ||·||_F and ||·||_1 represent the Frobenius norm and the ℓ1 norm, respectively. In addition, ∇ is the first order differential operator, and G is the adjusted gradient of I, which will be discussed in Eq. (4). The role of each term in the objective (3) is interpreted below (a small numerical sketch follows the list):

• ||R ◦ L − I||_F^2 constrains the fidelity between the observed image I and the recomposed one R ◦ L;
• ||∇L||_1 corresponds to the total variation sparsity and encourages the piece-wise smoothness of the illumination map L;
• ||∇R − G||_F^2 minimizes the distance between the gradient of the reflectance R and G (an adjusted version of the gradient of the input I), so that the structural information of the reflectance can be strengthened.
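To make the role of each term concrete, here is a minimal NumPy sketch (our own illustration, not the authors' MATLAB code) that evaluates the objective in (3) for given estimates R and L; G is passed in as a pair of gradient fields, and its construction is deferred to Eqs. (4)–(5):

```python
import numpy as np

def grad(X):
    """Forward-difference gradients (horizontal, vertical); zero at the last row/column."""
    gx = np.diff(X, axis=1, append=X[:, -1:])
    gy = np.diff(X, axis=0, append=X[-1:, :])
    return gx, gy

def objective_eq3(R, L, I, G, beta=0.05, omega=0.01):
    """||R ◦ L − I||_F^2 + β||∇L||_1 + ω||∇R − G||_F^2 of Eq. (3)."""
    fidelity = np.sum((R * L - I) ** 2)                  # data fidelity term
    Lx, Ly = grad(L)
    tv = np.abs(Lx).sum() + np.abs(Ly).sum()             # ℓ1 (total variation) on ∇L
    Rx, Ry = grad(R)
    Gx, Gy = G
    structure = np.sum((Rx - Gx) ** 2 + (Ry - Gy) ** 2)  # gradient fidelity to G
    return fidelity + beta * tv + omega * structure
```

The default β and ω above match the values reported later in Sec. IV-A; any discrete gradient with consistent boundary handling could replace the forward differences used here.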
Previous works [14], [23] use an ℓ2 prior on illumination gradients and an ℓ1 prior on reflectance gradients. However, we observe that the illumination of most natural low-light images is not uniformly distributed (i.e., there exist relatively bright regions), which indicates that using an ℓ2 norm to enforce a spatially smooth illumination is not appropriate for these images. As can be observed in Fig. 1, previous works generate observable halo artifacts. This is because the ℓ2 norm generates blurred boundaries around areas where the illumination changes dramatically, which is quite common in low-light images. Even though they use an ℓ1 norm to encourage the piece-wise smoothness of the reflectance, the decomposed reflectance is still affected by the blurred illumination (observed in Fig. 1), as the data fidelity term constrains the product of the illumination and the reflectance to be close to the halo-free input image. Using an ℓ1 prior to constrain the illumination gradients, as in our work, maintains the overall structure of the illumination and presents better visual quality.

As mentioned in Sec. I, apart from low visibility and high-level noise, low-light images also suffer from low contrast. Since lower contrast often indicates smaller gradient magnitudes (and vice versa), we attempt to manipulate the gradient magnitudes of the reflectance so that the contrast of the enhancement result can be boosted. To this end, we present the third term ||∇R − G||_F^2 in our objective function to constrain the fidelity between ∇R and a guidance matrix G. The matrix G is obtained by amplifying the gradient of the input image with a factor K. To balance the overall magnitude of G, the factor K is designed to adaptively make less (more) adjustment in areas with higher (lower) magnitudes. The formulation of G is given as follows [29]:

G = K ◦ ∇I,    (4)
K = 1 + λe^(−|∇I|/σ).    (5)

Specifically, ∇I is amplified by the factor K, which decreases with the increment of the gradient magnitude. Note that this amplification factor makes less adjustment in areas with higher gradient magnitude, while areas with lower gradient magnitude are strongly enhanced, so that after the amplification the adjusted gradient G tends to have similar magnitude everywhere. Furthermore, λ controls the degree of the amplification, and σ controls the amplification rate of different gradients. In our experiments, the parameters λ and σ are both set to 10. For each observed image, the matrix G only needs to be calculated once.

Fig. 2(a) and (c) give an example of a pair of enhancement results without and with our contrast constraint term. Fig. 2(b) shows the result obtained by substituting the proposed term ||∇R − G||_F^2 with ||∇R||_F^2 while keeping the parameters unchanged. It can be observed that the structure details in the result with the proposed constraint are clearly revealed.

Fig. 2. Comparison of results using different constraints. (a) w/o constraint. (b) ||∇R||_F^2. (c) ||∇R − G||_F^2.

C. Alternative Decomposition

As stated previously, the existence of noise is inevitable in low-light images. Moreover, the noise observed in natural images is far more complicated than additive white Gaussian noise. Thus, instead of estimating the distribution of the noise, we here attempt to directly estimate a noise map from the input image. In order to explicitly estimate the noise map, we also present the following optimization problem:

argmin_{R,L,N} ||R ◦ L + N − I||_F^2 + β||∇L||_1 + ω||∇R − G||_F^2 + δ||N||_F^2,    (6)

where N is the estimated noise map, and the term ||N||_F^2 constrains the overall intensity of the noise. The fidelity term with a noise map is used to guarantee the accuracy of the model, which means that we expect the estimated illumination, reflectance, and noise map to accurately reconstruct the input image. As stated in Sec. IV-B.2, the parameters β and ω are significantly smaller than 1 to stress the importance of the fidelity term in the optimization.

To avoid amplifying the intensive noise of extremely low-light images, we also modify the formulation of the matrix G as follows:

G = K ◦ ∇Î,    (7)
K = 1 + λe^(−|∇Î|/σ),    (8)

where

∇Î = { 0, if |∇I| < ε; ∇I, otherwise }.    (9)

Different from the formulation in Eq. (4), small gradients (i.e., the noise) are suppressed before the amplification.

Fig. 3. Comparisons of results using different models. (a) Input; (b) results generated by the basic model (3); (c) results generated by the alternative model (6); (d) corresponding noise maps (normalized for visualization).

An example of results generated by the different models is shown in Fig. 3. As illustrated, the alternative model (6) efficiently extracts the noise from the input image and generates results with better visual quality. A short sketch of the construction of G in both variants follows.
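The guidance field of Eqs. (4)–(5) and its noise-suppressed variant of Eqs. (7)–(9) can be sketched as follows (again our own illustration; grad() is the helper from the previous sketch, and the default threshold ε is a placeholder, since its value is not specified in the text above):

```python
import numpy as np

def guidance(I, lam=10.0, sigma=10.0, eps=None):
    """G = K ◦ ∇I with K = 1 + λ·exp(−|∇I|/σ), Eqs. (4)-(5).
    If eps is given, gradients with |∇I| < eps are zeroed first, Eqs. (7)-(9)."""
    gx, gy = grad(I)                      # grad() as defined in the earlier sketch
    if eps is not None:                   # suppress small (noisy) gradients, Eq. (9)
        gx = np.where(np.abs(gx) < eps, 0.0, gx)
        gy = np.where(np.abs(gy) < eps, 0.0, gy)
    Kx = 1.0 + lam * np.exp(-np.abs(gx) / sigma)   # amplification factor, Eq. (5)/(8)
    Ky = 1.0 + lam * np.exp(-np.abs(gy) / sigma)
    return Kx * gx, Ky * gy               # G is computed once per observed image
```

Note that with I normalized to [0, 1] and λ = σ = 10, K ranges from about 10.05 (strong edges) down the scale to 11 (flat areas), decreasing with gradient magnitude exactly as described above.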
D. Solution

Optimization problems (3) and (6) are both non-convex due to R ◦ L. In this paper, we find the alternating direction minimization (ADM) technique efficient in solving the problem. Although ADM was first proposed for convex optimizations, there are recent works providing convergence guarantees of ADM for non-convex optimization problems [30], [31]. In practice, we observe that the algorithm converges with reasonable regularization parameters for all the test images (see Sec. IV-D for more details), which also confirms the effectiveness of ADM in our case.

In this subsection, we give the solution of problem (6). The solution of (3) can be obtained similarly. By substituting ∇L in the second term with an auxiliary variable T, the objective (6) can be rewritten in the following equivalent form:

argmin_{R,L,N,T} ||R ◦ L + N − I||_F^2 + β||T||_1 + δ||N||_F^2 + ω||∇R − G||_F^2,  s.t. T = ∇L.    (10)

By introducing a Lagrange multiplier Z to remove the equality constraint, we have the augmented Lagrangian function of (10):

L(R, L, N, T, Z) = ||R ◦ L + N − I||_F^2 + β||T||_1 + ω||∇R − G||_F^2 + δ||N||_F^2 + Φ(Z, ∇L − T),    (11)

where Φ(A, B) = ⟨A, B⟩ + (μ/2)||B||_F^2, ⟨·,·⟩ represents the matrix inner product, and μ is a positive scalar. The equivalent objective function can be solved by iteratively updating each variable while regarding the other variables, estimated in the previous iteration, as constants. Here we give the solutions for the k-th iteration of the sub-problems.

• R sub-problem: Neglecting the terms unrelated to R, we have the following optimization problem:

argmin_R ||L^(k) ◦ R + N^(k) − I||_F^2 + ω||∇R − G||_F^2.    (12)

We reformulate the first term to make the problem become a classic least squares problem:

argmin_r ||l̃^(k) r + n^(k) − i||_2^2 + ω||Dr − g||_2^2,    (13)

where l is the vectorized version of the matrix L and l̃ represents a diagonal matrix with l as its entries. The same notation is used for the other matrices (r, i, n, t, g, and z correspond to R, I, N, T, G, and Z, respectively). By differentiating (13) with respect to r and setting the derivative to 0, we have the following equation:

2(l̃^(k))^T (l̃^(k) r + n^(k) − i) + 2ωD^T (Dr − g) = 0
⇒ (f(l̃^(k)) + ω f(D)) r = l̃^(k) (i − n^(k)) + ωD^T g
⇒ r^(k+1) = (f(l̃^(k)) + ω f(D))^(−1) (l̃^(k) (i − n^(k)) + ωD^T g),    (14)

where D is the discrete gradient operator and f(x) = x^T x.

• L sub-problem: Collecting the terms related to L leads to the following problem:

argmin_L ||R^(k+1) ◦ L + N^(k) − I||_F^2 + Φ(Z^(k), ∇L − T^(k)).    (15)

Similar to the former derivation, we provide the solution of L:

l^(k+1) = (2 f(r̃^(k+1)) + μ^(k) f(D))^(−1) (2 r̃^(k+1) (i − n^(k)) + μ^(k) D^T (t^(k) − z^(k)/μ^(k))).    (16)

• N sub-problem: Fixing the variables other than N, the problem becomes:

argmin_N ||R^(k+1) ◦ L^(k+1) + N − I||_F^2 + δ||N||_F^2.    (17)

The closed form solution of this quadratic problem is given as:

N^(k+1) = (I − R^(k+1) ◦ L^(k+1)) / (1 + δ),    (18)

where the division is performed element-wise.

• T sub-problem: Dropping the terms without T gives the following problem:

argmin_T β||T||_1 + Φ(Z^(k), ∇L^(k+1) − T).    (19)

The solution of (19) can be obtained by performing a shrinkage operation:

T^(k+1) = S_{β/μ^(k)} (∇L^(k+1) + Z^(k)/μ^(k)),    (20)

where S_ε(x) = sign(x) max(|x| − ε, 0), in which the calculations are performed element-wise.

• Updating Z and μ: The auxiliary matrix Z and the penalty scalar μ are updated through:

Z^(k+1) = Z^(k) + μ^(k) (∇L^(k+1) − T^(k+1)),
μ^(k+1) = μ^(k) ρ,  ρ > 1.    (21)

The whole iteration is stopped only if the difference between R^(k) and R^(k+1) (or the difference between L^(k) and L^(k+1)) is smaller than a threshold, say 10^(−3) in practice, or if the maximal number of iterations is reached. The entire procedure of the solution to optimization problem (6) is summarized in Algorithm 1, which also includes our initializations of the different variables.

E. Illumination Adjustment

After the estimation of the illumination and reflectance components L and R, the final task is to adjust L to improve the visibility of the input image. In our work, Gamma correction with γ empirically set to 2.2 is applied to adjust the illumination, and the enhanced V channel image Î is generated by:

Î = R ◦ L̂,  L̂ = L^(1/γ).    (22)
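To make the update loop of Sec. III-D and the adjustment in Eq. (22) concrete, here is a compact NumPy/SciPy sketch of the alternating scheme (our own reconstruction, not the authors' MATLAB implementation; the initializations of R, L, N, T, Z and the values of μ and ρ below are assumptions, since Algorithm 1's boxed listing specifies the authors' actual choices). The operator D stacks horizontal and vertical forward differences, and guidance() refers to the sketch of Eqs. (4)–(9) above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def diff_matrix(h, w):
    """Sparse operator D stacking horizontal and vertical forward differences
    of a row-major-vectorized h-by-w image (zero gradient at the border)."""
    def d1(n):
        D = sp.diags([-np.ones(n), np.ones(n - 1)], [0, 1], format="lil")
        D[-1, :] = 0
        return D.tocsr()
    Dx = sp.kron(sp.eye(h), d1(w), format="csr")
    Dy = sp.kron(d1(h), sp.eye(w), format="csr")
    return sp.vstack([Dx, Dy], format="csr")

def shrink(x, eps):
    """Soft-thresholding S_eps(x) of Eq. (20)."""
    return np.sign(x) * np.maximum(np.abs(x) - eps, 0.0)

def robust_retinex(I, g, beta=0.05, omega=0.01, delta=1.0,
                   mu=1.0, rho=1.5, max_iter=10, tol=1e-3):
    """ADM loop for model (6) on a V-channel image I in [0,1].
    g is the stacked, vectorized guidance gradient of Eqs. (4)-(9)."""
    h, w = I.shape
    i = I.ravel()
    D = diff_matrix(h, w)
    DtD = (D.T @ D).tocsc()
    # Assumed initializations (Algorithm 1 lists the authors' own choices).
    l, r, n = i.copy(), np.ones_like(i), np.zeros_like(i)
    t = np.zeros(2 * i.size)
    z = np.zeros(2 * i.size)
    for _ in range(max_iter):
        r_old = r
        # R sub-problem, Eq. (14): (f(l~) + ω f(D)) r = l~(i−n) + ω D^T g
        r = spla.spsolve((sp.diags(l * l) + omega * DtD).tocsc(),
                         l * (i - n) + omega * (D.T @ g))
        # L sub-problem, Eq. (16): note μ D^T (t − z/μ) = D^T (μ t − z)
        l = spla.spsolve((2 * sp.diags(r * r) + mu * DtD).tocsc(),
                         2 * r * (i - n) + D.T @ (mu * t - z))
        # N sub-problem, Eq. (18)
        n = (i - r * l) / (1.0 + delta)
        # T sub-problem, Eq. (20)
        t = shrink(D @ l + z / mu, beta / mu)
        # Multiplier and penalty updates, Eq. (21)
        z = z + mu * (D @ l - t)
        mu = mu * rho
        if np.abs(r - r_old).max() < tol:   # stopping rule described above
            break
    R, L, N = (v.reshape(h, w) for v in (r, l, n))
    return R, L, N

def enhance_v_channel(R, L, gamma=2.2):
    """Illumination adjustment and recomposition of Eq. (22)."""
    return R * np.power(np.clip(L, 1e-6, None), 1.0 / gamma)
```

A full pipeline would convert S to HSV, build g = np.concatenate([Gx.ravel(), Gy.ravel()]) from guidance(I), run robust_retinex on the normalized V channel, apply enhance_v_channel, and convert back to RGB, as described in Sec. III-A.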
Please note that the illumination does not need a normalization before the Gamma correction, since the input V channel image I is already normalized to [0, 1]. Finally, the enhanced HSV image is transformed to RGB color space, and we have the final enhancement result Ŝ.

Algorithm 1: The Solution of Problem (10)

Fig. 4. 18 test images used in our experiments. They are denoted as 1 to 18.

IV. EXPERIMENTAL RESULTS

In this section, we evaluate the performance of the proposed method. First, we present our experiment settings. Then, we evaluate the proposed method by comparing it with state-of-the-art low-light image enhancement methods in both subjective and objective aspects. Noise suppression results are presented afterwards. Then, we conduct an extensive parameter study to evaluate the impact of the regularization parameters. Finally, we discuss the computational complexity of our method and provide experiments on other similar applications.

A. Experiment Settings

To fully evaluate the proposed method, we test it on images from various scenes. Test images come from the datasets provided by Wang et al. [11] and Guo et al. [13], the frontal face dataset [32], and the NASA image dataset [33]. Fig. 4 shows the 18 images tested in our experiments.

All experiments are conducted in MATLAB R2015b on a PC running the Windows 10 OS with 64GB RAM and a 3.5GHz CPU. In our experiments, if not specifically stated, the parameters β, ω, and δ are set to 0.05, 0.01, and 1, respectively. Parameters λ and σ in Eq. (4) are both set to 10. In most cases, these empirical settings generate decent results.

B. Low-Light Image Enhancement

1) Low-Light Image Enhancement With Less Noise: We compare the proposed method with several state-of-the-art methods, including histogram equalization (HE), the naturalness preserved enhancement algorithm (NPE) [11], PIE [23], SRIE [14], and low-light image enhancement via illumination map estimation (LIME) [13]. HE is performed by using the MATLAB built-in function histeq. The results of NPE, PIE, SRIE, and LIME are generated by the code downloaded from the authors' websites, with the recommended experiment settings.

a) Subjective comparisons: Figs. 5, 6, and 7 show several comparisons between enhancement results generated by different methods. Red arrows on these figures indicate noticeable artifacts. HE attempts to stretch the narrowly distributed histograms of low-light images in order to enhance the contrast. However, this method produces noticeable artifacts in flat regions, as the continuous values of adjacent pixels are stretched apart; for instance, the sky regions in image #18 (observed in Fig. 5) and image #13 (observed in Fig. 6). Our method, by contrast, can generate artifact-free images with a visually pleasing appearance.

As discussed previously, SRIE and PIE generate observable halo artifacts in some regions, such as the halo around the tower in image #13 (observed in Fig. 6). Also, SRIE and PIE cannot sufficiently improve the visibility of the input image, as can be observed at the bottom of image #18 (Fig. 5). In contrast, our method avoids halo artifacts and produces satisfying results in most cases.

NPE is designed to preserve the naturalness of images, and most of its results have vivid color. But some details in its results are lost, e.g., the textures on the lighthouse in image #13 (observed in Fig. 6) and the textures on the girl's dress in image #10 (shown in Fig. 7). In fact, among all the compared methods, only the proposed method successfully preserves these textures.

LIME shows impressive performance in lighting up dark regions. Nevertheless, this method can easily over-enhance regions with relatively high intensities, such as the dress in image #10 (Fig. 7) and the textures on the lighthouse in image #13 (Fig. 6). Comparatively, the proposed method produces more natural results, while successfully enhancing the visibility of low-light images.

Fig. 5. Comparisons of low-light image enhancement results for test image #18. Red arrows indicate artifacts or degradation. (a) Input. (b) HE. (c) LIME [13]. (d) NPE [11]. (e) PIE [23]. (f) SRIE [14]. (g) Proposed with model (3).

Fig. 6. Comparisons of low-light image enhancement results for test image #13. Red arrows indicate artifacts or degradation. (a) Input. (b) HE. (c) LIME [13]. (d) NPE [11]. (e) PIE [23]. (f) SRIE [14]. (g) Proposed with model (3).

b) Objective quality assessments: Besides subjective visual comparisons, we also apply objective measurements to evaluate the performance of the proposed method. Since assessing the quality of enhanced images is not a trivial task, we adopt three blind quality assessments, i.e., the no-reference image quality metric for contrast distortion (NIQMC) [34], the blind tone-mapped quality index (BTMQI) [35], and the no-reference free energy based robust metric (NFERM) [36], as well as a reference based quality assessment, i.e., the colorfulness-based patch-based contrast quality index (CPCQI) [37], to evaluate the enhancement results comprehensively. Fig. 8 shows the average NFERM, BTMQI, NIQMC, and CPCQI results of the input low-light images and the enhancement results generated by the aforementioned low-light image enhancement methods.
Fig. 7. Comparisons of low-light image enhancement results for test image #10. (a) Input. (b) HE. (c) LIME [13]. (d) NPE [11]. (e) PIE [23]. (f) SRIE [14]. (g) Proposed with model (3).

Fig. 8. Average NFERM, BTMQI, NIQMC, and CPCQI results for different methods on all 18 test images with model (3). (a) NFERM. (b) BTMQI. (c) NIQMC. (d) CPCQI.

For NFERM and BTMQI, smaller values represent better image quality. NFERM extracts features inspired by the free energy based brain theory and the classical human visual system to measure the distortion of the input image. BTMQI assesses image quality by measuring the average intensity, contrast, and structure information of tone-mapped images. From the figure, we notice that the proposed method achieves the lowest NFERM score, which means that our results are more similar to natural images and have less distortion. The average BTMQI value of the proposed method ranks 3rd among the compared methods. Although NPE and SRIE have lower BTMQI scores, their NFERM values are much larger than that of our method. As can be observed in the visual comparisons, some of the results produced by NPE do not look natural, e.g., image #18 in Fig. 5 and image #13 in Fig. 6; SRIE cannot fully light up the whole scene (Figs. 5, 9) and generates halo artifacts (Fig. 6).

For CPCQI and NIQMC, larger values indicate better image contrast quality. CPCQI evaluates the perceptual distortions between the enhanced image and the input image from three aspects: mean intensity, signal strength, and signal structure components. A CPCQI value < 1 means that the quality of the enhanced image is degraded rather than enhanced. As can be observed in the figure, the proposed method achieves the highest CPCQI score, which indicates that our method successfully enhances the overall quality of the image without introducing many artifacts. As for NIQMC, it assesses image quality by measuring local details and the global histogram of the given image, and it particularly favors images with higher contrast. It can be observed that HE and LIME have higher NIQMC scores. The reason is that HE and LIME over-enhance the input image; for example, the reflection on the window in image #10 (shown in Fig. 7) and the lighthouse in image #13 (observed in Fig. 6).

2) Noise Suppression: We evaluate the performance of our low-light image enhancement method under noisy cases using the alternative decomposition described in (6). In this case, noise also exists in channels other than the V channel. Thus, the input image is processed in RGB color space and the proposed method is applied to each channel. Parameters β and ω are both set to 0.01 for this task.
Fig. 9 shows some enhancement results of low-light images with intense noise. As can be observed in the figure, the noise hidden in very low-light conditions is really intense. Although HE, LIME, and NPE can sufficiently enhance the visibility of low-light images, they also amplify the intensive noise. PIE cannot light up the input images, and its results also contain noticeable noise. Our method presents satisfying performance in handling low-light images with intensive noise.

Fig. 9. Comparison of noise suppression. For each case, from left to right: the input images, results generated by HE, LIME, NPE, PIE, SRIE, and the proposed method with model (6). (a) Input. (b) HE. (c) LIME [13]. (d) NPE [11]. (e) PIE [23]. (f) SRIE [14]. (g) Proposed.

Fig. 10. Comparison of denoising results with the proposed method. (a) is the input image; (b)–(f) are enhancement results of HE, LIME [13], NPE [11], PIE [23], and SRIE [14] with a post-processing performed by BM3D with the denoising parameter σ = 30; (g) is the result obtained by the proposed method with model (6).

We also provide a comparison of the proposed method with the results of other methods post-processed by BM3D [5]. As shown in Fig. 10, the noise amplified by HE and NPE is not properly handled, and many false tiny structures are generated. LIME over-enhances the input image, especially in regions with higher illumination. Moreover, the denoising process inevitably blurs the whole image. By contrast, our result looks sharper and more natural.

To quantify the effectiveness of our method, we compare it with the competing methods (followed by BM3D) on 200 images from the Berkeley segmentation dataset (BSDS) [38]. We synthesize low-light images by first applying Gamma correction (with γ = 2.2) on images from BSDS, and then adding Poisson noise and white Gaussian noise to the Gamma corrected images. In our work, we use the MATLAB built-in function imnoise to generate the Poisson noise. For the Gaussian noise, we use σ = 5 to simulate the noise level in most natural low-light images. The average PSNR and SSIM results of the 200 images for the competing methods are listed in Table I, with the best result highlighted in bold. It can be observed that our method achieves the highest PSNR and SSIM values among the competing methods (a sketch of the synthesis step is given below).

TABLE I. AVERAGE PSNR AND SSIM RESULTS OF DIFFERENT ENHANCEMENT METHODS (FOLLOWED BY BM3D) ON 200 NATURAL COLOR IMAGES FROM BSDS [38].
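As promised above, the noisy low-light synthesis can be sketched as follows. This is a NumPy approximation of the MATLAB imnoise protocol (the exact Poisson scaling in imnoise may differ, and the peak value below is an assumption):

```python
import numpy as np

def synthesize_low_light(img, gamma=2.2, gauss_sigma=5.0, peak=255.0, seed=0):
    """Darken a clean [0,1] image by Gamma correction, then add Poisson noise
    and white Gaussian noise (sigma given on the 0-255 scale, as in the paper)."""
    rng = np.random.default_rng(seed)
    dark = np.power(np.clip(img, 0.0, 1.0), gamma)   # Gamma correction, γ = 2.2
    poisson = rng.poisson(dark * peak) / peak        # shot noise at an assumed peak
    noisy = poisson + rng.normal(0.0, gauss_sigma / 255.0, img.shape)
    return np.clip(noisy, 0.0, 1.0)
```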
3) Comparison of Different Models: Generally, our baseline model (3) is more suitable for images without much noise, while the alternative (6) is more effective in dealing with low-light images with noise. As can be observed in Fig. 11, compared with the baseline model, the model in (6) effectively removes most of the noise. However, for images with less noise, model (6) may slightly blur some of the tiny details, as shown in the fourth row of Fig. 11 (please observe the details of the roof, the wall, and the tree). We also compare the results quantitatively using the objective criteria NFERM, BTMQI, NIQMC, and CPCQI. For images with less noise (the 18 images shown in Fig. 4), the results generated by model (6) obtain averages of 15.60, 3.84, 5.14, and 0.98, which are inferior to those of the baseline model (10.70, 3.87, 5.14, and 1.13). For the 200 synthesized noisy images, the average PSNR and SSIM of the baseline model (followed by BM3D) are 18.14 and 0.4632, which are also inferior to those of the model in (6) (18.53 and 0.5097).

To summarize, for images with less noise, our baseline model (3) works fine; for low-light images with noise, the model (6) may be a better choice. Combining our models with noise detection/estimation methods and making automatic decisions about which model is optimal for an input image may be our next research topic.

Fig. 11. Comparisons of enhancement results generated by our baseline model (3) and the alternative model (6). (a) Input. (b) Model (3). (c) Model (6).

C. Parameter Study

In this section, we evaluate the effect of the regularization parameters. We first evaluate the impact of the parameters β and ω in the basic model (3). In Fig. 12, we give objective results obtained with different (β, ω) pairs on all the test images, where β ranges over 0.5, 0.05, and 0.005, and ω is selected from 0.1, 0.01, and 0.001. Please note again that lower NFERM and BTMQI values and higher NIQMC and CPCQI values represent better visual quality. As can be observed, the results with (0.5, 0.01), (0.5, 0.001), and (0.05, 0.01) have rather low NFERM values, and among them, (0.05, 0.01) has the lowest BTMQI value. From the NIQMC and CPCQI values, we can discover a certain pattern with respect to ω. (A small script for sweeping this grid is sketched below.)

Fig. 12. Average NFERM, BTMQI, NIQMC, and CPCQI results on all 18 test images using the proposed method (the baseline model) with different regularization parameters.

Fig. 13. Convergence speed on image #10 with different regularization parameters using our baseline model.
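Sweeping this 3 × 3 grid is straightforward to script. The sketch below reuses the earlier robust_retinex/guidance sketches, with the model-(6) solver standing in for the baseline (a large δ keeps the noise map near zero); the quality metric is supplied by the caller, since NFERM, BTMQI, NIQMC, and CPCQI are separately published models:

```python
import numpy as np
from itertools import product

def sweep_beta_omega(images, metric):
    """Average a user-supplied quality metric over the 3x3 grid of (beta, omega)
    pairs studied in Sec. IV-C; images are V-channel arrays in [0,1]."""
    grid = list(product([0.5, 0.05, 0.005], [0.1, 0.01, 0.001]))
    scores = {}
    for beta, omega in grid:
        vals = []
        for I in images:
            Gx, Gy = guidance(I)                     # Eqs. (4)-(5)
            g = np.concatenate([Gx.ravel(), Gy.ravel()])
            # delta=1e6 drives N toward zero, approximating baseline model (3)
            R, L, _ = robust_retinex(I, g, beta=beta, omega=omega, delta=1e6)
            vals.append(metric(enhance_v_channel(R, L)))
        scores[(beta, omega)] = float(np.mean(vals))
    return scores
```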
Fig. 13 plots nine curves, representing the different convergence speeds using different (β, ω) pairs on image #10. From the curves, we can see that most settings converge within 10 iterations, despite some bumps appearing for (0.5, 0.01) and (0.5, 0.001).

Fig. 14 demonstrates subjective comparisons of the normalized illumination, reflectance, and enhanced results with different (β, ω) pairs on image #10. As can be observed, the illumination maps become smoother as β increases, since a larger β strengthens the ℓ1 norm constraint. The details of the estimated reflectance maps are strengthened as ω increases, since a larger ω requires the gradient of R to be more similar to the adjusted gradient G. In our experiments, we use (0.05, 0.01) as our default setting.

Fig. 14. Examples of the impact of (β, ω) pairs. First row: estimated illumination maps. Second row: estimated reflectance maps. Third row: enhanced images. Default settings are highlighted in bold. The baseline model is used in this case.

As for the parameter δ in model (6), we vary its value over 0.1, 1, and 10 and demonstrate the results in Fig. 15. From the figure, we can see that a smaller δ over-smooths the result and a larger δ preserves too much noise, which is reasonable since the parameter δ constrains the strength of the noise map.

Fig. 15. Examples of the parameter impact of δ. First row: estimated noise maps. Second row: enhanced images. Our model (6) is used in this case. (a) δ = 0.1. (b) δ = 1. (c) δ = 10.

We further study one parameter at a time in Fig. 16. From the figure, we notice several interesting things. First, both NFERM and BTMQI prefer an intermediate ω. Second, NIQMC (which favors higher contrast) always prefers larger parameters. This is consistent with the following observations: a larger β generates a smoother illumination, which leads to enhancement results with higher contrast (observed in the first and the last columns of Fig. 14); a larger ω strengthens the gradients of the enhancement results, which generates results with higher contrast (observed in the third and the fourth columns of Fig. 14); and compared with a smaller δ that smooths out most of the noise, a larger δ also leads to higher contrast (observed in Fig. 15). Third, in Fig. 16(c), the CPCQI scores are all lower than 1, indicating that the alternative model is not suitable for images with less noise (observed in the fourth row of Fig. 11). Fourth, although all the assessment metrics indicate a larger δ in Fig. 16(c), we find that a large δ cannot effectively deal with noise (observed in Fig. 15), which is reasonable since δ constrains the intensity of the noise map.

Fig. 16. The effect of different regularization parameters on average NFERM, BTMQI, NIQMC, and CPCQI results using the proposed method for all 18 test images. (a) and (b) are evaluated on the baseline model and (c) on the alternative model. We fix β = 0.05 in (a), ω = 0.01 in (b), and β = 0.05, ω = 0.01 in (c).

D. Computational Complexity and Convergence Speed

For an image of size 600 × 400, HE, LIME, NPE, PIE, SRIE, and the proposed method with the baseline model require about 0.04, 0.41, 10.36, 1.49, 7.64, and 15.67 seconds, respectively. The alternative model in (6) processes each channel of the input image in RGB color space and takes three times as much time as the baseline model. Although the proposed method needs more time, our results are the best in terms of both objective and subjective aspects. Also, it should be noted that since our method is implemented in MATLAB and not well optimized, it could be further accelerated by adopting the fast Fourier transform (FFT) and implementing the code in C/C++. Fig. 17 plots the convergence curves for all the 18 test images and gives an intuitive example of the convergence speed of the proposed method. From the curves, we can see that the algorithm converges within 15 iterations for all the 18 test images. In our experiments, we find that setting the maximum number of iterations to 10 is sufficient to generate satisfying results.

E. Other Applications
It is worth mentioning that, besides low-light image enhancement, the proposed model can also be applied to a series of degraded images with minor modification. For instance, images captured underwater, haze/fog/smoke images, images taken in dusty weather, and remote sensing images can all be enhanced by the proposed method. The formation of haze images is described as follows [39]–[42]:

I = T ◦ J + A ◦ (1 − T),    (23)

where J is the scene radiance (i.e., the desired image with high visibility), A is the global atmospheric light, and T is the transmission. In our work, we use the same model for all kinds of degraded images mentioned above. Regarding the inverted illumination component 1 − L obtained by our method as the transmission T, and setting the global atmospheric light A to a constant 1, the desired image J can be easily recovered as follows:

J = (I − L) / max(1 − L, t0).    (24)

The lower bound t0 of the transmission is set to 0.1 as suggested by [42]. In our experiments, to further increase the contrast, J is multiplied by the reflectance map R estimated by our method to generate the final result. (A code sketch of Eqs. (23)–(26) is given at the end of this section.)

Fig. 17. Convergence curves of the 18 test images with model (3).

Specifically, since images taken in dusty weather and underwater have severe color cast problems, these images are first processed by a simple color correction scheme mentioned in [43] and [44] and then fed to our method. The color corrected image I_CR is calculated by

I_CR^c = (I^c − I_min^c) / (I_max^c − I_min^c),  c ∈ {R, G, B},    (25)

where

I_max^c = mean(I^c) + var(I^c),
I_min^c = mean(I^c) − var(I^c),    (26)

mean(I^c) is the mean value of I^c, and var(I^c) denotes the variance of I^c. Figs. 18, 19, and 20 give several enhancement results.

Fig. 18 presents several examples of underwater image enhancement. Test images and the source code of [43] come from the authors' website. The specialized underwater image enhancement method [43] utilizes the Retinex model to decompose the input image. The decomposed reflectance is enhanced by CLAHE and the illumination is enhanced by histogram specification. The final result is obtained by combining the two enhanced components by pixel-wise multiplication. As illustrated, compared to the specialized method, our method presents visually appealing results with higher contrast.

Fig. 18. Comparison of enhancement results of underwater images. From left to right: observed images, results by a specialized method [43] and the proposed method with model (6). (a) Input. (b) Results by [43]. (c) Proposed.

Fig. 19 shows some smoke removal/dehazing results. Test images come from the NASA image dataset [33]. Our method is compared with the classic dehazing method [42]. He et al. [42] noticed that a major difference between haze-free outdoor images and haze images is that the minimum intensities in each channel of a haze image tend to have higher values than in haze-free images. Thus, they proposed the dark channel prior and used it to estimate the transmission map. From the figure, we can see that the method proposed in [42] fails to look through the thick smoke in the first test image, while our method successfully removes most of the smoke. The proposed method also produces higher contrast.

Fig. 19. Comparison of enhancement results of two hazy images taken through thick smoke and a very typical image taken from Earth orbit with low contrast and dark areas. From left to right: observed images, results by the classic dehazing method [42] and the proposed method with model (6). (a) Input. (b) Results by [42]. (c) Proposed.
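As promised above, here is a compact sketch of Eqs. (23)–(26) (our own illustration; L and R denote the illumination and reflectance estimated from the degraded image by the decomposition of Sec. III, e.g., via the robust_retinex sketch, applied per channel or on the V channel as appropriate):

```python
import numpy as np

def color_correct(S):
    """Simple per-channel stretch of Eqs. (25)-(26) for color-cast images."""
    out = np.empty_like(S)
    for c in range(3):
        I = S[..., c]
        hi = I.mean() + I.var()          # I_max^c, Eq. (26)
        lo = I.mean() - I.var()          # I_min^c, Eq. (26)
        out[..., c] = np.clip((I - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out

def recover_radiance(I, L, R, t0=0.1):
    """Eq. (24): J = (I - L) / max(1 - L, t0), with T = 1 - L and A = 1,
    then multiplied by R for extra contrast, as described in the text."""
    J = (I - L) / np.maximum(1.0 - L, t0)
    return np.clip(J * R, 0.0, 1.0)
```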
Enhancement results of images taken in dusty weather are illustrated in Fig. 20. Test images and the source code of [44] are downloaded from the author's website. The specialized method takes different derivatives (generated by Gamma correction with different γ) of the original image as input. Three weight maps (sharpness, chromatic, and prominence maps) calculated from each derivative are summed and normalized to obtain the final weight map, which is then used to fuse the corresponding derivatives into the final result. As shown in the figure, the results by [44] still look like images with haze, while our method produces images with better visibility.

Fig. 20. Comparison of enhancement results of images taken in dusty weather. From left to right: observed images, results by a specialized method [44] and the proposed method with model (6). (a) Input. (b) Results by [44]. (c) Proposed.

V. CONCLUSION

Low-light enhancement methods using the classic Retinex model often fail to deal with the noise that inevitably exists in such conditions. In this paper, we present the robust Retinex model, which adds a noise term, to handle low-light image enhancement in the case of intensive noise. Moreover, we impose novel regularization terms in our optimization problem for both the illumination and the reflectance, to jointly estimate a piece-wise smoothed illumination and a structure-revealed reflectance. An ADM-based algorithm is provided to solve the optimization problem. In addition to low-light image enhancement, our method is also suitable for other similar tasks, such as image enhancement for underwater or remote sensing images, and in hazy or dusty conditions. Future work includes accelerating our method and generalizing it to video enhancement. Automatically deciding which model would be optimal for an input image is also an appealing topic.

REFERENCES

[1] S. M. Pizer, R. E. Johnston, J. P. Ericksen, B. C. Yankaskas, and K. E. Muller, "Contrast-limited adaptive histogram equalization: Speed and effectiveness," in Proc. 1st Conf. Vis. Biomed. Comput., May 1990, pp. 337–345.
[2] M. Abdullah-Al-Wadud, M. H. Kabir, M. A. A. Dewan, and O. Chae, "A dynamic histogram equalization for image contrast enhancement," IEEE Trans. Consum. Electron., vol. 53, no. 2, pp. 593–600, May 2007.
[3] L. Li, R. Wang, W. Wang, and W. Gao, "A low-light image enhancement method for both denoising and contrast enlarging," in Proc. IEEE Int. Conf. Image Process., Sep. 2015, pp. 3730–3734.
[4] X. Zhang, P. Shen, L. Luo, L. Zhang, and J. Song, "Enhancement and noise reduction of very low light level images," in Proc. 21st Int. Conf. Pattern Recognit. (ICPR), Nov. 2012, pp. 2034–2037.
[5] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Trans. Image Process., vol. 16, no. 8, pp. 2080–2095, Aug. 2007.
[6] J. Yang, X. Jiang, C. Pan, and C.-L. Liu, "Enhancement of low light level images with coupled dictionary learning," in Proc. IEEE 23rd Int. Conf. Pattern Recognit., Dec. 2016, pp. 751–756.
[7] K. G. Lore, A. Akintayo, and S. Sarkar, "LLNet: A deep autoencoder approach to natural low-light image enhancement," Pattern Recognit., vol. 61, pp. 650–662, Jan. 2017.
[8] E. H. Land, "The retinex theory of color vision," Sci. Amer., vol. 237, no. 6, pp. 108–129, 1977.
[9] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, "Properties and performance of a center/surround retinex," IEEE Trans. Image Process., vol. 6, no. 3, pp. 451–462, Mar. 1997.
[10] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, "A multiscale retinex for bridging the gap between color images and the human observation of scenes," IEEE Trans. Image Process., vol. 6, no. 7, pp. 965–976, Jul. 1997.
[11] S. Wang, J. Zheng, H.-M. Hu, and B. Li, "Naturalness preserved enhancement algorithm for non-uniform illumination images," IEEE Trans. Image Process., vol. 22, no. 9, pp. 3538–3548, Sep. 2013.
[12] X. Fu, D. Zeng, Y. Huang, Y. Liao, X. Ding, and J. Paisley, "A fusion-based enhancing method for weakly illuminated images," Signal Process., vol. 129, pp. 82–96, Dec. 2016.
[13] X. Guo, Y. Li, and H. Ling, "LIME: Low-light image enhancement via illumination map estimation," IEEE Trans. Image Process., vol. 26, no. 2, pp. 982–993, Feb. 2017.
[14] X. Fu, D. Zeng, Y. Huang, X.-P. Zhang, and X. Ding, "A weighted variational model for simultaneous reflectance and illumination estimation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 2782–2790.
[15] E. Provenzi, L. De Carli, A. Rizzi, and D. Marini, "Mathematical definition and analysis of the retinex algorithm," J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 22, no. 12, pp. 2613–2621, 2005.
[16] R. Grosse, M. K. Johnson, E. H. Adelson, and W. T. Freeman, "Ground truth dataset and baseline evaluations for intrinsic image algorithms," in Proc. IEEE Int. Conf. Comput. Vis., Sep./Oct. 2009, pp. 2335–2342.
[17] Q. Chen and V. Koltun, "A simple model for intrinsic image decomposition with depth cues," in Proc. IEEE Int. Conf. Comput. Vis., Dec. 2013, pp. 241–248.
[18] P.-Y. Laffont, A. Bousseau, and G. Drettakis, "Rich intrinsic image decomposition of outdoor scenes from multiple views," IEEE Trans. Vis. Comput. Graph., vol. 19, no. 2, pp. 210–224, Feb. 2013.
[19] S. Bell, K. Bala, and N. Snavely, "Intrinsic images in the wild," ACM Trans. Graph., vol. 33, no. 4, p. 159, 2014.
[20] A. Meka, M. Zollhöfer, C. Richardt, and C. Theobalt, "Live intrinsic video," ACM Trans. Graph., vol. 35, no. 4, p. 109, 2016.
[21] J. T. Barron and J. Malik, "Color constancy, intrinsic images, and shape estimation," in Proc. 12th Eur. Conf. Comput. Vis., 2012, pp. 57–70.
[22] Y. Li and M. S. Brown, "Single image layer separation using relative smoothness," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2014, pp. 2752–2759.
[23] X. Fu, Y. Liao, D. Zeng, Y. Huang, X. Zhang, and X. Ding, "A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation," IEEE Trans. Image Process., vol. 24, no. 12, pp. 4965–4977, Dec. 2015.
[24] M. Elad, "Retinex by two bilateral filters," in Proc. 5th Int. Conf. Scale Space PDE Methods Comput. Vis., 2005, pp. 217–229.
[25] W.-J. Li, B. Gu, J.-T. Huang, and M.-H. Wang, "Novel retinex algorithm by interpolation and adaptive noise suppression," J. Central South Univ., vol. 19, no. 9, pp. 2541–2547, 2012.
[26] X. Yu, X. Luo, G. Lyu, and S. Luo, "A novel retinex based enhancement algorithm considering noise," in Proc. 16th Int. Conf. Comput. Inf. Sci. (ICIS), May 2017, pp. 649–654.
[27] Z. Farbman, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Trans. Graph., vol. 27, no. 3, p. 67, Aug. 2008.
[28] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1397–1409, Jun. 2013.
[29] W. Chao and Y. Zhong-Fu, "Variational enhancement for infrared images," J. Infr. Millim. Waves, vol. 25, no. 4, pp. 306–310, 2006.
[30] Y. Wang, W. Yin, and J. Zeng, "Global convergence of ADMM in nonconvex nonsmooth optimization," Dept. Comput. Appl. Math., Univ. California, Los Angeles, CA, USA, Tech. Rep. 62, 2015, vol. 15.
[31] Y. Xu, W. Yin, Z. Wen, and Y. Zhang, "An alternating direction algorithm for matrix completion with nonnegative factors," Frontiers Math. China, vol. 7, no. 2, pp. 365–384, 2012.
[32] Image Datasets of the Computational Vision Group at CALTECH. [Online]. Available: http://www.vision.caltech.edu/archive.html
[33] NASA. (2001). Retinex Image Processing. [Online]. Available: https://dragon.larc.nasa.gov/retinex/pao/news
[34] K. Gu, W. Lin, G. Zhai, X. Yang, W. Zhang, and C. W. Chen, "No-reference quality metric of contrast-distorted images based on information maximization," IEEE Trans. Cybern., vol. 47, no. 12, pp. 4559–4565, Dec. 2017.
[35] K. Gu et al., "Blind quality assessment of tone-mapped images via analysis of information, naturalness, and structure," IEEE Trans. Multimedia, vol. 18, no. 3, pp. 432–443, Mar. 2016.
[36] K. Gu, G. Zhai, X. Yang, and W. Zhang, "Using free energy principle for blind image quality assessment," IEEE Trans. Multimedia, vol. 17, no. 1, pp. 50–63, Jan. 2015.
[37] K. Gu, D. Tao, J.-F. Qiao, and W. Lin, "Learning a no-reference quality assessment model of enhanced images with big data," IEEE Trans. Neural Netw. Learn. Syst., to be published. [Online]. Available: http://ieeexplore.ieee.org/document/7872424/
[38] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proc. 8th IEEE Int. Conf. Comput. Vis. (ICCV), vol. 2, Jul. 2001, pp. 416–423.
[39] L. Kratz and K. Nishino, "Factorizing scene albedo and depth from a single foggy image," in Proc. IEEE 12th Int. Conf. Comput. Vis., Sep./Oct. 2009, pp. 1701–1708.
[40] G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, "Efficient image dehazing with boundary constraint and contextual regularization," in Proc. IEEE Int. Conf. Comput. Vis., Dec. 2013, pp. 617–624.
[41] J.-P. Tarel and N. Hautiere, "Fast visibility restoration from a single color or gray level image," in Proc. IEEE 12th Int. Conf. Comput. Vis. (ICCV), Sep. 2009, pp. 2201–2208.
[42] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., Miami, FL, USA, Jun. 2009, pp. 1956–1963.
[43] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proc. IEEE Int. Conf. Image Process., Oct. 2014, pp. 4572–4576.
[44] X. Fu, Y. Huang, D. Zeng, X.-P. Zhang, and X. Ding, "A fusion-based enhancing approach for single sandstorm image," in Proc. IEEE Int. Workshop Multimedia Signal Process., Sep. 2014, pp. 1–5.

Mading Li received the B.S. degree in computer science from Peking University in 2013, where he is currently pursuing the Ph.D. degree with the Institute of Computer Science and Technology, advised by Z. Guo and J. Liu. He was a Visiting Scholar with McMaster University in 2016. His current research interests include image and video processing, image interpolation, image restoration, and low-light image enhancement.

Jiaying Liu (S'08–M'10–SM'17) received the B.E. degree in computer science from Northwestern Polytechnic University, Xi'an, China, in 2005, and the Ph.D. degree (Hons.) in computer science from Peking University, Beijing, China, in 2010. She is currently an Associate Professor with the Institute of Computer Science and Technology, Peking University. She has authored over 100 technical articles in refereed journals and proceedings and holds 24 granted patents. Her current research interests include image/video processing, compression, and computer vision. Dr. Liu was a Visiting Scholar with the University of Southern California, Los Angeles, CA, USA, from 2007 to 2008. In 2015, she was a Visiting Researcher with Microsoft Research Asia, supported by Star Track for Young Faculties. She served as a TC Member of the IEEE CAS MSA and APSIPA IVM, and an APSIPA Distinguished Lecturer from 2016 to 2017. She is a Senior Member of CCF.

Wenhan Yang received the B.S. degree in computer science from Peking University, Beijing, China, in 2012, where he is currently pursuing the Ph.D. degree with the Institute of Computer Science and Technology. He was a Visiting Scholar with the National University of Singapore from 2015 to 2016. His current research interests include image processing, sparse representation, image restoration, and deep-learning-based image processing.

Xiaoyan Sun received the B.S., M.S., and Ph.D. degrees in computer science from the Harbin Institute of Technology, Harbin, China, in 1997, 1999, and 2003, respectively. She was an Intern with Microsoft Research Asia from 2000 to 2003 and joined Microsoft Research Asia in 2003, where she is currently a Lead Researcher, focusing on video analysis, image restoration, and image/video coding. She is also an Adjunct Professor (a Ph.D. Supervisor) with the University of Science and Technology of China. She has authored or co-authored over 100 papers in journals and conferences, holds ten proposals to standards with one accepted, and holds over ten granted U.S. patents. Her current research interests include computer vision, image/video processing, and machine learning. She is a TC Member of IEEE Multimedia Systems and Applications. She received the Best Paper Award of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY in 2009 and the Best Student Paper Award of VCIP in 2016. She is an AE of the Signal Processing: Image Communication journal and of the IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, and has served as a session chair, area chair, and TC co-chair for several international conferences.

Zongming Guo (M'09) received the B.S. degree in mathematics and the M.S. and Ph.D. degrees in computer science from Peking University, Beijing, China, in 1987, 1990, and 1994, respectively. He is currently a Professor with the Institute of Computer Science and Technology, Peking University. His current research interests include video coding, processing, and communication. Dr. Guo is an Executive Member of the China Society of Motion Picture and Television Engineers. He was a recipient of the First Prize of the State Administration of Radio Film and Television Award in 2004, the First Prize of the Ministry of Education Science and Technology Progress Award in 2006, the Second Prize of the National Science and Technology Award in 2007, and the Wang Xuan News Technology Award and the Chia Tai Teaching Award in 2008. He received the Government Allowance granted by the State Council in 2009, and the Distinguished Doctoral Dissertation Advisor Award of Peking University in 2012 and 2013.
