Research On Image Enhancement Algorithm Based On Artificial Intelligence
Abstract. With the continuous development of science and technology, people's requirements for image quality are ever higher. This paper integrates artificial intelligence technology and proposes a low-illuminance panoramic image enhancement algorithm based on simulated multi-exposure fusion. First, the information content of the image is used as a metric to estimate the optimal exposure ratio, and a brightness mapping function is used to enhance the V component. The low-illuminance image and the overexposed image are then taken as inputs, and a medium-exposure image is synthesized by exposure interpolation. The low-illuminance image, the medium-exposure image and the overexposed image are merged using a multi-scale fusion strategy to obtain the fused image, whose details are then corrected by a multi-scale detail enhancement algorithm to yield the final enhanced image. Experiments show that the algorithm can effectively improve image quality.
1. Introduction
With the continuous development of science and technology, people have ever higher requirements for image quality [1-3]. Because of low-light environments and limited camera equipment, images suffer from problems such as low brightness, low contrast, high noise and color distortion. These problems not only degrade the aesthetics of the image and the human visual experience, but also reduce the performance of visual tasks designed for normally lit images. In order to effectively improve the quality of low-light images, scholars have proposed many low-light image enhancement algorithms, which have progressed through three stages: grayscale transformation, Retinex (retinal cortex) theory, and deep neural networks [4-7]. Panoramic images collected under poor lighting conditions, such as at night or in shadow, deteriorate in quality, which mainly manifests as low overall brightness, low contrast and color deviation. This seriously harms the visual effect of the panoramic image and brings certain difficulties to subsequent computer vision processing tasks (such as image segmentation, target tracking and target recognition). Research on low-light panoramic image enhancement algorithms is therefore of great significance to the field of machine vision [8-10].
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution
of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Published under licence by IOP Publishing Ltd
ITBDE 2021 IOP Publishing
Journal of Physics: Conference Series 2074 (2021) 012024 doi:10.1088/1742-6596/2074/1/012024
In this paper, artificial intelligence technology is integrated and algorithm research is carried out for the needs of image enhancement: a low-illumination panoramic image enhancement algorithm based on simulated multi-exposure fusion is proposed to improve image quality.
$l_{rcon} = \sum_{i \in \{low,\, normal\}} W_1 \left\| R_{low} \circ I_i - S_i \right\|_1 + \sum_{j \in \{low,\, normal\}} W_2 \left\| R_{normal} \circ I_j - S_j \right\|_1$ (2)
where ∘ represents the pixel-by-pixel multiplication operation. When i is low or j is normal, the weight coefficient is W1 = W2 = 1; otherwise W1 = W2 = 0.001. For paired images, using larger weights lets the decomposition network better learn the features of the paired images.
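As a minimal sketch, the weighted reconstruction term of equation (2) can be written in NumPy as follows; the function name, the list-based interface, and the use of a per-pixel mean for the L1 norm are illustrative assumptions rather than the paper's exact implementation:

```python
import numpy as np

def recon_loss(R, I_list, S_list, paired_idx, w_paired=1.0, w_unpaired=0.001):
    """Weighted L1 reconstruction loss for one reflectance map R, following
    Eq. (2): sum_i W * || R ∘ I_i - S_i ||_1, where W is large only for the
    illumination/source pair that matches R and small (0.001) otherwise."""
    loss = 0.0
    for i, (I, S) in enumerate(zip(I_list, S_list)):
        w = w_paired if i == paired_idx else w_unpaired
        loss += w * np.abs(R * I - S).mean()  # R * I is the pixel-wise product
    return loss
```

With a perfect decomposition (R ∘ I exactly reproduces S for every pair) the loss is zero, and any residual in the paired term is weighted 1000 times more heavily than in the unpaired terms.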
The constant reflectance loss lR is based on the color constancy of Retinex theory. It is mainly used
in the decomposition network to constrain the consistent reflectance of different illumination images:
$l_R = \left\| R_{low} - R_{normal} \right\|_1$ (3)
For the illumination smoothing loss lI, this paper adopts a structure-aware smoothing loss. This loss uses the gradient of the reflectance component as a weight: in areas where the image gradient changes sharply, the illumination is allowed to become discontinuous, so that an illumination map with smooth brightness can still retain the structural information of the image.
and the Sigmoid function in turn, and the result is finally multiplied element by element with βi and channel-concatenated with αi. In this propagation process, the attention mechanism fuses image information at different scales while reducing the response of irrelevant features, enhancing the network's ability to adjust brightness.
Independently of the constraint losses of the decomposition network, the enhancement network adjusts the degree of illumination based on the assumptions of local consistency and structure perception. In addition to the losses that constrain the enhancement network in Retinex-Net, the algorithm in this paper adds a color loss to address the color deviation that occurs in Retinex-Net. The enhancement network loss is therefore:
$L = L_{rcon} + L_I + \mu L_c$ (6)
Among them, Lrcon is the reconstruction loss of the enhanced image,
$L_{rcon} = \left\| S_{normal} - R_{low} \circ I_{en} \right\|_1$ (7)
LI represents the structure-aware smoothing loss, Lc represents the color loss of this paper, and μ represents the balance coefficient. Lrcon measures the distance between the enhanced image and its corresponding normal-illumination image. The structure-aware smoothing loss LI is similar to the smoothing loss of the decomposition network; the difference is that in the enhancement network, Ien uses the gradient of Rlow as the weight coefficient. The color loss is defined as:
$L_c = \left\| F(S_{en}) - F(S_{normal}) \right\|_1^2$ (9)
where F(x) represents the Gaussian blur operation and x represents the image to be blurred. This operation can be understood as replacing each pixel of the image with the average of its neighboring pixels under normally distributed weights, so as to achieve the blur effect. Sen is the enhanced image and Snormal is the corresponding normal-illumination image.
$F(x)(i, j) = \sum_{k, l} x(i + k, j + l)\, G(k, l)$ (10)
G(k,l) represents the weight coefficient that obeys the normal distribution. In the convolutional
network, G(k,l) is equivalent to a fixed-size convolution kernel.
$G(k, l) = 0.053 \exp\left( -\dfrac{k^2 + l^2}{6} \right)$ (11)
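The kernel of equation (11), the filtering of equation (10), and the color loss of equation (9) can be sketched in NumPy as follows; the kernel radius, the edge padding, and the exact norm used for the color loss are assumptions made for the sketch:

```python
import numpy as np

def gauss_kernel(radius):
    """G(k, l) = 0.053 * exp(-(k^2 + l^2) / 6) from Eq. (11); since
    0.053 ≈ 1/(6π), the weights sum to roughly 1 for a large enough radius."""
    k = np.arange(-radius, radius + 1)
    kk, ll = np.meshgrid(k, k, indexing="ij")
    return 0.053 * np.exp(-(kk**2 + ll**2) / 6.0)

def gauss_blur(x, radius=5):
    """F(x)(i, j) = sum_{k,l} x(i + k, j + l) * G(k, l) from Eq. (10),
    implemented here with edge padding at the image borders."""
    G = gauss_kernel(radius)
    rows, cols = x.shape
    p = np.pad(x, radius, mode="edge")
    out = np.zeros((rows, cols))
    for dk in range(2 * radius + 1):
        for dl in range(2 * radius + 1):
            out += G[dk, dl] * p[dk:dk + rows, dl:dl + cols]
    return out

def color_loss(S_en, S_normal):
    # L_c = || F(S_en) - F(S_normal) ||_1^2, Eq. (9) as reconstructed here
    return np.abs(gauss_blur(S_en) - gauss_blur(S_normal)).sum() ** 2
```

Because the blur removes high-frequency detail before the comparison, this loss penalizes global color and brightness deviations rather than per-pixel texture differences.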
3. Multi-scale fusion
In order to obtain a better image enhancement effect, this paper uses a multi-scale fusion strategy to fuse the low-light image, the medium-exposure image and the over-exposure image. The fusion framework can be expressed as:
$I_l(x, y) = \sum_{k=1}^{K} Y_l\{\bar{W}^k(x, y)\}\, L_l\{E^k(x, y)\}$ (12)

In the formula, $Y_l\{\cdot\}$ and $L_l\{\cdot\}$ respectively denote the $l$th layer of the Gaussian pyramid and of the Laplacian pyramid, $\bar{W}^k = W^k / \sum_{k=1}^{K} W^k$ is the normalized weight, and E1, E2 and E3 are the low-illuminance image, the medium-exposure image and the overexposed image, respectively. A large number of experiments in different scenes show that a 5-layer pyramid decomposition usually achieves the best results, so the number of pyramid layers is set to 5 in this paper.
This paper uses the above method to generate the medium-exposure and overexposed images from a single low-illuminance image.
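The fusion of equation (12) can be sketched as follows; the 3-tap smoothing kernel, the nearest-neighbor upsampling, and all function names are simplifying assumptions standing in for a full Gaussian-Laplacian pyramid implementation:

```python
import numpy as np

def _blur(x):
    # separable [1/4, 1/2, 1/4] smoothing with edge padding
    p = np.pad(x, 1, mode="edge")
    v = 0.25 * p[:-2, 1:-1] + 0.5 * p[1:-1, 1:-1] + 0.25 * p[2:, 1:-1]
    q = np.pad(v, ((0, 0), (1, 1)), mode="edge")
    return 0.25 * q[:, :-2] + 0.5 * q[:, 1:-1] + 0.25 * q[:, 2:]

def _up(x, shape):
    # upsample to the target shape, then smooth
    u = x.repeat(2, axis=0).repeat(2, axis=1)[:shape[0], :shape[1]]
    return _blur(u)

def fuse(images, weights, levels=5):
    """Multi-scale fusion of Eq. (12): blend the Laplacian pyramids of the
    exposures E^k using Gaussian pyramids of the normalized weights W^k."""
    W = np.stack(weights)
    W = W / W.sum(axis=0)                 # pixel-wise weight normalization
    gw = [[w] for w in W]                 # Gaussian pyramids of the weights
    lp = []                               # Laplacian pyramids of the images
    for img in images:
        g = [np.asarray(img, dtype=float)]
        for _ in range(levels - 1):
            g.append(_blur(g[-1])[::2, ::2])
        lp.append([g[i] - _up(g[i + 1], g[i].shape)
                   for i in range(levels - 1)] + [g[-1]])
    for pyr in gw:
        for _ in range(levels - 1):
            pyr.append(_blur(pyr[-1])[::2, ::2])
    fused = [sum(gw[k][l] * lp[k][l] for k in range(len(images)))
             for l in range(levels)]
    out = fused[-1]
    for l in range(levels - 2, -1, -1):   # collapse the fused pyramid
        out = _up(out, fused[l].shape) + fused[l]
    return out
```

A useful sanity check of the scheme: because the weights are normalized pixel-wise and the pyramid operations are linear, fusing three identical images reconstructs the original image.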
For the low-illuminance image E1, we hope to effectively enhance its poorly exposed areas while retaining its well-exposed areas. Compared with E1 and E2, the over-exposed image E3 loses image detail but reveals more effective image content. For this reason, this paper uses a Sigmoid function of the illumination component to set the weights of E1 and E3. A large amount of statistical data shows that the pixel value distribution of a well-exposed image approximately follows a Gaussian distribution with mean 0.5 and standard deviation 0.25, so the Gaussian distribution function is used to set the weight of the medium-exposure image E2. To balance the Gaussian distribution function and the Sigmoid function, this paper proposes an improved brightness weight function, defined as follows:
$W_1 = \dfrac{1}{1 + e^{-(6 L_1 - 3)}}$ (13)

$W_2 = 0.6\, e^{-\frac{(L_2 - 0.5)^2}{2 \times 0.25^2}}$ (14)

$W_3 = \dfrac{1}{1 + e^{\,6 L_3 - 3}}$ (15)
where L1, L2 and L3 represent the illumination components of E1, E2 and E3, respectively. To obtain the illumination components, this paper converts the images E1, E2, E3 from the RGB color space to the HSV color space to obtain the brightness (V) component of each image, and then applies an edge-preserving weighted least squares (WLS) filter to smooth the V component, yielding the illumination component.
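Equations (13)-(15) can be sketched directly; the sign convention inside the sigmoids is inferred from the roles of the three exposures (favor well-lit regions of the low-light image, suppress bright regions of the overexposed one) and is an assumption:

```python
import numpy as np

def brightness_weights(L1, L2, L3):
    """Brightness weight functions of Eqs. (13)-(15), assuming the
    illumination components L1-L3 are normalized to [0, 1]."""
    W1 = 1.0 / (1.0 + np.exp(-(6.0 * L1 - 3.0)))            # Eq. (13)
    W2 = 0.6 * np.exp(-((L2 - 0.5) ** 2) / (2 * 0.25**2))   # Eq. (14)
    W3 = 1.0 / (1.0 + np.exp(6.0 * L3 - 3.0))               # Eq. (15)
    return W1, W2, W3
```

All three curves meet near the middle of the illumination range (W1 = W3 = 0.5 and W2 peaks at 0.6 when L = 0.5), which matches the balance between the Gaussian and Sigmoid terms described above.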
In Figure 2, the abscissa is the illumination component of the image, and the ordinate is the weight.
The red, green, and blue curves represent the brightness weight functions of low-illuminance images,
medium-exposure images, and over-exposure images, respectively. By appropriately assigning
weights to the pixel values of the three images with different exposure levels, the fused image achieves
a good balance between enhancing the brightness and avoiding overexposure.
In the process of Gauss-Laplace pyramid decomposition and reconstruction of the image, as the
number of pyramid layers increases, part of the image details will be lost, and reducing the number of
pyramid layers will cause halo artifacts in the fusion result. In order to enrich the image details, this
paper adopts a multi-scale Gaussian filtering algorithm to enhance the image details while avoiding
halo artifacts.
First, a multi-scale Gaussian filter is used to smooth and filter the fused image to obtain 3 different
Gaussian blurred images, as shown in equation (16):
$B_1 = G_1 * I^*, \quad B_2 = G_2 * I^*, \quad B_3 = G_3 * I^*$ (16)
Secondly, fine details D1, intermediate details D2 and coarse details D3 are extracted from the image, as shown in equation (17):
$D_1 = I^* - B_1, \quad D_2 = B_1 - B_2, \quad D_3 = B_2 - B_3$ (17)
Then D1, D2 and D3 are weighted and fused to obtain the detail image D*, as shown in equation
(18):
$D^* = \left(1 - w_1 \, \mathrm{sgn}(D_1)\right) D_1 + w_2 D_2 + w_3 D_3$ (18)
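The detail-enhancement step of equations (16)-(18) can be sketched as follows; the repeated 3-tap smoothing standing in for the Gaussian filters G1-G3 of increasing scale, and the weights w1-w3, are illustrative assumptions:

```python
import numpy as np

def _smooth(x, passes):
    # repeated [1/4, 1/2, 1/4] separable passes approximate Gaussian filters
    # of increasing standard deviation (more passes = coarser scale)
    for _ in range(passes):
        p = np.pad(x, 1, mode="edge")
        v = 0.25 * p[:-2, 1:-1] + 0.5 * p[1:-1, 1:-1] + 0.25 * p[2:, 1:-1]
        q = np.pad(v, ((0, 0), (1, 1)), mode="edge")
        x = 0.25 * q[:, :-2] + 0.5 * q[:, 1:-1] + 0.25 * q[:, 2:]
    return x

def detail_enhance(I, w1=0.5, w2=0.5, w3=0.25):
    """Multi-scale detail enhancement of Eqs. (16)-(18), applied to the
    fused image I. The sgn term damps overshoot on strong positive details."""
    B1, B2, B3 = _smooth(I, 1), _smooth(I, 4), _smooth(I, 16)   # Eq. (16)
    D1, D2, D3 = I - B1, B1 - B2, B2 - B3                       # Eq. (17)
    Dstar = (1 - w1 * np.sign(D1)) * D1 + w2 * D2 + w3 * D3     # Eq. (18)
    return I + Dstar
```

On a perfectly flat image all three detail layers vanish and the output equals the input; near edges the weighted detail layers D1-D3 sharpen the result at three scales.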
of the image enhanced by the algorithm in this paper is natural and the brightness distortion rate is
lower.
5. Conclusion
In order to solve the problem of low-light panoramic image enhancement, this paper proposes a low-light panoramic image enhancement algorithm based on simulated multi-exposure fusion, and verifies its effectiveness through subjective visual perception and objective index evaluation. The experimental results demonstrate the effectiveness of the algorithm.
References
[1] Park S, Yu S, Moon B, et al. Low-light image enhancement using variational optimization-based Retinex model [J]. IEEE Transactions on Consumer Electronics, 2017, 63(2): 178-184.
[2] Goldstein T, Xu L, Kelly K, et al. The STONE Transform: Multi-Resolution Image Enhancement and Real-Time Compressive Video [J]. IEEE Transactions on Image Processing, 2015, 24(12): 5581-5593.
[3] Lore K G, Akintayo A, Sarkar S. LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement [J]. Pattern Recognition, 2017, 61: 650-662.
[4] Fu X, Liao Y, Zeng D, et al. A Probabilistic Method for Image Enhancement With Simultaneous Illumination and Reflectance Estimation [J]. IEEE Transactions on Image Processing