
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2020.3047258, IEEE Access.

Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.
Digital Object Identifier 10.1109/ACCESS.2017.DOI

Hair Segmentation and Removal in Dermoscopic Images Using Deep Learning

LIDIA TALAVERA-MARTÍNEZ¹,², PEDRO BIBILONI¹,², AND MANUEL GONZÁLEZ-HIDALGO (Member, IEEE)¹,²
¹ SCOPIA research group, University of the Balearic Islands, Dpt. of Mathematics and Computer Science, Crta. Valldemossa, km 7.5, E-07122 Palma, Spain
² Health Research Institute of the Balearic Islands (IdISBa), E-07010 Palma, Spain
Corresponding author: Lidia Talavera-Martínez (e-mail: l.talavera@uib.es).
This work was partially supported by the Spanish Grant FEDER/Ministerio de Economía, Industria y Competitividad - AEI/TIN2016-75404-P. Lidia Talavera-Martínez also benefited from the fellowship BES-2017-081264, granted by the Ministry of Economy, Industry and Competitiveness under a program co-financed by the European Social Fund. We thank Dr. Attia from the Institute for Innovation and Research, Deakin University, Australia, for providing the GAN-based simulated hair images.

ABSTRACT Melanoma and non-melanoma skin cancers have shown a rapidly increasing incidence rate, pointing to skin cancer as a major public health problem. When analyzing these lesions in dermoscopic images, the hairs and their shadows on the skin may occlude relevant information about the lesion at the time of diagnosis, reducing the performance of automated classification and diagnosis systems. In this work, we present a new approach to the task of hair removal in dermoscopic images based on deep learning techniques. Our proposed model relies on a convolutional encoder-decoder architecture for the detection of hair pixels and the subsequent restoration of the skin underneath. Moreover, we introduce a new combined loss function for the network's training phase that combines the L1 distance, the total variation loss, and a loss function based on the structural similarity index metric. Currently, there are no datasets that contain the same images with and without hair, which would be necessary to quantitatively evaluate our model. Thus, we simulate the presence of hair in hairless images extracted from publicly known datasets. We compare our results with six state-of-the-art algorithms based on traditional computer vision techniques, by means of similarity measures that compare the reference hairless image with the image restored from its hair-simulated counterpart. Finally, the Wilcoxon signed-rank test is used to compare the methods. The results, both qualitative and quantitative, demonstrate the effectiveness of our model and how our loss function improves the restoration ability of the proposed model.

INDEX TERMS Deep Neural Networks, Dermoscopy, Hair removal, Image Processing, Inpainting, Skin
Lesion.

I. INTRODUCTION
Melanoma is the most aggressive, metastatic and deadliest type of skin cancer, turning this disease into a major problem for public health. In Europe, it accounts for 1–2% of all malignant tumors [1], and its estimated mortality in 2018 was 3.8 per 100,000 men and women per year [2]. Although melanoma is still incurable, its early diagnosis is of great importance. Its early detection can prevent malignancy and increase the survival rate and the effectiveness of the treatment. Nowadays, practitioners rely on the dermoscopic evaluation for completing the clinical analysis and the diagnosis of melanoma. This practice improves the diagnostic accuracy by up to 10–30% [3] compared to simple clinical observation. This in-vivo, non-invasive skin imaging technique enables the visualization of specific subsurface structures, forms and colors that are not discernible by simple visual inspection. The diagnosis of skin lesions is mainly based on their morphological characteristics, such as an irregular shape, asymmetry and a variety of colors, along with a history of changes in size, shape, color and/or texture. However, their evaluation might be altered by the individual judgment of the observers, which depends on their experience and subjectivity [4]. Thus, in order to help physicians to obtain an early, objective, and reproducible diagnosis of skin lesions,


sophisticated Computer-Aided Diagnosis (CAD) software is developed. These computational tools are designed based on clinical protocols [5]–[8] and focus mainly on image acquisition, artifact removal (hairs, bubbles, etc.), segmentation of the lesion, extraction and selection of features, and final classification of the lesion.

In the pre-processing stage during the analysis of the lesions, hair removal is one of the key steps. The presence of hair in dermoscopic images usually occludes significant patterns, reducing the accuracy of the system. Once the hairs are detected and removed, the next step is to estimate and restore the underlying information (i.e., color and texture patterns) of the skin pixels underneath the hair regions.

Extensive previous research has addressed the hair removal process in dermoscopic images. To the best of our knowledge, previous works presented approaches based on traditional computer vision techniques, thus tackling the problem with classical generative and discriminative models, which rely on hand-crafted features. However, hand-crafted features need to be defined, and their impact is typically tested with small datasets [9]. In recent years, deep learning has shown itself to be a powerful tool for image analysis. More specifically, deep learning techniques have achieved higher performance than traditional approaches for the majority of applications within the medical field [10]. Deep Convolutional Neural Networks (CNNs) allow features of different complexity to be learned automatically and directly from data through a set of trainable filters. Moreover, they have shown to be a powerful tool when working with large datasets.

In the preprocessing step, hair removal stands out as one of the most useful and used methods. However, traditional approaches are still used for this task in more advanced systems in which the main model uses deep learning techniques [11], [12]. Thus, we face the task of developing a deep learning model for the detection and removal of hairs. Such a model could be integrated into a complete CAD system based entirely on deep learning. Our objective is threefold. First, to design a novel model based on deep CNNs for the removal of skin hair in dermoscopic images and the subsequent restoration of the affected pixels. Second, to qualitatively and quantitatively assess its performance. Third, to compare the results with other hair removal strategies using the same database, which provides an objective comparison of the strengths and weaknesses, and therefore of the quality, of the presented method.

The contributions of this work are three-fold:
• To the best of our knowledge, we are the first to use deep learning techniques for hair removal in dermoscopic images.
• We introduce a loss function for the detection and subsequent restoration of hair pixels, based on the combination of loss functions that focus on different aspects and complement each other towards a more robust reconstruction.
• We extend the dataset presented in [13] by creating more synthetic hair on dermoscopic images. This eases the quantitative evaluation of hair removal approaches. We will make the data available for further research in the field.

The rest of the document is structured as follows. First, in Section II, we review the related work on hair removal methods in dermoscopic images. Then, in Section III, we present our method based on CNNs and detail the loss function that we have used to train the network. Next, in Section IV, we describe the database and the implementation details, present the results obtained, and perform an ablation study of the loss terms and of some architectural aspects. Finally, in Section V, we discuss the previous results and study the strengths and limitations of this study.

II. RELATED WORK
In this section, we describe previous works that addressed the task of hair removal in dermoscopic images. To the best of our knowledge, only traditional computer vision approaches have been used to address this task. In addition to describing these methods, we will use them, in Section IV, to evaluate the performance of our model.

Several traditional computer vision approaches have been used for hair removal in dermoscopic images. Here we highlight six state-of-the-art algorithms, based on their availability and wide use in the literature. These are the ones proposed by Lee et al. [14], Xie et al. [15], Abbas et al. [16], Huang et al. [17], Toossi et al. [18] and Bibiloni et al. [19]. A summary of the approaches considered by each of them can be seen in Table 1.

Nowadays, deep learning techniques have shown their potential when addressing computer vision tasks and represent the state of the art for many problems. Specifically, deep learning-based image restoration techniques have been used in other fields for image inpainting [20], image deblurring and image denoising [21], among others. These methods learn the parameters of the network to reconstruct images directly from training data, which is composed of pairs of clean and corrupted images. This is usually more effective on real-world images. For instance, Xie et al. [20] proposed an approach for image denoising and blind inpainting that combines sparse coding with pre-trained deep networks, achieving good results in both tasks. Vincent et al. [22] presented a stack of denoising autoencoders for image denoising that is applied not only to the input, but also recursively to intermediate representations, to initialize the deep neural network. Also, Cui et al. [23] proposed a cascade of multiple stacked collaborative local autoencoders for image super-resolution. Their method searches each layer for non-local self-similarity to enhance high-frequency texture details from the image patches, suppress the noise and combine the overlapping patches. In [24], Mao et al. proposed an encoding-decoding framework for image denoising and super-resolution combining convolution and deconvolution layers linked symmetrically by skip connections, which helps improve the training process and the network's performance. Finally, Jain et al. [25] and Dong et al. [26]

TABLE 1: Detection and inpainting techniques employed in the literature to remove hair from dermoscopic images.

Year  Method                Hair segmentation             Inpainting method                                            Color space
1997  Lee et al. [14]       Grayscale closing             Bilinear interpolation                                       RGB
2009  Xie et al. [15]       Grayscale top-hat             Non-linear PDE                                               RGB
2011  Abbas et al. [16]     Derivative of Gaussians       Coherence transport                                          CIELab
2013  Huang et al. [17]     Conventional matched filters  Region growing algorithms and linear discriminant analysis  RGB
2013  Toossi et al. [18]    Canny edge detector           Coherence transport                                          RGB
2017  Bibiloni et al. [19]  Color top-hat                 Morphological inpainting                                     CIELab

proposed a fully convolutional CNN for image denoising and image super-resolution, respectively. The authors showed that their methods achieve results comparable to traditional computer vision techniques.

When focusing on the hair removal process in dermoscopy images and deep learning techniques, Attia et al. [27] used them to simulate hair. The authors used Generative Adversarial Networks to generate fake hair and add it to the image. In this work, we address the inverse problem, including the location and reconstruction of the hair regions in dermoscopic images using CNNs. Given the promising results achieved by deep learning models for other computer vision related tasks and the need for robust models for hair removal in dermoscopic images, we present a novel model that relies on an autoencoder to address this task.

III. METHODOLOGY
In this section, we describe our proposed deep learning model for hair removal in dermoscopic images and present the introduced reconstruction loss function.

We designed and propose a convolutional encoder-decoder architecture for the task of hair removal in dermoscopic images. Our model, which is detailed in Figure 1, is composed of 12 layers. To train our model, we use pairs of images formed by the reference image without hair and its corresponding image with simulated hair. The output is the reconstructed dermoscopic image without hair on it. During the learning process, the network relies on our proposed loss function, which evaluates the goodness of the output image in comparison to the hairless reference image. Next, we describe the proposed model and loss functions for this relevant task.

A. ARCHITECTURE STRUCTURE
The first module of our model is an encoder network. The input is of size 512 × 512 × 3 and its output is a hidden representation of high-level features of the input image. When looking for such features, the encoder tends to ignore noise. In our case, our hypothesis is that the network will treat the hair as noise and ignore it, producing as output the hairless skin image. The second module of the model corresponds to the decoder, which aims to recover the missing information from the high-level feature representation. Its output is a 512 × 512 × 3 cleaned version, without skin hair, of the input image. Both the encoder and the decoder have two blocks. Each block of the encoder consists of one 3 × 3 convolution, with 128 filters in the first block and 256 filters in the second one, followed by a down-sampling operation, which is applied by a two-strided 3 × 3 convolution to reduce the spatial resolution. On the other side, each block of the decoder consists of an up-sampling of the feature map by a 3 × 3 deconvolution with strides of two in each dimension. A skip connection follows, which concatenates the up-sampled output with the corresponding feature map from the layer of equal resolution of the encoder. This enables the decoder to recover image details, and therefore improves the restoration performance. Next, a 3 × 3 convolution is applied over the merged data. Finally, in the last block, an additional 3 × 3 convolution is added to reduce the feature map to the number of output channels.

The choice of this architecture resides in the aim of evaluating the suitability of autoencoders to tackle this task as a denoising problem. In terms of its size, we believe that a rather small network is more suitable to correctly learn the task, given the amount of data we have available.
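As a concrete illustration, a minimal Keras sketch consistent with this description might look as follows. The 512 × 512 × 3 input, the 128- and 256-filter encoder blocks, the two-strided 3 × 3 convolutions for down-sampling, the 3 × 3 deconvolutions for up-sampling and the concatenation-based skip connections come from the text; the ReLU activations and the sigmoid output are our assumptions, not details stated in the paper.

```python
# Minimal sketch of the encoder-decoder described in Section III-A.
from tensorflow.keras import layers, Model

def build_hair_removal_net(input_shape=(512, 512, 3)):
    inp = layers.Input(shape=input_shape)

    # Encoder block 1: 3x3 convolution with 128 filters, then a
    # two-strided 3x3 convolution that halves the spatial resolution.
    e1 = layers.Conv2D(128, 3, padding="same", activation="relu")(inp)
    d1 = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(e1)

    # Encoder block 2: 3x3 convolution with 256 filters, then down-sampling.
    e2 = layers.Conv2D(256, 3, padding="same", activation="relu")(d1)
    d2 = layers.Conv2D(256, 3, strides=2, padding="same", activation="relu")(e2)

    # Decoder block 1: up-sample by a 3x3 deconvolution with stride 2,
    # concatenate with the encoder feature map of equal resolution,
    # and merge with a 3x3 convolution.
    u1 = layers.Conv2DTranspose(256, 3, strides=2, padding="same", activation="relu")(d2)
    u1 = layers.Concatenate()([u1, e2])
    u1 = layers.Conv2D(256, 3, padding="same", activation="relu")(u1)

    # Decoder block 2: same pattern, plus the additional 3x3 convolution
    # that reduces the feature map to the 3 output channels.
    u2 = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(u1)
    u2 = layers.Concatenate()([u2, e1])
    u2 = layers.Conv2D(128, 3, padding="same", activation="relu")(u2)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(u2)

    return Model(inp, out)
```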

FIGURE 1: Architecture of our proposed network. The pairs of reference hairless images (GT) and their corrupted (hair-simulated) counterparts are passed through the encoder to extract complex features. The decoder, connected to the encoder with skip connections, reconstructs the image.

B. RECONSTRUCTION LOSS FUNCTION
The loss function guides how the network learns by serving as a criterion that numerically reflects the errors of the model. It is computed between the network output and its corresponding hairless reference image, also known as Ground Truth (GT). Several loss functions have been used in image restoration tasks. Some widespread losses are the Mean Square Error (MSE) or the Mean Absolute Error (MAE). These measurements depend exclusively on the difference between the corresponding pixels of the two images. Therefore, the results might have poor quality in terms of human perception, since the noise of a pixel should not be considered independently of the error of its neighbouring pixels. To overcome these limitations, other loss metrics have been proposed, such as the Structural Similarity Metric (SSIM) or the Multiscale Structural Similarity Metric (MSSSIM), which depend on local luminance, contrast and structure [28].

To achieve results appealing to a human observer, and inspired by the results obtained by Liu et al. in [29], we propose to capture the best characteristics of loss functions that measure statistical features locally along with other per-pixel losses. Thus, our reconstruction loss is defined as follows:

    L_rec = α L_1^foreground + β L_1^background + γ L_2^composed + δ L_SSIM + λ L_tv,    (1)

where α, β, γ, δ and λ are the weights of each term of the linear combination that defines the reconstruction loss function. We opted to perform a random hyperparameter search, as there are many parameters whose optimal values have to be found, and a grid search would require a higher computational cost. Specifically, we performed 10 runs of our model, assigning in each case a random value between 0 and 10 to each of the weights. Afterwards, a statistical test indicates which is the best set of weights according to the measurements explained in Section IV-D. The terms of Eq. (1) are the following:

• The term L_1^foreground is the L1 distance between the original hairless image and the prediction of the network, computed only on those pixels belonging to the hair areas.
• Next, L_1^background estimates the L1 distance between the original hairless image and the network's prediction only on the background pixels, which in our context accounts for the non-hair regions.
• Then, L_2^composed computes the L2 distance restricted to the hair regions, but normalizing over all pixels rather than over the number of hair pixels.
• The term L_SSIM calculates the loss function based on the SSIM metric over the whole image.
• Finally, we use a total variation loss, L_tv, as a regularizer to smooth the transition of the predicted values for the regions corresponding to hair, according to their surrounding context. A more detailed description of this term can be found in [30].
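As an illustration, Eq. (1) could be implemented in TensorFlow as sketched below, using the weights reported in Section IV-B as defaults. We assume that the binary hair mask separating foreground (hair) from background pixels is available for each training pair, and the per-term normalizations are our reading of the bullet points above rather than the authors' code.

```python
# Sketch of the combined reconstruction loss of Eq. (1).
import tensorflow as tf

def reconstruction_loss(gt, pred, hair_mask,
                        alpha=2.626, beta=3.892, gamma=0.309,
                        delta=0.398, lam=0.597):
    """gt, pred: [B, H, W, 3] float tensors in [0, 1];
    hair_mask: [B, H, W, 1], 1 on hair pixels (broadcast over channels)."""
    eps = 1e-8
    fg = hair_mask
    bg = 1.0 - hair_mask
    abs_diff = tf.abs(gt - pred)

    # L1 restricted to hair pixels, normalized by their count.
    l1_fg = tf.reduce_sum(fg * abs_diff) / (tf.reduce_sum(fg) + eps)
    # L1 restricted to non-hair (background) pixels.
    l1_bg = tf.reduce_sum(bg * abs_diff) / (tf.reduce_sum(bg) + eps)
    # L2 on hair pixels, but normalized over all pixels.
    l2_comp = tf.reduce_sum(fg * tf.square(gt - pred)) / tf.cast(tf.size(gt), tf.float32)
    # SSIM-based loss over the whole image (1 - SSIM).
    l_ssim = 1.0 - tf.reduce_mean(tf.image.ssim(gt, pred, max_val=1.0))
    # Total variation regularizer on the prediction.
    l_tv = tf.reduce_mean(tf.image.total_variation(pred))

    return (alpha * l1_fg + beta * l1_bg + gamma * l2_comp
            + delta * l_ssim + lam * l_tv)
```

During training, such a function would be evaluated on each batch of (hair-simulated input, hairless GT, hair mask) triplets.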
IV. EXPERIMENTAL FRAMEWORK AND RESULTS
In this section, we first establish the experimental framework by describing the database used and the implementation details of our method. Then, we analyze the results obtained by our method and compare them, qualitatively and quantitatively, to the six traditional hair removal methods presented in Section II. To obtain the numerical results, we rely on several performance measures. We determine which method outperforms the rest by means of a statistical test. Finally, we conducted an ablation study of the loss terms, as well as of some aspects of the model's architecture.

A. DATASET DESCRIPTION
In order to train the CNN and to quantitatively validate, in an effective way, the performance of our method, we need a dataset. It must contain pairs of images: images with hair, used as the algorithm input, along with their corresponding "clean" version, in this case the same image without hair. If we only had the images with hair, we could only perform a qualitative evaluation.

Finding this type of data is challenging, since the same dermoscopic image cannot be captured with and without hair. To tackle this problem, we decided to simulate the presence of skin hair in hairless images extracted from five publicly available datasets, i.e. PH2 [31], dermquest¹, dermis², EDRA2002 [32] and the ISIC Data Archive³. We have avoided selecting images with other artifacts (e.g. rulers, bandages, etc.) that are not hairs. Three different hair simulation methods have been used. The first one, presented by Attia et al. [27], is based on generative adversarial networks. The second one was implemented by Mirzaalian et al. [33], whose software "HairSim" is publicly available at [34]. Finally, the last approach we have used involves extracting hair masks with an automated method, proposed by Xie et al. [15], and superimposing them on hairless images.

¹ Deactivated on December 31, 2019.
² www.dermis.net
³ www.isic-archive.com


We constructed a dataset with 618 images, which consists of 322 images from the EDRA2002 dataset, 239 images from the PH2 dataset, 46 images from the ISIC Data Archive, 6 images from the dermis dataset, and 5 from the dermquest dataset. During the experimentation we divide it into 70% for training and 30% for testing, which gives us 433 and 185 images, respectively. It is composed of images with diverse hair, presenting a variety of hair thicknesses, densities and colors, ranging from coarse to more realistic. This diversity guarantees that we consider hairs with very different characteristics when training the network. We also introduced images without hair, to teach the network that some images must not be modified and their textures must be maintained. In Figure 2, we show examples of original hairless images from the dataset, together with the simulations obtained using each of the hair simulation approaches. As can be seen, with the first and third methods we achieve a much more natural morphology, quantity and distribution, resembling real hair more closely than with the second. We consider a reduced number of testing images to maximize the training samples and, thus, to help the network learn and generalize well.
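As a toy illustration of the third strategy, a binary hair mask can be superimposed on a hairless image as sketched below; painting the masked pixels with a flat dark colour is a simplification we assume here, not the exact compositing used to build the dataset.

```python
# Toy sketch of the mask-superimposition simulation strategy.
import numpy as np

def superimpose_hair(hairless, hair_mask, hair_color=(0.15, 0.12, 0.10)):
    """hairless: float image in [0, 1], shape (H, W, 3);
    hair_mask: binary array of shape (H, W), 1 on hair pixels."""
    out = hairless.copy()
    out[hair_mask.astype(bool)] = hair_color   # paint hair pixels dark
    return out
```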

FIGURE 2: Original images (top) and simulated hair images (bottom), obtained respectively by a deep neural network [27] (left), "HairSim" [34] (middle), and superimposing a hair mask [15] (right).
B. EXPERIMENTAL SETUP
We implemented the proposed architecture using Keras [35]. The network was trained from scratch with randomly initialized weights, using the Adam [36] optimizer with a learning rate experimentally set to 10⁻⁴. The coefficients for the different terms of the reconstruction loss function with which the network was trained were experimentally found to be α = 2.626, β = 3.892, γ = 0.309, δ = 0.398 and λ = 0.597. The network was trained on a single NVIDIA GeForce GTX 1070 with a batch size of 4.

In Figure 3, we can see how our model trained over almost 25 epochs (reaching an early stopping policy based on monitoring the validation loss), and how the Peak Signal-to-Noise Ratio (PSNR) metric evolves satisfactorily during training.

FIGURE 3: Plots of the loss (left) and of the PSNR performance measure (right) during the training of the proposed model.
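A minimal training sketch matching this setup is shown below; the dummy arrays, the validation split and the early-stopping patience are assumptions, and the built-in MAE loss is only a stand-in for Eq. (1).

```python
import numpy as np
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

# Dummy stand-ins for the real training pairs (hair-simulated inputs and
# hairless targets); in practice these come from the 433 training images.
x_train = np.random.rand(8, 512, 512, 3).astype("float32")
y_train = np.random.rand(8, 512, 512, 3).astype("float32")

model = build_hair_removal_net()          # builder sketched in Section III-A
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="mae")                 # stand-in; the paper trains with Eq. (1)

model.fit(x_train, y_train,
          validation_split=0.25,          # assumption: held-out validation images
          batch_size=4,
          epochs=100,                     # early stopping halted training near epoch 25
          callbacks=[EarlyStopping(monitor="val_loss", patience=5,
                                   restore_best_weights=True)])
```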
C. QUALITATIVE RESULTS
We conducted a qualitative and quantitative analysis of the results obtained. In Figure 4, we can see that our proposed method attains visually appealing results. In addition, we present a visual comparison of our results with the ones obtained by the other hair removal methods presented in Section II. Let us remark that we compare with these methods as they are the ones that have been applied to dermoscopic images.

FIGURE 4: Example of the hair removal results obtained by our method: reference image (left), hair simulation (middle), our method (right).

As can be seen in Figure 5, not all the methods are successful in both the hair removal and the subsequent inpainting process. For instance, Abbas et al.'s and Toossi et al.'s methods are not capable of segmenting the hairs, as it seems they are not able to detect them properly. In contrast, Huang et al.'s and Xie et al.'s methods are capable of detecting much of the hair; however, their inpainting process seems to leave a trail of it. Finally, our results and those of Bibiloni et al.'s and Lee et al.'s methods seem to adjust to the reference image at first sight, although the last two leave traces, while the new method does not. However, it may be the case that some of them introduce alterations, to a greater or lesser extent, that blur the lesion's features, such as streaks or reticular textures.

Both in Figures 4 and 5, we have seen that our method reaches good visual results when evaluated on synthetic images. In Figure 6, we show its effectiveness and its generalization ability on dermoscopic images with real hair. We show images from the 5 databases to demonstrate that, although the

data is not balanced, the network has not suffered database-specific overfitting.

FIGURE 5: Given a sample image (a), we simulate hair on it (b), and present the results obtained by several state-of-the-art methods: (c) Abbas et al. [16], (d) Bibiloni et al. [19], (e) Huang et al. [17], (f) Lee et al. [14], (g) Toossi et al. [18], (h) Xie et al. [15], against (i) our proposed method.

FIGURE 6: Example of hair removal results obtained by our method on dermoscopic images with real hair from (a) the PH2 dataset, (b) the EDRA2002 dataset, (c) the Dermis and Dermquest datasets, and (d) the ISIC Data Archive. In the first row of each subfigure we find the test sample image, while in the second row we find the corresponding output of our model.

D. QUANTITATIVE RESULTS
It is worth noting that, once the hairs are removed, there are many possible solutions as to what the expected inpainting result is, always with the aim of preserving the texture of the area involved. Therefore, a qualitative evaluation is not enough to assess the quality of the different methods introduced. In the following, we introduce an automatic, objective and comparable performance evaluation system.

We used a set of nine objective error metrics to quantitatively assess the quality of the results obtained by the proposed CNN-based hair removal approach with respect to the original hairless image. We cluster these measures into three different groups. The first one comprises the Mean Squared Error (MSE) [37], the Peak Signal-to-Noise Ratio (PSNR) [38], the Root Mean Squared Error (RMSE) [39], and the Structural Similarity Index (SSIM) [38], which are per-pixel metrics. Within the second group, we consider the Multi-Scale Structural Similarity Index (MSSSIM) [40] and the Universal Quality Image Index (UQI) [41], which measure statistical features locally and then combine them. Finally, the Visual Information Fidelity (VIF) [42], the PSNR-HVS-M [43] and the PSNR-HVS [43], conforming the third group, have been designed to obtain results more similar to those perceived by the Human Visual System (HVS). This set of metrics constitutes a representative selection of state-of-the-art performance metrics for restoration quality. We must recall that larger values of PSNR, SSIM, MSSSIM, UQI, VIF, PSNR-HVS-M, and PSNR-HVS indicate a better quality of the reconstructed images. On the other hand, lower values of MSE and RMSE indicate higher similarity.
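As an illustration, the per-pixel metrics of the first group can be computed with scikit-image as sketched below; the HVS-inspired measures of the other groups are not part of scikit-image and would come from other image-quality packages, which is an assumption about tooling rather than something stated in the paper.

```python
# Sketch of the first group of per-pixel metrics (MSE, RMSE, PSNR, SSIM).
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def per_pixel_metrics(reference, restored):
    """Compare the hairless reference with the restored output.
    Both images are float arrays in [0, 1] of shape (H, W, 3)."""
    mse = mean_squared_error(reference, restored)
    return {
        "MSE": mse,
        "RMSE": np.sqrt(mse),
        "PSNR": peak_signal_noise_ratio(reference, restored, data_range=1.0),
        "SSIM": structural_similarity(reference, restored,
                                      channel_axis=-1, data_range=1.0),
    }
```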

TABLE 2: Mean (µ) and standard deviation (σ) of the similarity measures obtained to compare our method with six state-of-the-art hair removal algorithms.

                 Our method  Abbas [16]  Bibiloni [19]  Huang [17]  Lee [14]  Toossi [18]  Xie [15]
MSE          µ      27.847     258.347        103.903     404.366   175.303      221.346    55.311
             σ      35.261     302.243        108.762     589.341   219.633      256.319   103.553
SSIM         µ       0.926       0.867          0.885       0.851     0.890        0.864     0.921
             σ       0.026       0.051          0.048       0.050     0.075        0.053     0.041
PSNR         µ      35.137      26.080         29.785      27.001    29.158       26.570    33.096
             σ       3.006       4.079          4.073       6.603     5.783        3.852     3.705
RMSE         µ       4.790      14.226          9.207      15.485    11.032       13.287     6.318
             σ       2.220       7.501          4.386      12.864     7.340        6.711     3.935
VIF          µ       0.525       0.526          0.499       0.402     0.531        0.509     0.592
             σ       0.099       0.180          0.176       0.130     0.186        0.178     0.186
UQI          µ       0.997       0.991          0.996       0.990     0.995        0.992     0.997
             σ       0.004       0.009          0.005       0.013     0.006        0.008     0.004
MSSSIM       µ       0.978       0.870          0.934       0.917     0.945        0.875     0.955
             σ       0.011       0.073          0.039       0.042     0.051        0.069     0.037
PSNR-HVS-M   µ      36.802      25.078         29.404      26.248    28.445       25.519    33.005
             σ       3.703       3.861          4.245       6.854     6.247        3.680     4.106
PSNR-HVS     µ      35.168      24.628         28.681      25.738    27.826       25.065    32.186
             σ       3.465       3.823          4.158       6.634     5.991        3.639     3.896

In Table 2, we show the mean and standard deviation of the results obtained for the 185 images of the test set, for each of the nine performance measures. In addition, we compare our results against the ones obtained by applying the six state-of-the-art hair removal methods, detailed in Section II, to the same 185 images.

The next step in our work is to study whether one algorithm significantly outperforms another. Given a fixed similarity measure, we use a statistical test to contrast the means of all pairs of algorithms. Specifically, we use the t-test if the samples pass the Shapiro-Wilk normality test, or the Wilcoxon signed-rank test otherwise, both considering a significance level of 0.05. The procedure is sketched below.
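A sketch of this pairwise testing procedure with SciPy follows; we assume the paired variant of the t-test, since each algorithm is evaluated on the same 185 test images.

```python
# Pairwise comparison of two algorithms on one similarity measure:
# paired t-test if both samples look normal (Shapiro-Wilk), otherwise
# the Wilcoxon signed-rank test, at a 0.05 significance level.
from scipy import stats

def compare_methods(scores_a, scores_b, alpha=0.05):
    """scores_a, scores_b: per-image values of one similarity measure
    for two algorithms, computed on the same test images."""
    _, p_norm_a = stats.shapiro(scores_a)
    _, p_norm_b = stats.shapiro(scores_b)
    if p_norm_a > alpha and p_norm_b > alpha:
        _, p = stats.ttest_rel(scores_a, scores_b)   # paired t-test
    else:
        _, p = stats.wilcoxon(scores_a, scores_b)    # Wilcoxon signed-rank
    return p, p < alpha   # p-value and whether the difference is significant
```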
According to the statistical test, we can determine which method surpasses the others. Table 3 summarizes the results obtained. In it, rows represent all the pairs of algorithms to which the statistical test was applied, and the columns correspond to the similarity measures. As an example, let us interpret the test comparing Abbas vs. Huang. According to the SSIM and VIF performance measures, Abbas' algorithm significantly outperforms Huang's algorithm, while according to the PSNR, MSSSIM, PSNR-HVS-M and PSNR-HVS measures, Huang's algorithm significantly outperforms Abbas' algorithm. For the rest of the measures, both methods obtain statistically comparable results; let us remark that Abbas' algorithm is non-significantly superior in all of them.

As can be seen in Table 3, taking into account all the considered performance measures, the proposed method stands out according to the majority of similarity measures. It is only outperformed on the VIF performance measure, when compared with Abbas et al.'s and Lee et al.'s algorithms. Among the rest of the methods, we can see that Lee et al.'s algorithm statistically surpasses Huang et al.'s and Toossi et al.'s algorithms. However, when comparing Lee et al.'s algorithm with Bibiloni et al.'s, the former is statistically better only in very specific settings, namely the SSIM, VIF and MSSSIM measures. It is Xie et al.'s algorithm that outperforms Lee et al.'s in all performance measures. In the comparison of Bibiloni's algorithm with Huang's and Toossi's, it is the first that statistically outperforms the other two in the majority of measures. Finally, Toossi et al.'s algorithm is statistically superior to Abbas et al.'s, except in the SSIM and VIF performance measures. These two algorithms are the ones that provide statistically worse results compared to the rest of the algorithms.

E. ABLATION STUDY
Some works [24], [29] argue that using skip connections, or convolutions/deconvolutions instead of pooling/unpooling layers, may decrease the amount of detail that is lost and would otherwise deteriorate the restoration performance. We study how these layers affect the learning of our model by replacing the pooling layers with convolutions, and by introducing skip-connection layers (a sketch of the two variants is given below). In Table 4, we can see that the introduction of skip-connections does improve the results numerically, in terms of the similarity measures previously presented. However, the results do not vary significantly when pooling layers are used instead of convolutional ones in our model. We can visualize the effects in Figure 7, where we compare the results of using skip-connections or not. As can be seen, the network is able to create a more detailed prediction with them, especially when it comes to dermoscopic structures such as streaks or globules.
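A minimal sketch of how the two ablated variants could be expressed, reusing the style of the builder sketched in Section III-A; the flag names and the max-pooling choice are our assumptions for illustration.

```python
from tensorflow.keras import layers

def downsample(x, filters, use_pooling=False):
    """Down-sampling stage of one encoder block, in the two studied variants."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    if use_pooling:
        return layers.MaxPooling2D(2)(x)              # pooling/unpooling variant
    return layers.Conv2D(filters, 3, strides=2,       # strided-convolution variant
                         padding="same", activation="relu")(x)

def upsample(x, skip, filters, use_skips=True):
    """Up-sampling stage of one decoder block, with optional skip connection."""
    x = layers.Conv2DTranspose(filters, 3, strides=2,
                               padding="same", activation="relu")(x)
    if use_skips:
        x = layers.Concatenate()([x, skip])           # disabled in the ablation
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
```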

TABLE 3: Classification of algorithms according to objective similarity measures. The results are as follows: ✓✓ if the population mean of the first algorithm is better than that of the second algorithm; ✓ if the mean of the first algorithm is better but statistically comparable to that of the second algorithm; ✗ if the mean of the first algorithm is worse but statistically comparable to that of the second algorithm; ✗✗ if the population mean of the first algorithm is worse than that of the second algorithm.

Measures (in order): MSE, SSIM, PSNR, RMSE, VIF, UQI, MSSSIM, PSNR-HVS-M, PSNR-HVS

Our method vs. Abbas
  p-value:  2.62e-28  1.51e-29  1.49e-30  3.89e-29  0.452     1.09e-28  4.49e-32  1.87e-31  3.33e-31
  Test:     ✓✓        ✓✓        ✓✓        ✓✓        ✗         ✓✓        ✓✓        ✓✓        ✓✓
Our method vs. Bibiloni
  p-value:  4.44e-30  1.29e-25  3.38e-31  1.27e-30  4.07e-04  2.05e-10  2.68e-30  1.06e-31  1.49e-31
  Test:     ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓
Our method vs. Lee
  p-value:  1.21e-30  2.23e-17  4.66e-30  1.57e-30  0.160     8.52e-13  3.35e-27  1.81e-31  3.85e-31
  Test:     ✓✓        ✓✓        ✓✓        ✓✓        ✗         ✓✓        ✓✓        ✓✓        ✓✓
Our method vs. Huang
  p-value:  5.54e-30  2.18e-30  1.75e-31  9.41e-31  1.46e-23  1.51e-29  1.87e-31  7.19e-32  9.63e-32
  Test:     ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓
Our method vs. Toossi
  p-value:  1.49e-27  3.95e-29  5.12e-30  1.60e-28  1.49e-03  7.83e-27  5.28e-32  2.79e-31  5.83e-31
  Test:     ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓
Our method vs. Xie
  p-value:  1.86e-14  9.59e-04  1.14e-14  1.32e-14  2.31e-10  4.05e-05  1.36e-25  1.72e-19  7.22e-18
  Test:     ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓
Abbas vs. Bibiloni
  p-value:  5.24e-23  1.27e-12  1.40e-23  2.43e-23  0.012     8.04e-29  3.07e-27  1.39e-24  3.42e-24
  Test:     ✗✗        ✗✗        ✗✗        ✗✗        ✓✓        ✗✗        ✗✗        ✗✗        ✗✗
Abbas vs. Lee
  p-value:  1.36e-11  4.70e-16  2.48e-15  7.87e-13  1.03e-10  5.69e-23  3.96e-26  1.63e-15  1.22e-15
  Test:     ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗
Abbas vs. Huang
  p-value:  0.839     1.22e-03  5.1e-02   0.488     3.95e-21  0.118     1.49e-17  1.61e-03  1.56e-03
  Test:     ✓         ✓✓        ✗✗        ✓         ✓✓        ✓         ✗✗        ✗✗        ✗✗
Abbas vs. Toossi
  p-value:  9.08e-28  3.52e-07  2.93e-27  1.27e-27  4.14e-32  4.76e-25  3.27e-08  5.66e-25  4.13e-25
  Test:     ✗✗        ✓✓        ✗✗        ✗✗        ✓✓        ✗✗        ✗✗        ✗✗        ✗✗
Abbas vs. Xie
  p-value:  4.56e-32  4.21e-32  4.64e-32  4.56e-32  4.64e-32  8.74e-32  5.64e-32  4.87e-32  4.87e-32
  Test:     ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗
Bibiloni vs. Lee
  p-value:  5.07e-05  6.25e-13  2.51e-02  3.71e-04  1.34e-17  2.05e-10  1.36e-15  2.42e-03  2.36e-03
  Test:     ✓✓        ✗✗        ✓✓        ✓✓        ✗✗        ✓         ✗✗        ✓✓        ✓✓
Bibiloni vs. Huang
  p-value:  1.53e-13  9.88e-24  9.07e-15  7.50e-14  4.51e-17  1.72e-17  4.58e-14  2.30e-16  3.38e-16
  Test:     ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓
Bibiloni vs. Toossi
  p-value:  7.98e-21  1.59e-14  2.37e-21  3.29e-21  0.298     1.56e-27  4.32e-27  5.77e-23  1.22e-22
  Test:     ✓✓        ✓✓        ✓✓        ✓✓        ✗         ✓✓        ✓✓        ✓✓        ✓✓
Bibiloni vs. Xie
  p-value:  1.03e-18  4.40e-29  2.24e-20  1.66e-19  4.21e-32  5.35e-12  8.77e-14  1.74e-18  1.68e-19
  Test:     ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗
Lee vs. Huang
  p-value:  1.24e-11  6.62e-17  1.61e-15  9.23e-13  4.76e-20  2.92e-21  1.44e-17  9.01e-16  1.92e-15
  Test:     ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓
Lee vs. Toossi
  p-value:  4.13e-07  1.75e-16  9.88e-12  7.17e-09  2.16e-14  1.32e-20  4.08e-26  4.36e-12  3.29e-12
  Test:     ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓        ✓✓
Lee vs. Xie
  p-value:  1.11e-16  2.09e-10  1.15e-15  1.63e-16  6.04e-28  5.12e-06  0.869     2.22e-16  8.65e-17
  Test:     ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗         ✗✗        ✗✗
Huang vs. Toossi
  p-value:  0.496     7.83e-03  0.133     0.871     1.70e-17  0.594     5.44e-16  0.050     5.38e-02
  Test:     ✗         ✗✗        ✓         ✗         ✗✗        ✗         ✓✓        ✓✓        ✓
Huang vs. Xie
  p-value:  6.91e-25  7.80e-32  9.36e-27  8.70e-26  2.82e-28  2.40e-24  1.08e-25  7.17e-27  3.89e-27
  Test:     ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗
Toossi vs. Xie
  p-value:  4.87e-32  4.27e-32  5.11e-32  4.87e-32  4.14e-32  1.35e-31  7.55e-32  1.72e-19  5.11e-32
  Test:     ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗        ✗✗

TABLE 4: Mean of the similarity measures obtained on the test set images for the ablation study on our model. The second and third rows correspond to the skip-connections and pooling layers' ablation study; the rows from the fourth to the last correspond to the ablation study of the loss terms.

                                        MSE     SSIM    PSNR    RMSE   VIF    UQI    MSSSIM  PSNR-HVS-M  PSNR-HVS
Full method                           27.847   0.926  35.137   4.790  0.525  0.997   0.978      36.802    35.168
Pooling layers and skip-connections   27.786   0.925  35.122   4.794  0.524  0.998   0.978      36.596    35.003
DeConv layers, no skip-connections    32.846   0.909  33.862   5.443  0.467  0.997   0.974      35.538    33.908
Loss with no L_1^foreground term      41.197   0.881  32.719   6.138  0.437  0.997   0.954      33.946    32.717
Loss with no L_1^background term      26.398   0.928  35.306   4.689  0.521  0.997   0.979      36.862    35.209
Loss with no L_SSIM term              31.220   0.926  34.488   5.119  0.523  0.997   0.979      36.802    35.154
Loss with no L_2^composed term        30.881   0.926  34.463   5.113  0.523  0.997   0.978      36.744    35.143
Loss with no L_tv term                30.621   0.923  34.673   5.039  0.522  0.997   0.977      36.599    34.985

Another study that we believe is of great importance is the evaluation of the relevance of each term of the loss function. As in the previous case, we show in Table 4 and Figure 8 the quantitative and qualitative results, respectively, of our model trained by removing in each case one of the terms that compose the loss function. As can be seen, most of the performance measures and resulting images worsen when one of the terms is deleted. This is not the case when we stop computing the L1 distance between the GT and the network's prediction on the background pixels (L_1^background). By comparing Figures 8c and 8e, we can see that, when we do not use this term, the structures tend to be blurrier. Such blurrier regions may not be penalized as much when calculating the performance measures.
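In terms of the loss sketch from Section III-B, each of these ablations amounts to zeroing one weight of Eq. (1); for instance, with the hypothetical reconstruction_loss helper sketched earlier:

```python
# Ablated variant of Eq. (1): the background L1 term is removed by
# setting its weight beta to zero; the other weights keep the values
# found by the random search.
def loss_no_background(gt, pred, hair_mask):
    return reconstruction_loss(gt, pred, hair_mask, beta=0.0)
```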


FIGURE 7: Hair removal results for two samples (left column), when using skip-connections (middle column), and without using them (right column).

FIGURE 8: (a) Original hairless image, (b) input image with simulated hair, (c) result of our model trained with the complete loss, (d)-(h) results of our model trained by removing (d) L_1^foreground, (e) L_1^background, (f) L_2^composed, (g) L_SSIM, and (h) L_tv.

V. DISCUSSION AND CONCLUSIONS
In this work, we have presented a novel CNN-based method for the task of hair removal in dermoscopic images. We have built an encoder-decoder architecture, which has shown good results in reconstruction tasks like the one at hand. We

highlight an architectural aspect of the network: the use of skip connections helps to retrieve details. The benefits of their use have been demonstrated with an ablation study. In addition, we have analyzed the performance of our method and compared it with six state-of-the-art approaches. To carry out the experiments, we created a dataset using different hair-simulating strategies over images from publicly available dermoscopic datasets. For the validation of the algorithms, we calculated nine similarity measures between the hairless reference images and the images restored from their hair-simulated counterparts. Finally, we performed a statistical test to objectively study and compare their performance.

The results obtained by means of the statistical tests applied to these measures lead to the conclusion that, for eight of the nine performance measures, our method is statistically the best algorithm; the exception is the VIF measure, when we compare it with Abbas et al.'s and Lee et al.'s methods. As reflected in Figure 5 and in Table 3, Abbas' and Toossi's algorithms produce the least suitable results. This behavior may be due to the fact that these algorithms do not seem to distinguish well hairs of greater thickness or dark colors. It is worth mentioning that we have also evaluated our model on dermoscopic images with real hair, obtaining good visual results and thus demonstrating its effectiveness.

As future work, we aim to use our approach in a more complete skin lesion analysis system, leveraging the knowledge to extract other characteristics. Also, increasing the number of images used to train the network might enhance its generalization capabilities.

REFERENCES
[1] "Melanoma Molecular Map Project," http://www.mmmp.org/MMMP/welcome.mmmp, accessed on 4th April 2020.
[2] "European Cancer Information System," https://ecis.jrc.ec.europa.eu/index.php, accessed on 4th April 2020.
[3] J. Mayer et al., "Systematic review of the diagnostic accuracy of dermatoscopy in detecting malignant melanoma," Medical Journal of Australia, vol. 167, no. 4, pp. 206–210, 1997.
[4] G. Argenziano, H. P. Soyer, S. Chimenti, R. Talamini, R. Corona, F. Sera, M. Binder, L. Cerroni, G. De Rosa, G. Ferrara et al., "Dermoscopy of pigmented skin lesions: results of a consensus meeting via the internet," Journal of the American Academy of Dermatology, vol. 48, no. 5, pp. 679–693, 2003.
[5] G. Argenziano, G. Fabbrocini, P. Carli, V. De Giorgi, E. Sammarco, and M. Delfino, "Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions: comparison of the ABCD rule of dermatoscopy and a new 7-point checklist based on pattern analysis," Archives of Dermatology, vol. 134, no. 12, pp. 1563–1570, 1998.
[6] H. Kittler, "Dermatoscopy: introduction of a new algorithmic method based on pattern analysis for diagnosis of pigmented skin lesions," Dermatopathology: Practical & Conceptual, vol. 13, no. 1, p. 3, 2007.
[7] S. W. Menzies, C. Ingvar, K. A. Crotty, and W. H. McCarthy, "Frequency and morphologic characteristics of invasive melanomas lacking specific surface microscopic features," Archives of Dermatology, vol. 132, no. 10, pp. 1178–1182, 1996.
[8] W. Stolz, "ABCD rule of dermatoscopy: a new practical method for early recognition of malignant melanoma," Eur. J. Dermatol., vol. 4, pp. 521–527, 1994.
[9] N. O'Mahony, S. Campbell, A. Carvalho, S. Harapanahalli, G. V. Hernandez, L. Krpalkova, D. Riordan, and J. Walsh, "Deep learning vs. traditional computer vision," in Science and Information Conference. Springer, 2019, pp. 128–144.
[10] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, "A survey on deep learning in medical image analysis," Medical Image Analysis, vol. 42, pp. 60–88, 2017.
[11] J. A. A. Salido and C. Ruiz, "Using deep learning to detect melanoma in dermoscopy images," International Journal of Machine Learning and Computing, vol. 8, no. 1, 2018.
[12] I. Bakkouri and K. Afdel, "Computer-aided diagnosis (CAD) system based on multi-layer feature fusion network for skin lesion recognition in dermoscopy images," Multimedia Tools and Applications, vol. 79, no. 29, pp. 20483–20518, 2020.
[13] L. Talavera-Martínez, P. Bibiloni, and M. González-Hidalgo, "Comparative study of dermoscopic hair removal methods," in ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing. Springer, 2019, pp. 12–21.
[14] T. Lee, V. Ng, R. Gallagher, A. Coldman, and D. McLean, "DullRazor®: A software approach to hair removal from images," Computers in Biology and Medicine, vol. 27, no. 6, pp. 533–543, 1997.
[15] F.-Y. Xie, S.-Y. Qin, Z.-G. Jiang, and R.-S. Meng, "PDE-based unsupervised repair of hair-occluded information in dermoscopy images of melanoma," Computerized Medical Imaging and Graphics, vol. 33, no. 4, pp. 275–282, 2009.
[16] Q. Abbas, M. E. Celebi, and I. F. García, "Hair removal methods: a comparative study for dermoscopy images," Biomedical Signal Processing and Control, vol. 6, no. 4, pp. 395–404, 2011.
[17] A. Huang, S.-Y. Kwan, W.-Y. Chang, M.-Y. Liu, M.-H. Chi, and G.-S. Chen, "A robust hair segmentation and removal approach for clinical images of skin lesions," in 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2013, pp. 3315–3318.
[18] M. T. B. Toossi, H. R. Pourreza, H. Zare, M.-H. Sigari, P. Layegh, and A. Azimi, "An effective hair removal algorithm for dermoscopy images," Skin Research and Technology, vol. 19, no. 3, pp. 230–235, 2013.
[19] P. Bibiloni, M. González-Hidalgo, and S. Massanet, "Skin hair removal in dermoscopic images using soft color morphology," in Conference on Artificial Intelligence in Medicine in Europe. Springer, 2017, pp. 322–326.
[20] J. Xie, L. Xu, and E. Chen, "Image denoising and inpainting with deep neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 341–349.
[21] C. Tian, L. Fei, W. Zheng, Y. Xu, W. Zuo, and C.-W. Lin, "Deep learning on image denoising: An overview," arXiv preprint arXiv:1912.13171, 2019.
[22] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, "Extracting and composing robust features with denoising autoencoders," in Proceedings of the 25th International Conference on Machine Learning, 2008, pp. 1096–1103.
[23] Z. Cui, H. Chang, S. Shan, B. Zhong, and X. Chen, "Deep network cascade for image super-resolution," in European Conference on Computer Vision. Springer, 2014, pp. 49–64.
[24] X. Mao, C. Shen, and Y.-B. Yang, "Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections," in Advances in Neural Information Processing Systems, 2016, pp. 2802–2810.
[25] V. Jain and S. Seung, "Natural image denoising with convolutional networks," in Advances in Neural Information Processing Systems, 2009, pp. 769–776.
[26] C. Dong, C. C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2015.
[27] M. Attia, M. Hossny, H. Zhou, S. Nahavandi, H. Asadi, and A. Yazdabadi, "Realistic hair simulator for skin lesion images: A novel benchmarking tool," Artificial Intelligence in Medicine, vol. 108, p. 101933, 2020.
[28] H. Zhao, O. Gallo, I. Frosio, and J. Kautz, "Loss functions for image restoration with neural networks," IEEE Transactions on Computational Imaging, vol. 3, no. 1, pp. 47–57, 2016.
[29] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro, "Image inpainting for irregular holes using partial convolutions," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 85–100.
[30] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1-4, pp. 259–268, 1992.
[31] T. Mendonça, M. Celebi, T. Mendonça, and J. Marques, "PH2: A public database for the analysis of dermoscopic images," Dermoscopy Image Analysis, 2015.


[32] G. Argenziano, H. Soyer, V. De Giorgi, D. Piccolo, P. Carli, and M. Delfino, Interactive Atlas of Dermoscopy (Book and CD-ROM). EDRA Medical Publishing & New Media, 2000.
[33] H. Mirzaalian, T. K. Lee, and G. Hamarneh, "Hair enhancement in dermoscopic images using dual-channel quaternion tubularness filters and MRF-based multilabel optimization," IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 5486–5496, 2014.
[34] H. Mirzaalian, "Hair Sim Software," http://www2.cs.sfu.ca/~hamarneh/software/hairsim/Welcome.html, accessed on 7th Mar 2019.
[35] F. Chollet et al., "Keras: The Python deep learning library," Astrophysics Source Code Library, 2018.
[36] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[37] Z. Wang and A. C. Bovik, "Mean squared error: Love it or leave it? A new look at signal fidelity measures," IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98–117, 2009.
[38] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli et al., "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[39] A. G. Barnston, "Correspondence among the correlation, RMSE, and Heidke forecast verification measures; refinement of the Heidke score," Weather and Forecasting, vol. 7, no. 4, pp. 699–709, 1992.
[40] Z. Wang, E. P. Simoncelli, and A. C. Bovik, "Multiscale structural similarity for image quality assessment," in The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, vol. 2. IEEE, 2003, pp. 1398–1402.
[41] Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81–84, 2002.
[42] H. R. Sheikh and A. C. Bovik, "Image information and visual quality," in 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3. IEEE, 2004, pp. iii–709.
[43] K. Egiazarian, J. Astola, N. Ponomarenko, V. Lukin, F. Battisti, and M. Carli, "New full-reference quality metrics based on HVS," in Proceedings of the Second International Workshop on Video Processing and Quality Metrics, vol. 4, 2006.

LIDIA TALAVERA-MARTÍNEZ was born in the province of Jaén, Spain, in 1993. She received the B.Sc. degree in industrial electronics and automation engineering from the University of the Balearic Islands (UIB) in 2016 and the M.Sc. degree in computer vision from the Autonomous University of Barcelona (UAB) in 2017. She is currently pursuing a Ph.D. degree in Information and Communication Technologies with the SCOPIA "Soft Computing, Image Processing and Aggregation" research group. Her research activity is mainly focused on computer vision, deep learning, and medical imaging.

PEDRO BIBILONI is a lecturer at the University of the Balearic Islands. He received a B.S. degree in Mathematics (2013) and a B.Sc. in Telecommunications Engineering (2015) at Universitat Politècnica de Catalunya, an M.Sc. in Information Security (2014) at University College London, and a Ph.D. at the University of the Balearic Islands (2018). His research interests are medical image processing and pattern recognition.

MANUEL GONZÁLEZ-HIDALGO was born in the province of León, Spain, in 1964. He received a degree in Mathematics (specialty in General Mathematics, orientation Mathematical Analysis) from the University of Valencia in 1988, and a Ph.D. in Computer Science from the University of the Balearic Islands (UIB) in 1995. He is currently an Associate Professor with the Department of Mathematics and Computer Science of the UIB. His research areas of interest include computer vision, image analysis, modeling and animation of deformable objects, analysis and synthesis of human movement, medical imaging and 3D modeling, and, recently, the study of aggregation operators and their applications to image processing and analysis, focusing on fuzzy mathematical morphology and its applications. He is currently working on Soft Computing techniques and their applications to biomedical image analysis. He is a member of the research group SCOPIA "Soft Computing, Image Processing and Aggregation" and a regular collaborator with the research group "Computer Graphics and Vision and AI Group (UGiVIA)"; moreover, he is a member of the Balearic Islands Health Research Institute. This research activity has been reflected in scientific publications in international journals and in papers presented at national and international conferences. He is a referee for several journals of reference in his research fields and has organized several congresses, special sessions, and other scientific activities.
