Medical Image Analysis Using Artificial Intelligence
Review Article
Hyun Jin Yoon1,2, Young Jin Jeong1,2, Hyun Kang2, Ji Eun Jeong1, Do-Young Kang1,2
1Department of Nuclear Medicine, Dong-A University Medical Center, Dong-A University College of Medicine; 2Institute of Convergence Bio-Health, Dong-A University, Busan, Korea
Received: May 2, 2019; Revised: May 21, 2019; Accepted: May 21, 2019
This is an Open-Access article distributed under the terms of the Creative Commons
Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0)
which permits unrestricted non-commercial use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Abstract
Automated analytical systems have begun to emerge as database systems that enable medical
images to be scanned on computers and assembled into big data.
Deep-learning artificial intelligence (AI) architectures have been developed and applied to
medical images, making high-precision diagnosis possible. For diagnosis, the medical images
need to be labeled and standardized. After pre-processing the data and entering them into the
deep-learning architecture, the final diagnosis results can be obtained quickly and accurately.
To solve the problem of overfitting because of an insufficient amount of labeled data, data
augmentation is performed through rotation, using left and right flips to artificially increase
the amount of data. Because various deep-learning architectures have been developed and
publicized over the past few years, the results of the diagnosis can be obtained by entering a
medical image. Classification and regression are performed by a supervised machine-learning
method and clustering and generation are performed by an unsupervised machine-learning
method. When the convolutional neural network method is applied to the deep-learning layer,
feature extraction can be used to classify diseases very efficiently and thus to diagnose various
diseases. AI, using a deep-learning architecture, has expertise in medical image analysis of the
nerves, retina, lungs, digital pathology, breast, heart, abdomen, and musculo-skeletal system.
Keywords
Artificial Intelligence (AI), Medical images, Deep-learning, Machine-learning, Convolutional
Neural Network (CNN), Big data
Introduction
This review describes medical imaging analysis of the nerves, retina, lung, digital pathology, breast, heart,
abdomen, and musculo-skeletal system using the AI methods that make up the deep-learning architecture.
3. AI research field
For the diagnosis of medical imaging, there should be labeled and standardized big data, as
shown in Fig. 1. AI is generally a concept that includes machine-learning and deep-learning,
and can be analyzed in various ways, depending on the characteristics of images stored in the
database. The data is pre-processed in the same order as in Fig. 2, entered into the deep-
learning architecture, and the final diagnosis results are obtained. The hyper-parameters are
changed and the deep-learning results are checked iteratively to optimize the parameters. In
order to solve the problem of over-fitting due to a lack of sufficient labeled data, data
augmentation is performed through rotation, left and right flip, and generative adversarial
network (GAN).
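The simple geometric augmentation described above (rotations plus left-right and up-down flips) can be sketched in a few lines of NumPy. This is an illustration, not the authors' code; the function and variable names are hypothetical:

```python
import numpy as np

def augment(image):
    """Return the original 2-D image plus simple geometric variants:
    three 90-degree rotations, a left-right flip, and an up-down flip."""
    variants = [image]
    for k in (1, 2, 3):                  # 90, 180, 270 degree rotations
        variants.append(np.rot90(image, k))
    variants.append(np.fliplr(image))    # left-right flip
    variants.append(np.flipud(image))    # up-down flip
    return variants

# One labeled slice yields six training samples.
slice_2d = np.arange(16).reshape(4, 4)
augmented = augment(slice_2d)
print(len(augmented))  # 6
```

A GAN-based generator would replace or extend this list with synthesized images rather than geometric transforms of existing ones.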
1. Data sets
Although medical images are stored in the picture archiving and communication system
(PACS), they are almost useless data for AI analysis. In order to use medical images for AI,
labeling, standardization, bounding box, and segmentation are required. These high quality
images require manual work, which can be time-consuming and can vary, depending on the
skill of the person. Standardized and anonymized well-labeled databases are very important
for developing and testing AI algorithms that require big data. There has been an attempt to
build a large-scale database system through these complex processes, and to improve
diagnostic performance through competition. These attempts have been quite successful and
appear to be most effective in increasing diagnostic accuracy.
In recent years, an anonymous medical image database has been released and researchers are
freely approaching and presenting research results. A database of F-18 fluorodeoxyglucose
(FDG) PET images from participants of the AD neuro-imaging initiative (ADNI) is being
developed to study early detection of AD patients. F-18 FDG PET images of the ADNI database
were used to distinguish AD patients with 88.64% specificity, 87.70% sensitivity, and 88.24%
accuracy.11) Magnetic resonance imaging (MRI) of the knee is the preferred method for
diagnosing a knee injury, but the analysis is time consuming and the likelihood of diagnostic
errors is high. MRNet, a CNN combined with logistic regression, was evaluated on a dataset of
1,370 knee MRI scans performed at the Stanford University Medical Center
(https://stanfordmlgroup.github.io). The results of the validation did not show a significant
difference from the analysis of 9 clinical experts at the Stanford University Medical Center.12)
The leader-board reports the average area under the receiver operating characteristic curve
(AUC; 0.917) of current (Jan 09, 2019) abnormal detection, anterior cruciate ligament tears, and
meniscal tear work (https://stanfordmlgroup.github.io/competitions/mrnet).
To screen for severe pneumothorax, a total of 13,292 frontal chest X-rays were visually labeled
by a radiologist to create a database.13) Candidate images for analysis were identified and
stored in a clinical PACS in the form of ‘digital imaging and communications in medicine’
(DICOM; National Electrical Manufacturers Association, Arlington, VA, USA). These well-
defined data sets can be used to train and evaluate different network architectures. Building
such a well-defined database requires a lot of time and effort. The National Institute of Health
(NIH) has opened the Chest X-ray14 set, so that it is publicly available (available at
https://nihcc.app.box.com/v/ChestXray-NIHCC). The NIH allows students to learn by using
well-organized and labeled data sets, and opens up a database for anyone to challenge and
develop optimized algorithms. LIDC and IDRI databases were created and released for the
diagnosis of lung cancer. For the diagnosis of skin cancer, the database can be opened
(https://isic-archive.com) to participate in the competition.5) The MR knee database is also
available and anyone interested in this field can participate in AI analysis
(https://stanfordmlgroup.github.io/competitions/mrnet). Most problems with large data sets
and public challenge data sets are listed at http://www.grand-challenge.org.
In order to construct a more efficient, accurate, and useful database, the Department of
Nuclear Medicine at Dong-A University Medical Center established a database system called
SortDB with IRM (http://www.irm.kr/). SortDB is software that allows
researchers to download desired DICOM files in a specified file format (nii, jpg, gif, etc.) by
entering a manageable list of Excel files. Because applying an AI algorithm requires big data,
SortDB was built as an automated database-generation program that stores and manages the data
held by small and medium-sized hospitals in the necessary format. A database is constructed with the system
configuration shown in Fig. 3, shortening the pre-processing time for creating a standardized
data set.
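The batch-export idea behind such a tool can be sketched as planning output file names from a manifest of DICOM paths. This is a hypothetical illustration only; SortDB's real interface is not described in the text, and the manifest columns and function names below are invented:

```python
import csv
import io
from pathlib import PurePosixPath

def plan_exports(manifest_csv, out_dir, fmt="jpg"):
    """Given a manifest listing patient IDs and DICOM paths, plan the
    output file names in the requested format (nii, jpg, gif, ...)."""
    plans = []
    for row in csv.DictReader(io.StringIO(manifest_csv)):
        src = PurePosixPath(row["dicom_path"])
        dst = PurePosixPath(out_dir) / f"{row['patient_id']}_{src.stem}.{fmt}"
        plans.append((str(src), str(dst)))
    return plans

manifest = "patient_id,dicom_path\nP001,/data/P001/scan01.dcm\n"
print(plan_exports(manifest, "/export", fmt="nii"))
# [('/data/P001/scan01.dcm', '/export/P001_scan01.nii')]
```

An actual converter would then read each DICOM file (e.g. with a DICOM library) and write it to the planned path.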
For quantitative analysis, images were normalized with a standard 12-parameter affine model
in SPM5 software.14) To perform quantitative evaluation using machine
learning, a pre-processing method that includes normalization of the image data must
be included. Since the accuracy of the normalized image depends on the standard
template or the variety of the process, there are considerable variations among researchers.
The accuracy of the quantitative analysis has a considerable impact on the quality of the image
transformation by pre-processing. In particular, the accuracy of AI analysis varies
considerably, depending on the quality of medical images for applying the algorithm. The
image in the DICOM file is linearly transformed using the return value of the
GetOutputData function of a library.15) The bit depth was set to 8, and the standardized uptake values
(SUV) were linearly converted to a 0–255 scale. With this linear transformation,
the image representing the SUV can be reproduced as a jpg file, though with some distortion. It is necessary
to study the image conversion methods so that medical images will suffer only minimal
deformation in the conversion process. It is necessary to maintain the consistency of AI
learning using image-aware CNNs by down-sampling commonly used images.13)
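The linear SUV-to-8-bit rescaling described above can be written directly in NumPy. This is a minimal sketch of the transformation, not the library call the text refers to; the function name and default bounds are assumptions:

```python
import numpy as np

def suv_to_uint8(suv, suv_min=None, suv_max=None):
    """Linearly rescale an SUV image to the 0-255 range of an 8-bit
    image, as in the DICOM-to-jpg conversion described in the text.
    The bounds default to the image's own minimum and maximum."""
    suv = np.asarray(suv, dtype=np.float64)
    lo = suv.min() if suv_min is None else suv_min
    hi = suv.max() if suv_max is None else suv_max
    scaled = (suv - lo) / (hi - lo) * 255.0
    return np.clip(np.round(scaled), 0, 255).astype(np.uint8)

img8 = suv_to_uint8([[0.0, 2.5], [5.0, 10.0]])
print(img8.dtype, img8.min(), img8.max())  # uint8 0 255
```

Quantizing continuous SUVs onto 256 levels is exactly where the distortion mentioned above enters: two SUVs closer than (hi − lo)/255 map to the same 8-bit value.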
Currently, CNN algorithms are implemented in various ways. The recent trend is to use Keras
(version 2.0.3, https://keras.io/) deep-learning library on TensorFlow (version 1.2.1,
https://www.tensorflow.org/?hl=ko) to implement a model of CNN. Deep-learning
architectures for AI analysis are VGG16/19, Xception, Inception, and ResNet models, which are
the best performing algorithms.16-19) The optimal algorithm for image classification may be
an algorithm that can reflect the feature characteristics of each medical image. Currently, the
location of such features is estimated through the activation map. However, at present, only
the experience of repeated execution of each medical image is the method of finding the
optimal model. The AlexNet algorithm, which won the ImageNet Large Scale Visual
Recognition Challenge in 2012,20) was used to classify chest X-ray images into 5 classes and
achieved 92.10% accuracy.21) AlexNet consists of 5 convolution layers and 3
fully-connected (FC) layers, and the last FC layer uses a softmax activation function. The
researcher21) down-sampled the DICOM file and converted it into a 256×256 pixel png file and
used it as the input image of AlexNet. The mini-batch size is 128, the number of training
iterations is 100, the adaptive learning rate is 1×10−3 for adaptive moment estimation (Adam)
optimizer, and the momentum is 0.5. The rectified linear unit (ReLU) activation functions were
used in the previous step of the max-pooling layers and the L2 regularization was adjusted to
1×10−4.
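The two activation functions mentioned (ReLU before the max-pooling layers, softmax in the last FC layer) are standard and can be written directly in NumPy. The 5-class logits below are made-up illustration values, not outputs of the cited model:

```python
import numpy as np

def relu(x):
    """Rectified linear unit, applied before the max-pooling layers."""
    return np.maximum(0.0, x)

def softmax(logits):
    """Softmax of the final fully-connected layer: turns the 5 class
    scores into probabilities that sum to 1."""
    z = logits - np.max(logits)   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1, -1.0, 0.5])  # hypothetical 5-class logits
probs = softmax(scores)
print(int(probs.argmax()), round(float(probs.sum()), 6))  # 0 1.0
```

The predicted class is the argmax of the softmax output; in training, these probabilities feed a cross-entropy loss minimized by the Adam optimizer with the hyper-parameters listed above.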
Machine Learning
One of the most successful methods is the principal component analysis (PCA) method, which
is based on the eigen-image approach.22) After analyzing the images through PCA, support
vector machine (SVM) was used as a classifier to distinguish the boundary between normal
and demented patients and to improve the accuracy by diagnosing and analyzing the initial
dementia. PCA was applied to single photon emission computed tomography (SPECT) images
and the possibility of classification was greatly improved, and the results were presented
through 3-dimensional (3D) scattering.23) Quantitative evaluation and classification are among
the most efficient and logical methods. The accuracy of classification of AD by applying PCA-
SVM to SPECT and PET images was 96.7% and 89.52%, respectively.24) Using Gaussian mixture
models, a method of automatically selecting the region of interest (ROI) of a 3D functional
brain image containing high and low activation region information was presented.25) Based on
classification using eigenvector decomposition and SVM, including concept extraction by
representative-vector separation using principal component analysis, PCA and independent
component analysis appear to be a very useful direction for early AD diagnosis by brain image
analysis and pattern-based computer-aided disease discrimination (CAD).11)
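The eigen-image PCA step can be sketched as a plain eigen-decomposition of the covariance matrix of flattened images. This is a minimal illustration of the idea in refs. 22–24, not the authors' pipeline; in a real application the projected features would then be fed to an SVM classifier:

```python
import numpy as np

def pca_fit(X, n_components=2):
    """Eigen-image PCA: find the top principal components of a matrix
    whose rows are flattened images."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)
    _, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    comps = vecs[:, ::-1][:, :n_components]   # keep the top components
    return mean, comps

def pca_transform(X, mean, comps):
    """Project images onto the principal components (the low-dimensional
    features an SVM would then separate)."""
    return (X - mean) @ comps

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 16))   # 20 "images" of 16 flattened pixels each
mean, comps = pca_fit(X, n_components=3)
Z = pca_transform(X, mean, comps)
print(Z.shape)  # (20, 3)
```

Reducing each image to a handful of principal-component scores is what makes the subsequent SVM boundary between normal and demented groups tractable with small datasets.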
1. AI in brain imaging
In brain research using AI, many studies have been conducted in the field of AD classification,
anatomical segmentation of brain regions, and tumor detection. AD/mild cognitive
impairment/healthy control classification was successfully performed by using the Gaussian
restricted Boltzmann machine (RBM) to find feature expressions in volume patches of MRI and
PET images.26) The 3D CNN in AD classification is superior to other algorithm classifiers.27,28)
CNNs have been used to automatically segment MR images of the human brain.29) Segmentation of
the striatum was performed using a deep CNN, and the results were compared with
those of FreeSurfer.30) In the brain area, manual segmentation is time-consuming, with
individual differences occurring, while automatic segmentation has significant difficulties in
performing in complex structures. A 25-layer deep network called the ‘voxel-wise residual
network’ (VoxResNet) was developed and performed automatic segmentation successfully.31) To
demonstrate end-to-end nonlinear mapping from MR images to CT images, a 3D fully
convolutional network was employed and verified on a real pelvic CT/MRI data set.32) Performance was
improved by using CNNs with two input volumes, and excellent performance was observed when
evaluating the input and output forms on the MRI and PET images of the ADNI database.33)
2. AI in chest imaging
By introducing the multiple-instance learning framework, a deconvolutional CNN is constructed to
generate a heat map of suspicious regions.34) A unique set of radiologic datasets of publicly
available chest X-rays and their reports were used to find and report 17 unique patterns by
applying CNN algorithms.35) It has been reported that the presence of interstitial patterns
has been detected by applying a segmentation-based label propagation method to a dataset of
interstitial lung disease,36) and it has been reported that lung texture patterns are classified
using CNN.37) A method for classifying frontal and lateral chest X-ray images using deep-
learning methods and automating metadata annotations has been reported.38) A new method
of using a 3D CNN for false positive reduction in automatic pulmonary nodule detection in a
volumetric CT scan has been proposed. A 3D CNN can encode more spatial information and
extract more representative features through a hierarchical architecture trained with 3D
samples. The proposed algorithm has achieved high competition metric scores, has been
extensively tested in the LUNA16 Challenge, and can be applied to 3D PET images.
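The 3D-sample idea can be illustrated by cropping cubic patches around candidate locations, the kind of input a false-positive-reduction 3D CNN consumes. This is a hypothetical sketch; the patch size and names are illustrative, not taken from the cited work:

```python
import numpy as np

def extract_patch_3d(volume, center, size=8):
    """Crop a cubic patch of side `size` around a candidate nodule
    location (z, y, x). Assumes the patch lies fully inside the volume."""
    half = size // 2
    z, y, x = center
    return volume[z - half:z + half, y - half:y + half, x - half:x + half]

vol = np.zeros((32, 32, 32), dtype=np.float32)   # a dummy CT volume
patch = extract_patch_3d(vol, (16, 16, 16), size=8)
print(patch.shape)  # (8, 8, 8)
```

Each such patch, centered on a detector candidate, is classified by the 3D CNN as nodule or false positive; the same patch-extraction step would carry over to 3D PET volumes.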
3. AI in breast imaging
Since most mammograms are 2D and the amount of data is large, they can be successfully
analyzed using deep-learning methods developed for natural images. The discovery of breast cancer
involves the detection and classification of tumor lesions, the detection and classification of
micro-calcifications, and risk-scoring work, which can be effectively analyzed by CNN or RBM
methods. For the measurement of breast density, CNNs for feature extraction were used,39)
and a modified region proposal CNN (R-CNN) has been used for localization. It has been
reported that U-net has been used to segment breast and fibro-glandular tissue in MRI
datasets, and accurate breast density calculation results were observed.40) It has been reported
that a short-term risk assessment model achieving a predictive accuracy of 71.4% has been
developed, which calculates a risk score from the mammographic X-ray image using a
multilayer-perceptron classifier as the risk prediction module.41)
4. AI in cardiac imaging
Cardiac AI research fields include left ventricle segmentation, slice classification, image
quality assessment, automated calcium scoring, coronary centerline tracking, and super-
resolution (SR). 2D and 3D CNN techniques are mainly used for classification, and deep-
learning techniques such as U-net segmentation algorithm are used for segmentation. The
high-resolution 3D volume has been reconstructed from a 2D image stack using a novel image
SR approach.42) The SR-CNN model is computationally efficient, produces image quality superior
to conventional SR methods, and is advantageous for image segmentation and motion
tracking.43) Using multi-stream CNN (3 views), it has been reported that low-dose chest CT
can be identified with high accuracy by deep learning when the ROI is considered as a
coronary artery calcification candidate over 130 HU.44) Coronary calcium in gated cardiac CT
angiography was detected using 3D CNN and multi-stream 2D CNN.45)
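The 130-HU candidate step mentioned above reduces to a simple threshold on the CT volume before the CNN classifies each candidate. A minimal sketch, with made-up Hounsfield-unit values:

```python
import numpy as np

def calcium_candidates(ct_hu, threshold=130):
    """Boolean mask of voxels above the calcification threshold
    (130 HU), the candidate-selection step for calcium scoring."""
    return np.asarray(ct_hu) > threshold

ct = np.array([[100, 140],
               [500, -50]])     # Hounsfield units, illustrative values
mask = calcium_candidates(ct)
print(mask.tolist())  # [[False, True], [True, False]]
```

Connected regions of the mask form the candidate ROIs; the multi-stream CNN then decides which candidates are truly coronary calcifications.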
5. AI in musculo-skeletal imaging
It is reported that highly accurate real-time 2D/3D registration is possible, even over a greatly
enlarged capture range.47) Several deep-learning methods have been developed for
automatically evaluating the age of skeletal bones using X-ray imaging and their performance
has been verified by showing an average discrepancy of about 0.8 years.48)
Discussion
Diagnosis using AI is quickly performed and its accuracy is very high. AI diagnosis is becoming
an important technology for future diagnostic systems. However, AI diagnosis needs to be
supplemented in several aspects. AI learning using deep-learning architecture requires big
data. However, preparing most medical images requires technical expertise and manual labor, making it
difficult to build big-data systems. It is also time-consuming to create a database of standardized and
labeled images of medical images. Most people are building databases by manually pre-
processing all medical images for AI application. Performing data augmentation through
rotation, left/right flip, and up/down reversal of a medical image has a positive effect on the
accuracy of learning. Data augmentation using GAN is being applied in various areas of the
medical field.21,49) In liver lesion classification using CT images, it was reported that
accuracy increased by 7.1% when the amount of data was increased using a GAN.49) In chest
pathology classification using X-ray images, accuracy was reported to increase by 21.23%.21)
It has been reported that synthesized images using GAN can be a method of data
augmentation in medical image analysis. However, more research is needed to see if synthetic
images can be used for AI learning to determine clinical diagnostics that require rigorous
accuracy. Increasing data through the GAN is still controversial, but it is a field of research
that is needed. Medical images stored in the PACS system must be converted into a pre-processed
form before AI analysis. Research is needed to automatically generate
standard images so that medical images can be used directly in deep-learning.
AI analysis of medical images requires labeled, standardized and optimized images. There is a
clear difference in the accuracy of the final classification between pre-processing and non-
pre-processing medical images. However, since the pre-processed image has noise different
from that of the original image, it is necessary to study the effect of the resulting image on the
accuracy. Deep-learning architecture can have various forms, and new excellent architecture
for image analysis is being released every year through competition. However, it is only
through experience that one can determine which architecture yields the most accurate result for each
type of medical image.
Medical imaging diagnostics using deep-learning architecture have reached expert levels in
the areas of the nerves, retina, lung, digital pathology, breast, heart, abdomen, and
musculo-skeletal system. However, when different medical images from other hospital protocols are applied, the
accuracy is significantly reduced and new optimization parameters must be found. Even with
only small changes in resolution and noise, AI diagnostics are very fragile. There is
also a lack of logical explanation of how a diagnosis is reached. Although there is an effort
to maintain consistency through data harmonization, there is the possibility that the overall
quality of the image will deteriorate. It is necessary to study the change of the accuracy by
using the data whose resolution is decreased. It is necessary to develop algorithms that can
recognize and generalize images of medical images with different resolutions or noise, but it
will take a considerable amount of time. Nevertheless, the diagnosis of medical images by the
deep-learning architecture developed up till now is almost at expert diagnosis level. In
addition, many data that have not yet been analyzed can be discovered and studied with AI
diagnosis, and this can accurately and quickly diagnose and improve the quality of medical
care dramatically.
Acknowledgements
This research was supported by the project at Institute of Convergence Bio-Health, Dong-A
University funded by Busan Institute of S&T Evaluation and Planning (Grant no: 2020).
The authors would like to thank Dr. Adrian Ankiewicz of the Australian National University
(Australia) for helpful comments on the manuscript.
Conflicts of Interest
All relevant data are within the paper and its Supporting Information files.
The study was approved by the institutional review board (IRB approval number; DAUHIRB-17-
108).
References
1. Farooq A, Anwar S, Awais M, and Alnowami M. Artificial intelligence based smart diagnosis of
Alzheimer's disease and mild cognitive impairment, Piscataway: 2017 IEEE International Smart Cities
Conference (ISC2) 2017 p. 1-4.
2. Vieira S, Pinaya WH, and Mechelli A. Using deep learning to investigate the neuroimaging correlates of
psychiatric and neurological disorders: methods and applications. Neurosci Biobehav Rev 2017;74((Pt
A)):58-75.
4. Zeiler MD, and Fergus R. Visualizing and understanding convolutional networks. New York: Springer-
Verlag 2014:818-833.
5. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, and Blau HM, et al. Dermatologist-level classification of
skin cancer with deep neural networks. Nature 2017;542:115-118.
6. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, and Narayanaswamy A, et al. Development and
validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus
photographs. JAMA 2016;316:2402-2410.
7. Golan R, Jacob C, and Denzinger J. Lung nodule detection in CT images using deep convolutional neural
networks, Piscataway: 2016 IEEE International Joint Conference on Neural Networks (IJCNN) 2016 p.
243-250.
8. Kooi T, Litjens G, van Ginneken B, Gubern-Mérida A, Sánchez CI, and Mann R, et al. Large scale deep
learning for computer aided detection of mammographic lesions. Med Image Anal 2017;35:303-312.
9. Liu F, Zhou Z, Samsonov A, Blankenbaker D, Larison W, and Kanarek A, et al. Deep learning approach for
evaluating knee MR images: achieving high diagnostic performance for cartilage lesion detection.
Radiology 2018;289:160-169.
10. Sharif MS, Abbod M, Amira A, and Zaidi H. Artificial neural network-based system for PET volume
segmentation. Int J Biomed Imaging 2010;2010:105610.
11. Illán IA, Górriz JM, Ramírez J, Salas-Gonzalez D, López MM, and Segovia F, et al. 18F-FDG PET imaging
analysis for computer aided Alzheimer's diagnosis. Inf Sci 2011;181:903-916.
12. Bien N, Rajpurkar P, Ball RL, Irvin J, Park A, and Jones E, et al. Deep-learning-assisted diagnosis for knee
magnetic resonance imaging: development and retrospective validation of MRNet. PLoS Med 2018;15.
13. Taylor AG, Mielke C, and Mongan J. Automated detection of moderate and large pneumothorax on
frontal chest X-rays using deep convolutional neural networks: a retrospective study. PLoS Med 2018;15.
14. Woods RP, Grafton ST, Holmes CJ, Cherry SR, and Mazziotta JC. Automated image registration: I.
general methods and intrasubject, intramodality validation. J Comput Assist Tomogr 1998;22:139-152.
15. DCMTK V3.6.5. DicomImage Class Reference. Arlington (VA): National Electrical Manufacturers
Association (NEMA) 2019 Available from:
https://support.dcmtk.org/docs/classDicomImage.html#ac1b5118cbae9e797aa55940fcd60258e [cited
2019 April 7].
16. Simonyan K, and Zisserman A. Very deep convolutional networks for large-scale image recognition,
Paper presented at: 3rd International Conference on Learning Representations (ICLR), 2015 May 7-9,
San Diego (CA), USA p. 14.
17. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, and Anguelov D et al. Going deeper with convolutions,
Piscataway: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2015 p. 1-9.
18. He K, Zhang X, Ren S, and Sun J. Deep residual learning for image recognition, Piscataway: 2016 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR) 2016 p. 770-778.
19. Chollet F. Xception: deep learning with depthwise separable convolutions, Paper presented at: 2017 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), 2017 Jul 21-26, Honolulu (HI), USA p.
1251-1258.
20. Krizhevsky A, Sutskever I, and Hinton GE. ImageNet classification with deep convolutional neural
networks. Adv Neural Inf Process Syst 2012;2:1097-1105.
21. Salehinejad H, Valaee S, Dowdell T, Colak E, and Barfett J. Generalization of deep neural networks for
chest pathology classification in X-rays using generative adversarial networks, Paper presented at: IEEE
International Conference on Acoustics, Speech and Signal Processing (IEEE ICASSP), 2018 Apr 15-20,
Calgary (AB), Canada p. 990-994.
22. Turk M, and Pentland A. Eigenfaces for recognition. J Cogn Neurosci 1991;3:71-86.
23. Korez R, Likar B, Pernuš F, and Vrtovec T. Model-based segmentation of vertebral bodies from MR
images with 3D CNNs. Cham: Springer 2016:433-441.
24. López M, Ramírez J, Górriz JM, Álvarez I, Salas-Gonzalez D, and Segovia F, et al. Principal component
analysis-based techniques and supervised classification schemes for the early detection of Alzheimer's
disease. Neurocomputing 2011;72:1260-1271.
25. Górriz JM, Lassl A, Ramírez J, Salas-Gonzalez D, Puntonet CG, and Lang EW. Automatic selection of
ROIs in functional imaging using Gaussian mixture models. Neurosci Lett 2009;460:108-111.
26. Suk HI, Lee SW, and Shen D; Alzheimer's Disease Neuroimaging Initiative. Hierarchical feature
representation and multimodal fusion with deep learning for AD/MCI diagnosis. Neuroimage
2014;101:569-582.
27. Payan A, and Montana G, . Predicting Alzheimer's Disease: A Neuroimaging Study with 3D Convolutional
Neural Networks 2015 The Computing Research Repository (CoRR). Ithaca (NY) Available from:
https://arxiv.org/pdf/1502.02506v1.pdf [cited 2019 April 7].
28. Hosseini-Asl E, Gimel'farb G, and El-Baz A, . Alzheimer's Disease Diagnostics by a Deeply Supervised
Adaptable 3D Convolutional Network 2016 The Computing Research Repository (CoRR). Ithaca (NY)
Available from: https://arxiv.org/pdf/1607.00556.pdf [cited 2019 April 7].
29. de Brebisson A, and Montana G. Deep neural networks for anatomical brain segmentation, Paper
presented at: 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),
2015 Jun 7-12, Boston (MA), USA p. 22-28.
30. Choi H, and Jin KH. Fast and robust segmentation of the striatum using deep convolutional neural
networks. J Neurosci Methods 2016;274:146-153.
31. Chen H, Dou Q, Yu L, Qin J, and Heng PA. VoxResNet: deep voxelwise residual networks for brain
segmentation from 3D MR images. Neuroimage 2018;170:446-455.
32. Nie D, Cao X, Gao Y, Wang L, and Shen D. Estimating CT image from MRI data using 3D fully
convolutional networks. Deep Learn Data Label Med Appl (2016) 2016;2016:170-178.
33. Li R, Zhang W, Suk HI, Wang L, Li J, and Shen D, et al. Deep learning based imaging data completion for
improved brain disease diagnosis. Med Image Comput Comput Assist Interv 2014;17(Pt 3):305-312.
34. Kim H, and Hwang S, . Deconvolutional Feature Stacking for Weakly-Supervised Semantic Segmentation
2016 Cornell University. Ithaca (NY) Available from: https://arxiv.org/pdf/1602.04984.pdf [cited 2019
April 7].
35. Shin HC, Roberts K, Lu L, Demner-Fushman D, Yao J, and Summers RM. Learning to read chest X-rays:
recurrent neural cascade model for automated image annotation, Paper presented at: 2016 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), 2016 Jun 27-30, Las Vegas (NV), USA p.
2497-2506.
36. Gao M, Xu Z, Lu L, Wu A, Nogues I, and Summers RM et al. Segmentation label propagation using deep
convolutional neural networks and dense conditional random field, Paper presented at: 2016
International Symposium on Biomedical Imaging (ISBI), 2016 Apr 13-16, Prague, Czech p. 1265-1268.
37. Gao M, Bagci U, Lu L, Wu A, Buty M, and Shin HC, et al. Holistic classification of CT attenuation patterns
for interstitial lung diseases via deep convolutional neural networks. Comput Methods Biomech Biomed
Eng Imaging Vis 2018;6:1-6.
38. Rajkomar A, Lingam S, Taylor AG, Blum M, and Mongan J. High-throughput classification of radiographs
using deep convolutional neural networks. J Digit Imaging 2017;30:95-101.
39. Fonseca P, Mendoza J, Wainer J, Ferrer J, Pinto J, and Guerrero J et al. Automatic breast density
classification using a convolutional neural network architecture search procedure, Paper presented at:
SPIE Medical Imaging, 2015 Feb 21-26, Orlando (FL), USA p. 8.
40. Dalmış MU, Litjens G, Holland K, Setio A, Mann R, and Karssemeijer N, et al. Using deep learning to
segment breast and fibroglandular tissue in MRI volumes. Med Phys 2017;44:533-546.
41. Qiu Y, Wang Y, Yan S, Tan M, Cheng S, and Liu H et al. An initial investigation on developing a new
method to predict short-term breast cancer risk based on deep learning technology, SPIE Medical
Imaging, 2016 Feb 27-Mar 3, San Diego (CA), USA p. 6.
42. Avendi MR, Kheradvar A, and Jafarkhani H. A combined deep-learning and deformable-model approach
to fully automatic segmentation of the left ventricle in cardiac MRI. Med Image Anal 2016;30:108-119.
43. Oktay O, Bai W, Lee M, Guerrero R, Kamnitsas K, and Caballero J et al. Multi-input cardiac image super-
resolution using convolutional neural networks. Cham: Springer 2016:246-254.
44. Lessmann N, Išgum I, Setio AAA, de Vos BD, Ciompi F, and de Jong PA et al. Deep convolutional neural
networks for automatic coronary calcium scoring in a screening study with low-dose chest CT, SPIE
Medical Imaging, 2016 Feb 27-Mar 3, San Diego (CA), USA p. 6.
45. Wolterink JM, Leiner T, de Vos BD, van Hamersvelt RW, Viergever MA, and Išgum I. Automatic coronary
artery calcium scoring in cardiac CT angiography using paired convolutional neural networks. Med
Image Anal 2016;34:123-136.
46. Cai Y, Landis M, Laidley DT, Kornecki A, Lum A, and Li S. Multi-modal vertebrae recognition using
transformed deep convolution network. Comput Med Imaging Graph 2016;51:11-19.
47. Miao S, Wang ZJ, and Liao R. A CNN regression approach for real-time 2D/3D registration. IEEE Trans
Med Imaging 2016;35:1352-1363.
48. Spampinato C, Palazzo S, Giordano D, Aldinucci M, and Leonardi R. Deep learning for automated
skeletal bone age assessment in X-ray images. Med Image Anal 2017;36:41-51.
49. Frid-Adar M, Diamant I, Klang E, Amitai M, Goldberger J, and Greenspan H. GAN-based synthetic
medical image augmentation for increased CNN performance in liver lesion classification.
Neurocomputing 2018;321:321-331.