Final Report Submission - Ameya, Ananya
Project Report
of B.Tech. Degree
In
Information Technology
By
Ananya [1805213009]
Under the supervision of
Prof. Y N Singh
May 2022
CONTENTS
S No Content Page No
DECLARATION I
CERTIFICATE II
ACKNOWLEDGEMENT III
ABSTRACT IV
LIST OF FIGURES V
1 INTRODUCTION 1
2 LITERATURE REVIEW 3
3 METHODOLOGY 5
4 EXPERIMENTAL RESULTS 15
5 CONCLUSION 33
REFERENCES 36
Declaration
We hereby declare that this submission is our own work and that, to the best of our knowledge and
belief, it contains no material previously published or written by another person, nor material which
to a substantial extent has been accepted for the award of any degree or diploma of a university
or other institute of higher learning, except where due acknowledgment has been made in the text.
The project has not been submitted by us to any other institute for the requirement of any other
degree.
Name – Ananya
Roll No. – 1805213009
Branch – IT
Name – Ameya Srivastava
Roll No. – 180523008
Branch – IT
CERTIFICATE
This is to certify that the project report entitled “Analysis of X-ray Images for Covid-19 using
CNN”, presented by Ananya and Ameya Srivastava in partial fulfillment of the requirements for the
award of the degree of Bachelor of Technology in Information Technology, is a record of work
carried out by them under my supervision and guidance at the Department of Computer Science and
Engineering, Institute of Engineering and Technology, Lucknow.
It is also certified that, to the best of my knowledge, this project has not been submitted to any
other institute for the award of any other degree.
Prof. Y N Singh
ACKNOWLEDGMENT
We wish to thank everyone who contributed and supported us in our project work. Our deep bow
of gratitude begins with Prof. Y N Singh, whose continuous guidance and critical, constructive
feedback has not only enhanced our work but also developed our analytical and writing skills. We
are duly thankful to Dr. Tulika Narang for her input in improving the quality of our work. We
reserve a special place for our advisor, Promila Bahadur, for her enthusiastic encouragement and
continuous advice. We express our heartfelt gratitude to her for her compassion and unwavering
confidence in us. We thank the project supervisor, who invested their time in guiding the team to
achieve its goal. We also hold in high esteem all the other supervisors and panel members,
especially those present at our project presentations, whose comments and advice have improved
our presentation skills.
ABSTRACT
COVID-19 cases have shown a rising trend in the country, primarily due to the emergence of
mutant variants of Omicron, which have a high degree of transmissibility. The return to normal
routines and the relaxation of mask mandates and social-distancing norms in many states have
further increased the spread of these variants. Rising Covid-19 cases and a shortage of reliable and
less time-consuming testing devices mark a new beginning for X-ray analysis using machine
learning techniques. The emergence of the Covid-19 virus has had a hazardous impact on human
life. Therefore, there is a pressing need to find an effective and faster way to detect the Covid-19
virus in patients.
The convolution neural network we have designed for our model uses the standard dataset
available from the University of Waterloo.
List of Figures
1 Layers of CNN 14
4 Dataset description-1 19
5 Dataset description-1 20
8 Pre-processed image 26
10 Model performance 28
CHAPTER 1
INTRODUCTION
Information technology has the capacity to fill many of the holes in the safety net for the frontline
workers, the doctors, nurses, and health workers who have worked day and night to serve
humankind in every possible way. In this work we propose a preliminary test for COVID-19 that
can detect Covid-19 cases through the classification of radiological images [2]. With the Covid-19
infection rate rising, we need to find a constructive and efficacious way to diagnose the Covid-19
virus.
Radiology is the branch of medicine that uses radiant energy to detect disease and to guide its
treatment. Artificial intelligence and data science have excelled in many fields, often beyond
expectations, and they are therefore now being applied to clinical problems as well. Significant
progress has already been made in prediction tasks such as brain tumor and heart disease
detection.
Therefore, our project highlights the use of artificial intelligence for identifying Covid-19
patients. Several models are available for transfer learning, e.g. VGG, ResNet, and Xception; we
have used ResNet for our model. The dataset used is publicly available from the University of
Waterloo and consists of 3000 Covid-19 positive and negative chest X-ray images. The work is
targeted at the specific task of detecting the coronavirus from the digitally scanned chest X-ray
image of a suspected individual.
The artificial neural network is implemented as a convolution neural network and trained on the
available dataset. Using our understanding of CNNs, we implemented the network to suit our
goals and requirements in the field of radiology [3]. There are two main challenges when
implementing any CNN model in practice: in most cases, only a small dataset is available, and the
model tends to over-fit. We have tried to minimize both; our dataset is comparatively large, which
allows the model to be trained well, and the model accuracy shows that it is not ill-fitted.
The model has successfully learned the spatial and other features of the images by using the
concept of backpropagation. We have made a sincere effort to deliver something worthwhile. As
we head out of this pandemic, our goal of Covid-19 detection from radiology with a CNN helps to
fill some of the holes in the safety net for frontline workers.
CHAPTER 2
LITERATURE REVIEW
This chapter gives an insight into various studies that have been conducted by different
researchers, and it explains the terms used in connection with the CNN model we have used for
Covid-19 detection and classification. The chapter aims to portray the present status of the
problem to be addressed, along with its history.
As discussed in the introduction, we apply transfer learning (using ResNet, chosen from among
models such as VGG, ResNet, and Xception) with a convolution neural network trained on the
publicly available University of Waterloo dataset of 3000 Covid-19 positive and negative chest
X-ray images.
The model learns the spatial and other features of the images using the concept of
backpropagation. Digging a little deeper, we tested the performance of the model under every
severe condition we could, and we also examined how well transfer learning works for our CNN
model. Finally, we merged the two datasets and performed 10-fold cross-validation to investigate
the effect of the size of the available data on accuracy, precision, and recall. The experimental
evaluation demonstrated the potential of building diagnostic tools for automatic detection of
COVID-19 positive cases from chest X-ray images with deep convolution neural networks; the
development of larger and clinically standardized datasets would further help in this direction.
We took the entire dataset and split it into training, testing, and validation parts in the ratio
40:40:20. The training set is used to make the model learn the features, and the testing set is used
to check whether the model also works well in extreme situations. Before training, we performed
intensive pre-processing of the images so that the model's performance would be high.
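A minimal sketch of this splitting step is shown below, assuming the image paths and labels have already been loaded into lists; the use of scikit-learn and the variable names are assumptions, not the exact implementation used in the project.

from sklearn.model_selection import StratifiedKFold, train_test_split

def split_dataset(image_paths, labels, seed=42):
    # Hold out 40% of the data for testing, keeping the class balance.
    x_rest, x_test, y_rest, y_test = train_test_split(
        image_paths, labels, test_size=0.40, stratify=labels, random_state=seed)
    # One third of the remaining 60% becomes validation (0.60 * 1/3 = 0.20),
    # which yields the overall 40:40:20 train/test/validation ratio.
    x_train, x_val, y_train, y_val = train_test_split(
        x_rest, y_rest, test_size=1 / 3, stratify=y_rest, random_state=seed)
    return x_train, x_val, x_test, y_train, y_val, y_test

# 10-fold cross-validation over the merged dataset, as described above.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)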
Liu F et al. stated that the convolution neural network has always been dominant in image-related
prediction tasks, as it learns the features of images thoroughly and efficiently [5]. It is therefore
used extensively in computer vision, where an artificial agent is made to perceive information
from the real world in much the same way as the human eye and brain do. The CNN architecture
consists of various layers: the convolution layers, the pooling layers, the biases, and other building
blocks chained together. CNNs have a wide variety of applications in radiology, which studies the
radiation passed through or emitted by the organs of the body. Being familiar with the advantages,
disadvantages, and usage of CNNs allows us to extract the best performance from them, which
ultimately enhances the performance and accuracy of our model as a whole. CNNs are now used
in almost every field involving image prediction tasks and have achieved phenomenal results,
especially in medical applications that enhance the value of patient healthcare.
A model is well built only when the data provided to it is accurate and of high information value.
Often a dataset that is merely large in volume wastes memory without providing any meaningful
information to the model; as a result the model collapses, or under-fits or over-fits, all of which
are highly undesirable.
Yasaka and Huang have stated that the convolution neural network, a class of artificial neural
networks, typically shows extraordinary, even beyond-human performance in image-related tasks
in computer vision and image processing [3]. Existing areas of CNN application include prediction
and classification tasks such as image classification, object detection, video processing, natural
language processing, and speech recognition. A CNN has an enormously high capacity for
learning and extracting features from the input image and mapping them to the correct output
class. This power comes from its many hidden layers, combined with backpropagation, which lets
it learn complex features as well. The larger the quality dataset available for training, the higher
the performance achieved. Many researchers are working to advance the already advanced CNN
further by choosing appropriate activation functions, optimizing the parameters, reducing the value
of the loss function as much as possible, applying regularization, and making other innovations to
the typical CNN architecture.
It is worth noting that the idea of extracting only the spatial, meaningful information from the
input samples, leaving behind useless information so that the model learns and works more
efficiently with lower complexity, has gained substantial attention [6]. Similarly, the concept of
using linked blocks of layers as a fundamental discrete unit is also coming to light at present.
Several surveys focus on the intrinsic taxonomy contained in typical CNN designs. The current
innovations in CNN architecture can be classified into seven broad domains: spatial exploitation,
depth, width, multi-path, feature-map exploitation, channel boosting, and attention. In addition to
a basic understanding of the components, the typical challenges involved are also studied and
applications are examined thoroughly. In a nutshell, a CNN model with transfer learning using
ResNet is best suited for our project, as it gives the desired outcome and provides a benefit to
society as a whole.
CHAPTER 3
METHODOLOGY
3.1 CONVOLUTIONAL NEURAL NETWORKS
Convolutional Neural Networks, also known as CNNs or ConvNets, are a type of feed-forward
artificial neural network whose connectivity structure is inspired by the organization of the animal
visual cortex. Small clusters of cells in the visual cortex are sensitive to certain areas of the visual
field. Individual neuronal cells in the brain respond or fire only when certain orientations of edges
are present. Some neurons activate when shown vertical edges, while others fire when shown
horizontal or diagonal edges. A convolutional neural network is a type of artificial neural network
used in deep learning to evaluate visual information. These networks can handle a wide range of
tasks involving images, sounds, texts, videos, and other media.
Fig. 2 Stack new layers in the neural network
We can define convolution as the process of taking a filter (kernel) and sliding it over the image,
so that only the desired information is extracted while the rest is filtered out. If the image is
represented as a matrix of pixels, the filter is also represented as a matrix and is moved across the
entire image matrix, covering a row at a time and then shifting by the desired number of columns
at each step; the size of this shift is known as the stride. At each position, the image patch under
the filter is multiplied element-wise by the weight matrix of the filter and the results are summed,
moving along the row one stride at a time. When the end of a row is reached, the filter moves
down to the next row and the process repeats.
When this operation is performed, we obtain a filtered output image whose size is given by the
formula:
Output width = (W - F + 2P) / S + 1
Output height = (H - F + 2P) / S + 1
W: width of the input image
H: height of the input image
F: filter (kernel) size
P: padding
S: stride
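A short numeric sketch of this formula follows; the input and filter sizes are illustrative values only, not settings taken from our model.

def conv_output_size(w, h, f, s=1, p=0):
    # (W - F + 2P) / S + 1 applied to both spatial dimensions.
    return (w - f + 2 * p) // s + 1, (h - f + 2 * p) // s + 1

print(conv_output_size(224, 224, f=3, s=1, p=1))  # (224, 224): "same" padding keeps the size
print(conv_output_size(224, 224, f=3, s=2, p=1))  # (112, 112): a stride of 2 halves each dimension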
3.2 MODEL FOR COVID-19 DETECTION
We use a convolution neural network (CNN) to process the X-ray images and analyze the
information for the detection of Covid-19 infection.
The proposed system begins with pre-processing of the X-ray image. This consists of three steps:
resizing all images to a standard pixel size (grayscale), normalization of the dataset to avoid
irregularities, and finally standardization. In the next step, we remove the surroundings, which do
not offer relevant information for the task and may produce biased results. After this initial stage,
a classification model uses the label attached to each chest X-ray describing the image projection:
frontal (posteroanterior and anteroposterior) or lateral. This model allows us to filter images
efficiently and keep the frontal-projection images, which offer more information than the lateral
ones. In the following stage, we apply segmentation to the lungs to extract detailed and accurate
information from the X-ray and concentrate only on the relevant details. In the next stage, we
apply digital image pre-processing steps such as rescaling, dilation, and erosion to make the image
an appropriate input to the final classification model. The processed image is then provided as the
input to the convolution neural network classifier, which classifies the image based on what it has
learned from the training dataset and finally gives the outcome as positive or negative.
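A minimal sketch of the three pre-processing steps listed above, using OpenCV, is given below; the 224 x 224 target size and the function name are assumptions rather than the exact settings used in this project.

import cv2
import numpy as np

def preprocess_xray(path, size=(224, 224)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # read the X-ray as a grayscale image
    img = cv2.resize(img, size)                    # resize to standard pixels
    img = img.astype(np.float32) / 255.0           # normalization to the [0, 1] range
    img = (img - img.mean()) / (img.std() + 1e-8)  # standardization (zero mean, unit variance)
    return img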
In medical image analysis, classification with deep learning usually utilizes target lesions depicted
in medical images, and these lesions are classified into two or more classes. For example, deep
learning is frequently used to classify lung nodules on computed tomography (CT) images as
benign or malignant [7]. For lung nodule classification, CT images of lung nodules and their labels
(i.e., benign or cancerous) are used as training data. The training data may consist of items where
each datum includes an axial image and its label, or items where each datum includes three images
(axial, coronal, and sagittal views of a lung nodule) and their labels. After the CNN has been
trained, the target lesions in medical images can be specified in the deployment phase by medical
doctors or by computer-aided detection (CADe) systems [8]. In this section, we first detail the
architecture of the proposed model. Next, we present the procedure followed by our method, based
on the transfer learning technique, for the image pre-processing and classification task. Finally,
we discuss the effect of one of the hyperparameters of a deep neural network on our model.
A- Pre - Requirements:
• Deep learning
• Convolution Neural Network
• Digital Image processing
• OpenCV: Image Preprocessing
• Django: Python Framework for backend
• Tesseract: Text extraction
• JavaScript: For the front end
B- Procedure followed for X-Ray Detection
The very first step is pre-processing of the chest X-ray image that we provide as the input, where
we apply various digital image pre-processing steps to make the image suitable for the models to
generate optimal results [1]. In the next step, we note that X-rays come in two views, frontal and
lateral; the lateral view contains very little information, so those images are discarded and the
classifier keeps only the frontal images.
In the following step, lung segmentation is performed, which removes all the information from
the image that is not necessary for the model to make its prediction, working only with the images
that passed as frontal in the previous stage [7]. To enhance the accuracy of the prediction, a deep
learning classification model is used to predict COVID-19 positive and negative cases for the same
chest X-ray in two ways:
• In variation I, the dataset is passed through the classifier without lung segmentation.
• In variation II, we use the segmented images, obtained by multiplying the mask by the original
processed image, and then pass them through the CNN classifier model.
Segmentation Stage
These variations will allow us to assess the importance of the segmentation stage, by giving the
model full or partial information and analyzing which part of the images contributes to the
prediction. Thus, we will get the output from the model as a positive or negative label.
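A minimal sketch of variation II follows: the binary lung mask is multiplied element-wise with the processed image so that only lung pixels reach the classifier. The mask is assumed to come from a separate segmentation model, and the function name is illustrative.

import numpy as np

def apply_lung_mask(processed_img, lung_mask):
    mask = (lung_mask > 0).astype(processed_img.dtype)  # binarize the mask
    return processed_img * mask                         # keep the lung region, zero out the rest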
C - Classification Stage
Finally, the segmented image is supplied as an input to the CNN model, which classifies whether
the subject is infected or not.
D - Effect of Hyper parameters on our Model
Although several methods facilitate learning on smaller datasets, as described above, well-annotated
large medical datasets are still needed, since most of the notable accomplishments of deep learning
are based on very large amounts of data. Unfortunately, building such datasets in medicine is costly,
demands an enormous workload from experts, and may also raise ethical and privacy issues [2].
The value of large medical datasets lies in their potential to enhance generalizability and minimize
overfitting, as discussed previously. In addition, dedicated medical pre-trained networks can
probably be proposed once such datasets become available, which may foster deep learning
research on medical imaging, though whether transfer learning with such networks improves
performance in the medical field compared with ImageNet pre-trained models is not clear and
remains an area of further investigation.
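As an illustration of transfer learning from an ImageNet pre-trained network, the sketch below freezes a ResNet50 convolution base and adds a small classification head; the layer sizes and training settings are illustrative assumptions, not the exact configuration of our model.

from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # use the pre-trained convolution base as a fixed feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                    # regularization against over-fitting
    layers.Dense(1, activation="sigmoid"),  # COVID-19 positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])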
Our model for image classification is based on deep convolution neural networks.
1. Input: the images from the University of Waterloo dataset, a collection of images labeled with
classification tags, form the training set.
2. Learning: in this step, we use the training set to learn to predict the label for each input, a step
called learning a model.
3. Evaluation: the classifier is used to predict the classification of labeled images, and the quality
of the classifier is evaluated by comparing the labels it predicts with the true labels to judge
whether each classification is correct or not.
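The evaluation step can be sketched as follows, assuming a trained model and a held-out test set of pre-processed images with known binary labels; the variable names are assumptions.

from sklearn.metrics import accuracy_score, precision_score, recall_score

y_pred = (model.predict(x_test) > 0.5).astype(int).ravel()  # threshold the sigmoid outputs
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))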
1. Pre-processing of inputs
As described above, the chest X-ray provided as the input first passes through the digital image
pre-processing steps [5], and the lateral-view images, which carry very little information, are
discarded so that only frontal images reach the classifier.
2. Extracting information from the pre-processed inputs
Lung segmentation is performed next; it removes all the information from the image that is not
necessary for the model to make its prediction, working only with the images that passed as frontal
in the previous stage [7]. To enhance the accuracy of the prediction, the deep learning classification
model predicts COVID-19 positive and negative cases for the same chest X-ray in two ways:
• In variation (a), the dataset is passed through the classifier without lung segmentation.
• In variation (b), we use the segmented images, obtained by multiplying the mask by the original
processed image, and then pass them through the CNN classifier model.
Segmentation is a basic image processing technique for medical image classification and analysis,
in which organ volume and shape are delineated for a computer-aided diagnosis system. Training
data for the segmentation system consist of medical images containing the organ of interest;
producing segmentation results in this way is more efficient than performing manual segmentation.
In contrast to classification, where the whole image is treated as one, in this step we segment the
image into different regions to produce the later result. Segmentation with the CNN classifier is
used to calculate the probability that an organ is affected by Covid-19 or not. Segmentation is
divided into two steps, sketched below:
• Step 1: construction of a probability map of the organ using the CNN and image patches.
• Step 2: a refinement step in which the probability map is utilized.
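A minimal sketch of these two steps follows, assuming the CNN produces a per-pixel probability map for the lungs; the threshold and kernel size are illustrative assumptions.

import cv2
import numpy as np

def refine_probability_map(prob_map, threshold=0.5):
    mask = (prob_map > threshold).astype(np.uint8)          # step 1: probability map -> binary mask
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # step 2: remove small spurious regions
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # and fill small holes inside the lungs
    return mask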
3. Detection
Detection from the input images using the dataset is the most crucial step in achieving the final
results. The lung region is cropped from the segmented image using morphological processing,
and the X-ray detection technique is then applied to determine whether the person's lungs are
infected with Covid-19 or not.
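A sketch of this cropping step is given below, assuming a refined binary lung mask is available from the segmentation stage; the function name and the use of a single bounding box are assumptions.

import cv2

def crop_lung_region(image, lung_mask):
    contours, _ = cv2.findContours(lung_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return image                                        # fall back to the full X-ray
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return image[y:y + h, x:x + w]                          # cropped lung region passed to detection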
CHAPTER 4
EXPERIMENTAL RESULTS
4.1 INTRODUCTION
This chapter presents the experimental results of the model used to perform the classification of
the chest X-ray images. The dataset used is publicly available from the University of Waterloo and
consists of 3000 Covid-19 positive and negative chest X-ray images.
A team of researchers from Qatar University, Doha, Qatar, and the University of Dhaka,
Bangladesh along with their collaborators from Malaysia in collaboration with medical doctors
have created a database of chest X-ray images for COVID-19 positive cases along with Normal
and Viral Pneumonia images. This dataset of COVID-19, normal, and other lung infection images
was released in stages.
In the first release, they provided 219 COVID-19, 1341 normal, and 1345 viral pneumonia chest
X-ray images. In the first update, they increased the COVID-19 class to 1200 images. In the second
update, they increased the database to 3616 COVID-19 positive cases along with 10,192 normal,
6012 lung opacity (non-COVID lung infection), and 1345 viral pneumonia images.
Fig. 4 Dataset Description
COVID-19 images: 3616 COVID-19 data are collected from different publicly accessible datasets,
online sources, and published papers.
Normal images: 10,192 normal data are collected from three different datasets.
Lung opacity images: 6012 lung opacity CXR images are collected from the Radiological Society
of North America (RSNA) CXR dataset.
Viral pneumonia images: 1345 viral pneumonia data are collected from the Chest X-Ray Images
(pneumonia) database [9].
4.3 TRAINING ON DATASET
The dataset we used was split into three subsets, for training, testing, and validation, in the ratio
of 40:40:20. As mentioned earlier, we have kept a keen eye on both under-fitting and over-fitting
of the model to avoid any kind of performance loss.
An abundance of imaging data is desirable but rarely available due to cost. The techniques used to
train the model are:
Data augmentation
Transfer learning
I. An effective strategy for training a model on a small dataset is to start from a network
pre-trained on a completely different, very large dataset such as ImageNet.
II. The generic features learned on a large enough dataset can be shared among seemingly
disparate datasets. This portability of learned generic features is a unique advantage of
deep learning that makes it useful in various domain tasks with small datasets.
III. Fixed feature extraction is the process of removing the fully connected layers from a
network pre-trained on ImageNet while keeping the remaining network, which consists of
a series of convolution and pooling layers referred to as the convolution base, as a fixed
feature extractor.
To enhance the accuracy of the prediction, a deep learning classification model is used to predict
COVID-19 positive and negative cases for the chest X-ray in the two ways described earlier [7]:
the dataset is first passed through the classifier without lung segmentation, and then the segmented
images, obtained by multiplying the mask by the original processed image, are passed through the
CNN classifier model.
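A minimal data-augmentation sketch for the training images follows; the specific transforms, the directory layout, and the batch size are assumptions rather than the project's exact settings.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalization
    rotation_range=10,        # small random rotations
    width_shift_range=0.05,   # slight horizontal shifts
    height_shift_range=0.05,  # slight vertical shifts
    zoom_range=0.1,           # mild zoom
).flow_from_directory(
    "dataset/train",          # hypothetical directory with one sub-folder per class
    target_size=(224, 224), color_mode="grayscale",
    class_mode="binary", batch_size=32)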
Fig. 5 Dataset description
Fig. 6 High-level description of the system
Fig. 7 Graphical representation of metadata
Fig. 8 Pre-processed images (panels a to p)
4.4 Result
When an X-ray image is given as an input to the CNN model, the model outputs whether the X-ray
is Covid-19 positive or negative, along with the probability of the X-ray being coronavirus-infected
and of it being normal.
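An illustrative inference step matching this behaviour is sketched below, reusing the hypothetical preprocess_xray helper and trained model from the earlier sketches; a single-channel grayscale input and the sample file path are assumptions.

import numpy as np

x = preprocess_xray("sample_xray.png")         # hypothetical input image path
x = np.expand_dims(x, axis=(0, -1))            # add batch and channel dimensions
prob_positive = float(model.predict(x)[0, 0])  # probability that the X-ray is COVID-19 positive
label = "COVID-19 positive" if prob_positive >= 0.5 else "COVID-19 negative"
print(f"{label} (P(infected) = {prob_positive:.2f}, P(normal) = {1 - prob_positive:.2f})")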
Fig. 10 Model performance
CHAPTER 5
CONCLUSION
5.1 Conclusion
We presented an evaluation of transfer learning using pre-trained deep convolution neural network
models for COVID-19 identification from chest X-ray images. The experimental evaluation
demonstrated the potential of building diagnostic tools for automatic detection of COVID-19
positive cases from chest X-ray images with deep convolution neural networks; the development
of larger and clinically standardized datasets would further help in this direction. Inspired by recent
research that correlates the presence of COVID-19 with findings in chest X-ray images, our
approach was to build and design deep learning models that process these images and classify
them as positive or negative for COVID-19. The new COVID-19 virus has caused thousands of
deaths, especially among the elderly and patients with existing health conditions. The standard
method for detection and diagnosis of COVID-19 is the reverse transcription-polymerase chain
reaction (RT-PCR) test after collection of a proper respiratory tract specimen, which is
time-consuming and in many cases not affordable; the development of new low-cost, rapid
diagnostic tools to support clinical assessment is therefore needed. Two publicly available datasets
were used in different experimental setups.
In the future, this CNN architecture needs to be trained on a much larger range of available
datasets to further check and improve its performance. In addition, it can be used to predict other
chest-related diseases, such as bronchiectasis and SARS, from similar datasets. We hope our work
inspires others, so that it may help in enhancing the accuracy and, overall, contribute to the
community.
REFERENCES
[1] R. Yamashita, M. Nishio, R. K. G. Do and K. Togashi, Convolutional Neural Networks: An
Overview and Application in Radiology, Insights into Imaging, vol. 9, pp. 611-629, 2018.
[2] Convolutional neural networks: An overview and application in radiology, Online Available
at:[ Image Classification Using Convolutional Neural Networks (ijser.org)], Accessed on June
22, 2018
[3] Yasaka K and C. Huang, “Clinical features of patients infected with 2019 novel coronavirus
in Wuhan, China,” Lancet, vol. 395, no.12, pp. 344-365.
[4] World Health Organization, "Pneumonia of unknown cause - China", Online. Available at:
[1D convolutional neural networks and applications: A survey - ScienceDirect], Accessed on 18
July 2022.
[5] Liu F, A. H. Shurrab and A. Y. A. Maghari, "Covid-19 detection using data mining
techniques", 2017 8th International Conference on Information Technology (ICIT), pp. 625-
631.
[6] Catrin Sohrabi, Zaid Alsafi, Niamh O'Neill, Mehdi Khan, Ahmed Kerwan, Ahmed Al-Jabir,
"World Health Organization declares global emergency: A review of the 2019 novel coronavirus
(COVID-19)", International Journal of Surgery, vol. 76, pp. 71-76, April 2020.
[7] Yasaka K et al., Predicting COVID-19 Pneumonia Severity on Chest X-ray with Deep
Learning, vol.14 no. 7, pp.34-36.
[8] Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image
recognition. arXiv. Online. Available at: [Very Deep Convolutional Networks for Large-Scale
Image Recognition (arxiv.org)], Accessed on 22 Jan 2018.