
Hindawi

Journal of Sensors
Volume 2022, Article ID 3065656, 11 pages
https://doi.org/10.1155/2022/3065656

Research Article
Deep Learning Model for Automatic Classification and
Prediction of Brain Tumor

Sarang Sharma,1 Sheifali Gupta,1 Deepali Gupta,1 Abhinav Juneja,2 Harsh Khatter,2 Sapna Malik,3 and Zelalem Kiros Bitsue4

1 Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
2 KIET Group of Institutions, Delhi NCR, Ghaziabad, India
3 Maharaja Surajmal Institute of Technology, Delhi, India
4 US AHO, Ethiopia

Correspondence should be addressed to Zelalem Kiros Bitsue; bitsue.zelalem29@gmail.com

Received 17 December 2021; Revised 2 March 2022; Accepted 12 March 2022; Published 8 April 2022

Academic Editor: Pradeep Kumar Singh

Copyright © 2022 Sarang Sharma et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.

A brain tumor (BT) is an unexpected growth or fleshy mass of abnormal cells. Depending upon their cell structure, tumors can be either benign (noncancerous) or malignant (cancerous). A tumor raises the pressure inside the cranium, which may lead to brain injury or death, and can cause excessive exhaustion, hindered cognitive abilities, increasingly frequent and severe headaches, seizures, nausea, and vomiting. To diagnose BT, computerized tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and blood and urine tests are used. However, these techniques are time consuming and sometimes yield inaccurate results. To avoid such lengthy procedures, deep learning models are implemented; they are less time consuming, require less sophisticated equipment, yield results with greater accuracy, and are easy to deploy. This paper proposes a transfer learning-based model built on the pretrained VGG19 model, modified with a convolutional neural network (CNN) architecture and the preprocessing techniques of normalization and data augmentation. The proposed model achieved an accuracy of 98% and a sensitivity of 94.73%. It is concluded from the results that the proposed model performs better than other state-of-the-art models. For training, the dataset has been taken from Kaggle and contains 257 images: 157 brain tumor (BT) images and 100 no tumor (NT) images. With such results, these models could be used to develop clinically useful solutions that are able to detect BT in CT images.

1. Introduction

The nervous system of the human body is controlled by an important organ called the brain. It consists of about 100 billion nerve cells [1]. If any nerve cells are damaged, several human health problems may follow, leading to abnormality in the brain. These damaged cells have an adverse effect on brain tissue, and such problems increase the risk of brain tumors [2]. Primary and metastatic are the two different categories of brain tumors. Primary brain tumors originate inside the brain, involving nerves, blood vessels, or various glands of the brain, whereas a metastatic brain tumor develops in a different part of the body, such as the breasts or lungs, and migrates into the brain [3]. Tumors are malignant or benign. Malignant brain tumors grow very fast and are cancerous; the most common malignant brain tumor is glioblastoma [4]. In benign brain tumors, the cells grow at a relatively slow speed and are noncancerous. Such tumors do not spread into other parts of the body, and if one is removed safely through surgery, it will not come back [5]. If brain tumors are diagnosed at early stages, the survival rate of patients increases. Other primary brain tumors include pituitary

Table 1: Comparison of existing state-of-the-art models (citation/year, publication, approach, objective, and challenges of the approach).

[1]/2021, FIN — Approach: CDLLC-CNN, VGG19, VGG16. Objective: to develop a brain tumor classification technique by using CDLLC on CNN. Challenges: dataset contained 3064 brain tumor images; it implemented binary classification and yielded an accuracy of 96.39%.

[2]/2021, JAIHC — Approach: SVM-CNN, VGG16, VGG19. Objective: to distinguish brain tumor from healthy individuals using SVM with CNN. Challenges: dataset contained 1426 brain tumor images; it implemented binary classification and yielded an accuracy of 95.82%.

[3]/2021, MMTA — Approach: RNGAP-CNN, DenseNet201, VGG16. Objective: to predict brain tumor from normal individuals by the RNGAP model on CNN. Challenges: dataset contained 3064 brain tumor images; it implemented binary classification and yielded an accuracy of 97.08%.

[4]/2021, MRT — Approach: 3DCNN, DenseNet201, VGG16. Objective: to detect brain tumor on CT scans using the 3DCNN technique. Challenges: dataset contained 1074 brain tumor images; it implemented binary classification and yielded an accuracy of 92.67%.

[5]/2021, NCA — Approach: MSMCNN, DenseNet121, VGG19. Objective: to automatically classify CT images into brain tumor and normal individuals by using MSMCNN. Challenges: dataset contained 374 brain tumor images; it implemented binary classification and yielded an accuracy of 96.36%.

[6]/2019, BS — Approach: HSANN, VGG19, DenseNet201. Objective: to classify BT by using the HSANN architecture. Challenges: dataset contained 3064 brain tumor images; it implemented binary classification and yielded an accuracy of 97.33%.

[7]/2017, SIVP — Approach: ELM-CNN, DenseNet201, VGG16. Objective: to develop an ELM system for early diagnosis of BT individuals. Challenges: dataset contained 1074 brain tumor images; it implemented binary classification and yielded an accuracy of 97.8%.

[8]/2020, JDI — Approach: 3DCNN, DenseNet201. Objective: to classify BT by using 3DCNN. Challenges: dataset contained 1074 brain tumor images; it implemented binary classification and yielded an accuracy of 96.49%.

[9]/2021, JCS — Approach: Deep-CNN, DenseNet121, DenseNet201. Objective: to develop a Deep-CNN system that can determine BT by using CT scans. Challenges: dataset contained 121 brain tumor images; it implemented binary classification and yielded an accuracy of 94.58%.

[10]/2021, WMPBE — Approach: CNN, VGG16, VGG19, DenseNet201. Objective: to diagnose BT by using an ensemble system of CNN. Challenges: dataset contained 3064 brain tumor images; it implemented binary classification and yielded an accuracy of 84.19%.

tumors, which are usually benign and located near the pituitary gland; pineal gland tumors, which can be either malignant or benign; lymphomas of the central nervous system, which are malignant; and meningiomas and schwannomas, both of which occur in people between the ages of 40 and 70 and are mostly benign.

According to the World Health Organization (WHO), there exist four grades of brain tumors [6]. Grading is the process of classifying brain tumor cells on the basis of their appearance: the more abnormal the cells, the higher the grade. Grades I and II depict the lower-level tumors, whereas grade III and IV tumors comprise the most extreme ones [7]. In grade 1, the cells appear to be normal and are hence less likely to infect other cells. In grade 2, cells appear to be slowly growing into the adjacent neighboring brain tissue. In grade 3, cells appear more abnormal and start spreading to other parts of the brain and central nervous system. In grade 4, cells exhibit still more abnormality, grow into tumors, and spread to other parts of the brain and spinal cord. A benign tumor is of low grade, whereas a malignant tumor is of high grade [8].

Depending upon the location, type, and size of the tumor, different methods are employed to treat different tumors. Surgery is the most widely recognized treatment of tumor and has no adverse effects [9]. Grade 4 tumors can also lead to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, and Huntington's disease, which impair basic cognitive and motor functions of the body and may lead to dementia.

To track progress in the modelling process, computed tomography images of the brain are used. Computed tomography (CT) is not only an alternate method for the detection of tumor but also provides more data about the given medical image [10].

This paper encloses a novel CNN-based model that classifies BT in two categories, i.e., BT and NT. Moreover, the CNN model is trained and developed on a large dataset. The accuracy of the proposed model has been enhanced by implementing preprocessing techniques like normalization and data augmentation on the dataset. Thus, automated systems like these help save time and also improve efficiency in clinical institutions.

Table 2

(a) Different architectures of CNN: DenseNet121 and DenseNet201

Convolution (output 112 × 112): 7 × 7 conv, stride 2 (both models)
Pooling (output 56 × 56): 3 × 3 max pool, stride 2 (both models)
Dense block 1 (output 56 × 56): 6 × [1 × 1 conv, 3 × 3 conv] (both models)
Transition layer 1 (output 56 × 56, then 28 × 28): 1 × 1 conv; 2 × 2 average pool, stride 2 (both models)
Dense block 2 (output 28 × 28): 12 × [1 × 1 conv, 3 × 3 conv] (both models)
Transition layer 2 (output 28 × 28, then 14 × 14): 1 × 1 conv; 2 × 2 average pool, stride 2 (both models)
Dense block 3 (output 14 × 14): 24 × [1 × 1 conv, 3 × 3 conv] (DenseNet121); 48 × [1 × 1 conv, 3 × 3 conv] (DenseNet201)
Transition layer 3 (output 14 × 14, then 7 × 7): 1 × 1 conv; 2 × 2 average pool, stride 2 (both models)
Dense block 4 (output 7 × 7): 16 × [1 × 1 conv, 3 × 3 conv] (DenseNet121); 32 × [1 × 1 conv, 3 × 3 conv] (DenseNet201)
Classification layer (output 1 × 1, then 1000): 7 × 7 global average pool; fully connected, Softmax (both models)

(b) Different architectures of CNN: VGG 16 and VGG 19

Convolution block 1 (output 224 × 224, then 112 × 112): 2 × [Conv2D]; max pooling 2D (both models)
Convolution block 2 (output 112 × 112, then 56 × 56): 2 × [Conv2D]; max pooling 2D (both models)
Convolution block 3 (output 56 × 56, then 28 × 28): 3 × [Conv2D] (VGG16); 4 × [Conv2D] (VGG19); max pooling 2D
Convolution block 4 (output 28 × 28, then 14 × 14): 3 × [Conv2D] (VGG16); 4 × [Conv2D] (VGG19); max pooling 2D
Convolution block 5 (output 14 × 14, then 7 × 7): 3 × [Conv2D] (VGG16); 4 × [Conv2D] (VGG19); max pooling 2D
Classification layer (output 4096): 3 × [fully connected], Softmax (both models)

(c) Different architectures of CNN and parameters of all the models

Name of model Size of input layer Size of output layer Number of layers Trainable parameters (millions)
VGG16 (224, 224, 3) (4,1) 16 138
VGG19 (224, 224, 3) (4,1) 19 143
DenseNet121 (224, 224, 3) (4,1) 121 8
DenseNet201 (224, 224, 3) (4,1) 201 10.2

2. Related Work

Most of the researchers working on the binary classification of BT use comparatively similar datasets to design CNN-based models that may not be versatile. The authors working on large datasets have also implemented binary classification only, with lesser accuracy. Table 1 depicts a comparison of existing state-of-the-art models, in which the approach used and the challenges of each approach are given in detail. From Table 1, it can be observed that small datasets have been used to train and validate the existing state-of-the-art models. However, Gu et al. [1], Kumar et al. [3], Abd El Kader et al. [6], and Abiwinanda et al. [10] utilized comparatively larger datasets to validate their models.

Figure 1: Illustration of the major functional blocks of the four CNN models (DenseNet121, DenseNet201, VGG16, and VGG19).

Table 3: Brain tumor dataset description.

S.no.  Brain tumor  Number of training images  Number of validating images
1      BT           125                        32
2      NT           79                         21

But, it can be noticed that these studies have worked mostly on binary classification.

The proposed model in this research paper is trained on a large dataset of 1800 images. The proposed model classifies the brain tumor into two categories, that is, with brain tumor (BT) and no tumor (NT).

2.1. Brain Tumor Prediction Using Pretrained CNN Models. For a wide range of healthcare research and applications, convolutional neural network models have consistently demonstrated higher-grade results. Still, building such convolutional neural network models from scratch has always been strenuous for prediction of this neurological disease due to restricted access to computed tomography (CT) images [11]. Pretrained models are derived from the concept of transfer learning, in which a D.L model trained on a large dataset is used to elucidate a problem with a smaller dataset [12]. This not only removes the requirement for a large dataset but also removes the excessive learning time required by various D.L models. This paper encloses four D.L models: DenseNet121, DenseNet201, VGG16, and VGG19. These models were trained on ImageNet and then fine-tuned over BT images. In the last layer of these pretrained models, a fully connected layer (FCL) is inserted [13]. The architectural description and functional blocks of all architectures are shown in Tables 2(a) and 2(b), parameters are shown in Table 2(c), and Figure 1 displays the diagrammatic representation of these models.

DenseNet121 comprises one convolutional layer (CL), one max pooling layer (MPL), three transition layers (TL), one average pooling layer (APL), one FCL, and one Softmax layer (SML) with 8 million trainable parameters (Table 2(c)). It also has four dense block layers (DBL), in which the third and fourth dense blocks have one CL of stride 1 × 1 and stride 3 × 3, respectively [14]. DenseNet201 comprises one CL, one MPL, three TL, one APL, one FCL, and one SML with 10.2 million trainable parameters. It also has four DBL, in which the third and fourth DBL have two CL of stride 1 × 1 and stride 3 × 3, respectively [15]. VGG16 comprises thirteen CL, five MPL, three FCL, and one SML with 138 million trainable parameters [16]. VGG19 comprises sixteen CL, five MPL, two FCL, and one SML with 143 million trainable parameters [17].

3. Research Methodology

Many studies and research have been conducted on BT, but very little work has been implemented and published on comparative analysis of BT using the four D.L models VGG16, VGG19, DenseNet121, and DenseNet201. Here, these models' results are displayed and compared by plotting graphs of accuracy, loss, and learning curves and determining validation rules [18].

3.1. Dataset. For the proposed solution, an open-access dataset is used, available at https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection/, uploaded by Navoneel Chakrabarty on 14th April 2019 and named 'Brain MRI images for Brain Tumor Detection.' The dataset consists of two categories, with brain tumor (BT) and no brain tumor (NT) images, of which there are 157 and 100 images, respectively [19]. All of them are of size 467 × 586 × 3. This dataset is simply divided into two parts: one part is known as the training part, and the other is known as the validation part [20]. The dataset category description is given in Table 3, and dataset sample images are shown in Figure 2.

Figure 2: Brain tumor dataset: (a) no tumor and (b) brain tumor.

[Pipeline: input image → normalization and augmentation (flipping, rotation, brightness) → transfer learning (VGG16, VGG19, DenseNet121, DenseNet201) → prediction (with brain tumor / no brain tumor).]

Figure 3: Overview of the proposed model.

Figure 4: Flipping data augmentation: (a) original, (b) horizontal flipping, and (c) vertical flipping.

3.2. Proposed Methodology. The proposed BT detection model is depicted in Figure 3. This model classifies a BT image into two categories, namely, NT and BT.

3.2.1. Normalization. The dataset underwent a normalization preprocessing technique so as to keep it numerically stable for D.L models. Initially, these CT images are in monochromatic or grayscale format, with pixel values between 0 and 255. By normalizing the input images, D.L models can be trained faster [21].

3.2.2. Augmentation. In order to improve the effectiveness of a D.L model, a large dataset is required. However, access to such datasets often comes with numerous restrictions [22]. Therefore, in order to surpass these issues, data augmentation techniques are implemented to increase the number of sample images in the dataset [23].
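The normalization of Section 3.2.1 amounts to rescaling the 8-bit grayscale range [0, 255] into [0, 1]. A minimal NumPy sketch (the paper does not show its exact preprocessing call, so the function name is illustrative):

```python
import numpy as np

def normalize_image(img: np.ndarray) -> np.ndarray:
    """Rescale 8-bit grayscale intensities from [0, 255] to [0.0, 1.0]."""
    return img.astype(np.float32) / 255.0

# A toy 2 x 2 grayscale patch covering the extreme and mid-range values.
patch = np.array([[0, 128], [64, 255]], dtype=np.uint8)
normalized = normalize_image(patch)
print(normalized.min(), normalized.max())  # 0.0 1.0
```

Keeping the result in float32 matches what deep learning frameworks expect as input and avoids the numerical-stability issues mentioned above.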

Figure 5: Clockwise rotation data augmentation: (a) original, (b) 90-degree, (c) 180-degree, and (d) 270-degree rotation.

Figure 6: Brightness data augmentation: (a) original image, (b) with brightness factor 0.2, and (c) with brightness factor 0.4.

Various data augmentation methods such as flipping, rotation, brightness, and zooming are implemented. Both horizontal and vertical flipping techniques are shown in Figure 4. The rotation augmentation technique, shown in Figure 5, is implemented in clockwise steps of 90 degrees each.
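These flipping, rotation, and brightness operations can be sketched with NumPy alone. The brightness factors 0.2 and 0.4 are interpreted here as multiplicative scaling, which is one common convention; the paper does not state the exact formula.

```python
import numpy as np

def augment(img: np.ndarray) -> dict:
    """Return flipped, rotated, and brightness-adjusted copies of img."""
    out = {
        "hflip": np.fliplr(img),       # horizontal flip (Figure 4(b))
        "vflip": np.flipud(img),       # vertical flip (Figure 4(c))
        "rot90": np.rot90(img, k=1),   # successive 90-degree rotations
        "rot180": np.rot90(img, k=2),  # (Figure 5)
        "rot270": np.rot90(img, k=3),
    }
    # Brightness factors 0.2 and 0.4 (Figure 6), applied as scaling and
    # clipped back to the valid 8-bit range.
    for factor in (0.2, 0.4):
        out[f"bright_{factor}"] = np.clip(img * factor, 0, 255).astype(img.dtype)
    return out

img = np.arange(9, dtype=np.uint8).reshape(3, 3)
copies = augment(img)
print(len(copies))  # 7 augmented variants per input image
```

Applied to the 257 source images, a handful of such variants per image is enough to reach the augmented counts reported in Table 4.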

Table 4: Sample images before and after data augmentation.

S.no.  Class  Images before augmentation  Images after augmentation
1      BT     157                         1100
2      NT     100                         700

Table 5: Confusion matrix parameters of all models with 16 batch size.

Model        Precision (%)  Sensitivity (%)  Specificity (%)  Accuracy (%)
VGG16        88.23          93.75            94.12            94
VGG19        100            94.73            100              98
DenseNet121  85.71          100              94.73            96
DenseNet201  93.33          93.33            97.14            96

Table 6: Training performance of all models with 16 batch size.

Model        Epoch  Train loss  Valid loss  Error rate  Valid accuracy (%)
VGG16        5      0.083       0.371       0.08        92.13
VGG16        10     0.075       0.302       0.08        92.34
VGG16        15     0.062       0.223       0.07        93.27
VGG16        20     0.067       0.205       0.06        94.83
VGG16        25     0.052       0.192       0.06        94.22
VGG19        5      0.103       0.126       0.06        94.71
VGG19        10     0.089       0.105       0.05        95.76
VGG19        15     0.072       0.093       0.04        95.41
VGG19        20     0.035       0.083       0.03        96.67
VGG19        25     0.026       0.081       0.03        96.71
DenseNet121  5      0.042       0.481       0.07        92.13
DenseNet121  10     0.035       0.443       0.06        93.57
DenseNet121  15     0.029       0.353       0.05        94.3
DenseNet121  20     0.023       0.35        0.05        94.9
DenseNet121  25     0.021       0.33        0.05        94.92
DenseNet201  5      0.073       0.193       0.04        95.98
DenseNet201  10     0.062       0.081       0.04        96.17
DenseNet201  15     0.059       0.071       0.04        96.4
DenseNet201  20     0.045       0.059       0.02        97.98
DenseNet201  25     0.043       0.052       0.02        98.2

The brightness data augmentation technique shown in Figure 6 is also applied to the image dataset by taking brightness factor values such as 0.2 and 0.4.

Training images before and after augmentation are shown in Table 4. Further, there is a class imbalance in the input dataset. In order to resolve this imbalance, the above data augmentation techniques are applied. After applying them, the sample dataset in each class was increased to approximately 700 to 1100 images, and the entire sample dataset was updated to 1800 images. Table 4 represents the number of newly updated images.

4. Experiments and Results

An experimental evaluation for detection of BT from CT images using four pretrained CNN models, DenseNet121, DenseNet201, VGG16, and VGG19, is implemented. The CNN models were implemented using images collected from the brain tumor dataset. For training and validating, 432 training images and 104 testing images were used, respectively. The brain MRI images were initially resized from 467 × 586 to 224 × 224. The algorithm was implemented using the FastAI library. For transfer learning, the models are trained with batch size 16. Each model was trained for 20 epochs. Both the batch size and number of epochs were determined empirically. The Adam optimizer was used to perform training. The learning rate was also decided empirically. The performance of each model was evaluated based on performance metrics such as accuracy, precision, sensitivity, and specificity.

4.1. Performance Metrics. The performance metrics are calculated from various parameters of the confusion matrix, namely true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) [24]. These confusion matrix parameters are defined below:

(a) Accuracy. Accuracy is defined as the ratio of the total number of true predictions to the total number of observed predictions

(b) Precision. Precision is calculated as the number of correct positive predictions divided by the total number of positive predictions

(c) Specificity. Specificity is defined as the number of correct negative predictions divided by the total number of negatives

(d) Sensitivity. Sensitivity is defined as the number of correct positive predictions divided by the total number of positives

4.2. The Training Performance Comparison for Different Models. Various performance parameters in terms of training loss, validation loss, error rate, and validation accuracy are obtained for the four different models using different epochs and batch size [25]. The four models, DenseNet121, DenseNet201, VGG16, and VGG19, were evaluated using 20 epochs with batch size 16. For training of all D.L models, the Adam optimizer is utilized. From Table 5, it can be seen that the VGG19 model acquired the highest performance in the testing phase, with precision of 100%, sensitivity of 94.73%, specificity of 100%, and accuracy of 98% for batch size 16. Table 6 shows that during the training phase as well, VGG19 outperforms the other models: its validation loss is minimum, whereas
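The four definitions above can be computed directly from the confusion-matrix counts. As a sanity check, the counts read off Figure 7(b) for VGG19 (18 and 32 correct predictions, one missed "no tumor" case), with "no tumor" treated as the positive class — the labeling that reproduces Table 5 — give the reported 100% precision, 94.73% sensitivity, 100% specificity, and 98% accuracy:

```python
def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, precision, specificity, and sensitivity (in %) computed
    from confusion-matrix counts, per the definitions in Section 4.1."""
    return {
        "accuracy": 100 * (tp + tn) / (tp + tn + fp + fn),
        "precision": 100 * tp / (tp + fp),
        "specificity": 100 * tn / (tn + fp),
        "sensitivity": 100 * tp / (tp + fn),
    }

# VGG19 counts from Figure 7(b): TP=18, FP=0, TN=32, FN=1.
m = confusion_metrics(tp=18, fp=0, tn=32, fn=1)
print({k: round(v, 2) for k, v in m.items()})
# {'accuracy': 98.04, 'precision': 100.0, 'specificity': 100.0, 'sensitivity': 94.74}
```

The same function applied to the other three matrices of Figure 7 reproduces the remaining rows of Table 5.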

[Confusion matrices (rows: actual; columns: predicted as no tumor, brain tumor). VGG16: no tumor → (15, 1), brain tumor → (2, 32). VGG19: no tumor → (18, 1), brain tumor → (0, 32). DenseNet121: no tumor → (12, 0), brain tumor → (2, 36). DenseNet201: no tumor → (14, 1), brain tumor → (1, 34).]

Figure 7: Confusion matrix of all models with 16 batch size: (a) VGG-16, (b) VGG-19, (c) DenseNet121, and (d) DenseNet201.

validation accuracy is highest in the case of VGG19. VGG19 has only 19 layers, comparatively fewer than DenseNet121 and DenseNet201, but even then it outperforms them. DenseNet121 and DenseNet201 have almost the same performance, but DenseNet201 has comparatively more layers than DenseNet121, which causes more processing time. After 20 epochs, the performance parameters of all the models remain similar.

4.3. Confusion Matrices of Different Pretrained Models. The confusion matrices of all D.L models at batch size 16 are shown in Figure 7. These matrices represent both correct and incorrect predictions. Each column is labelled by its class name, BT or NT. Diagonal values give the number of images correctly classified by the particular model.

From these confusion matrices, the accuracy of all the models is evaluated for batch size 16. The accuracy of all the models is analyzed through the graphs shown in Figure 8. From Figure 8, it is clear that the best performers are VGG19 and DenseNet201, with accuracies of 98% and 96%, respectively, for batch size 16. From the results, it is analyzed that VGG19 performs best among all the models.

From the previous discussion, it is analyzed that VGG19 performs better for batch size 16 as compared to other models. Now, the learning rate curve is drawn for VGG19 and DenseNet201 for batch size 16 in Figure 9. The learning rate controls how slowly or quickly a model learns. As the learning rate increases, a point is reached where the loss stops diminishing and starts to magnify. Ideally, the learning rate should be to the left of the lowest point on the graph. For example, in Figure 9(a), the learning rate curve is shown for VGG19, in which the point with the lowest loss lies at 0.001, so the learning rate for VGG19 should be between 0.0001 and 0.001. Similarly, in Figure 9(b), where the learning rate curve is shown for DenseNet201, the lowest loss point lies at 0.00001; hence, the learning rate for DenseNet201 should lie between 0.000001 and 0.00001. In both cases, it is clear that as the learning rate increases further, the loss also increases.

Loss convergence plots for the VGG19 and DenseNet201 CNN models for batch size 16 are shown in Figure 10.
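The selection rule described above — pick a learning rate about an order of magnitude to the left of the point of lowest loss, as popularized by FastAI's learning-rate finder (`lr_find`) — can be sketched as a small helper. The sweep values below are synthetic, shaped like Figure 9(a):

```python
def suggest_lr(sweep):
    """Given (learning_rate, loss) pairs from a learning-rate sweep,
    return a rate one order of magnitude below the loss minimum,
    i.e., 'to the left of the lowest point on the graph'."""
    best_lr, _ = min(sweep, key=lambda pair: pair[1])
    return best_lr / 10

# Synthetic sweep shaped like Figure 9(a): loss bottoms out at 1e-3,
# then blows up as the learning rate keeps growing.
sweep = [(1e-6, 1.3), (1e-5, 1.1), (1e-4, 0.95), (1e-3, 0.80),
         (1e-2, 1.05), (1e-1, 1.5)]
print(suggest_lr(sweep))  # one decade below the 1e-3 minimum
```

With the sweep above, the helper suggests a rate near 1e-4, inside the 0.0001–0.001 window the text recommends for VGG19.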

[Bar charts comparing precision, sensitivity, specificity, and accuracy across the four models (y-axes from 80 to 100%).]

Figure 8: Graphical representation of confusion matrix parameters for (a) VGG16, (b) VGG19, (c) DenseNet121, and (d) DenseNet201.

[Loss vs. learning rate curves, with learning rates swept from 1e-06 to 1e+00.]

Figure 9: Learning rate vs. loss curve for proposed model with 16 batch size: (a) VGG19 and (b) DenseNet201.

[Training and validation loss vs. number of batches processed (0 to 120).]

Figure 10: Batches processed vs. loss curve for different CNN architectures with 16 batch size: (a) VGG19 and (b) DenseNet201.

Table 7: Comparison with existing state-of-art models.

Study Dataset source No. of images Technique used Accuracy


Gu et al. [1] REMBRANDT 3064 CDLLC on CNN 96.39%
Deepak et al. [2] Figshare 1426 SVM with CNN 95.82%
Kumar et al. [3] Figshare 3064 RNGAP model on CNN 97.08%
Rehman et al. [4] BraTS 2018 1074 3DCNN 92.67%
Rajasree et al. [5] BraTS 2015 374 MSMCNN 96.36%
Abd El Kader et al. [6] Figshare 3064 HSANN 97.33%
Bodapati et al. [7] BraTS 2018 1074 ELM 97.8%
Mzoughi et al. [8] BraTS 2018 1074 3DCNN 96.49%
Sajjad et al. [9] Radiopaedia 121 Deep-CNN 94.58%
Abiwinanda et al. [10] Figshare 3064 CNN 84.19%
Proposed methodology Kaggle 1800 VGG19 98%

Figure 10 depicts the variations in loss during the course of training the models. As the models learned from the data, the loss started to drop until it could no longer improve during the course of training. Validation losses are also calculated for each epoch; the validation shows relatively consistent and low loss values with increasing epochs. From Figure 10, it is clear that minimum loss is achieved for VGG19 and DenseNet201 at each epoch for batch size 16. It is also observed that, by the time 120 batches are processed, the loss obtained for VGG19 is comparatively less than that of DenseNet201: for VGG19, validation and training loss lie between 0 and 0.2, whereas for DenseNet201, they lie between 0.2 and 0.4. Hence, VGG19 performs better than DenseNet201 in terms of training and validation loss at batch size 16.

4.4. Performance Evaluation with Existing Techniques. Results obtained from the proposed model are compared with state-of-the-art models using CT images, as shown in Table 7. It can be observed that the proposed method achieved higher performance than other existing techniques because of the preprocessing techniques applied to the dataset. Deepak et al. [2], Rehman et al. [4], Rajasree et al. [5], Bodapati et al. [7], Mzoughi et al. [8], and Sajjad et al. [9] utilized small datasets to validate their models, whereas Gu et al. [1], Kumar et al. [3], Abd El Kader et al. [6], and Abiwinanda et al. [10] utilized comparatively larger ones. Also, it can be noticed that mostly binary classification was performed in all the studies. In the proposed model, VGG19 and DenseNet201 have been combined with data augmentation and data normalization techniques to enhance their accuracy. The designed model performs better with the Adam optimizer and batch size 16. From Table 7, it can be seen that the proposed model performs better than state-of-the-art models in terms of accuracy as well as size of the image dataset.

5. Conclusion

In this paper, the effectiveness of four D.L models, VGG16, VGG19, DenseNet121, and DenseNet201, for detection of BT has been thoroughly evaluated. VGG19 and DenseNet201 yield the best results among the compared models using batch size 16. The dataset for BT was acquired from Kaggle (uploaded by Navoneel Chakrabarty). The results are obtained after training and analysis of these models. Further, by properly tuning batch sizes, optimizers, and epochs, these results demonstrated the effectiveness of the VGG19 model. Accuracy and sensitivity of 98% and 94.73%, respectively, were achieved with VGG19 for batch size 16 with the Adam optimizer. Similarly, accuracy and sensitivity of 96% and 93.33%, respectively, were achieved with DenseNet201 for batch size 16 with the Adam optimizer. These comparative results would be cost effective and could serve radiologists as a second-opinion tool or simulator. The major purpose of this research is to predict BT as early as possible, and this study also supports more accurate diagnosis during the development of D.L models.

A drawback of this proposed study is that only an axial dataset of BT samples is used for training and validation purposes. In future, the proposed model can be generalized further by including coronal and sagittal datasets during training and validation. Also, different pretrained models and optimization techniques could be implemented to further enhance the effectiveness of the proposed model.

Data Availability

Data is available at Figshare, BraTS 2018, BraTS 2015, Radiopaedia, and Kaggle.

Conflicts of Interest

The authors declare that they have no conflicts of interest.