
BRAIN STROKE PREDICTION USING DENSENET

DEEP LEARNING (UEC630)

by

Dhruv Bobal (102106029)


Ishan Grotra (102106041)
Kavish Arora (102106247)
Lakshya Rajpurohit (102156008)
Parth Badhwar (102106073)
Vansh Tejwani (102106086)

Group No (2)

under the supervision of


Dr. Gaganpreet Kaur

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

THAPAR INSTITUTE OF ENGINEERING & TECHNOLOGY, PATIALA
TABLE OF CONTENTS
1. OBJECTIVE

2. KEY COMPONENTS OF THE ARCHITECTURE

3. PROJECT WORKFLOW FLOWCHART

4. CODE

5. PROJECT DETAILS FROM CODE ANALYSIS

6. RESULTS

1. OBJECTIVE
Develop a deep learning model using a custom DenseNet-like architecture to classify brain
MRI images into two categories: Normal and Stroke.

DENSENET: A Brief History

DenseNet is a revolutionary convolutional neural network (CNN) architecture that fundamentally reimagines how layers within a neural network communicate and share information.

DenseNet introduces a novel connectivity pattern:

 Each layer is connected to every other layer in a feed-forward fashion


 Layers receive feature maps from all preceding layers
 Enables feature reuse and reduces the number of parameters
 Alleviates the vanishing-gradient problem

Unlike traditional architectures where each layer connects only to the subsequent layer,
DenseNet creates a radical dense connectivity pattern. In this innovative design, every layer
connects directly to every other layer in a feed-forward manner.
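In the notation of the original DenseNet paper (Huang et al., 2017), the l-th layer receives the feature maps of all preceding layers as input:

x_\ell = H_\ell([x_0, x_1, \ldots, x_{\ell-1}])

where [x_0, x_1, \ldots, x_{\ell-1}] denotes the channel-wise concatenation of the earlier feature maps and H_\ell is the composite function of layer \ell (batch normalization, ReLU and convolution).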

A DenseNet is composed of dense blocks, each representing a sophisticated information exchange mechanism:
1. Input Layer: Provides the initial feature representation
2. Intermediate Layers: Progressively refine and transform features
3. Transition Layers: Manage feature map dimensions and network complexity

2. KEY COMPONENTS OF THE ARCHITECTURE

1. Dense Blocks

 Concept: Dense Blocks are structures within a neural network where each layer is
directly connected to every other layer that comes after it. This creates a dense
connectivity pattern, unlike traditional networks where layers are sequentially
connected.
 Feature Reuse: Each layer uses the feature maps from all preceding layers as inputs.
This ensures that features learned by earlier layers are reused, reducing redundancy
and promoting more efficient learning.
 Knowledge Transfer: The direct connections help transfer information across layers,
allowing the network to learn more robust features. This also mitigates the vanishing
gradient problem, as gradients can flow more effectively through these connections.
 Output: The concatenation of feature maps ensures that the network retains rich,
diverse information as it progresses through the layers (a minimal code sketch of this pattern follows this list).
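As a concrete illustration, the sketch below builds one dense block in Keras, mirroring the loop used in the project code of Section 4; the helper name dense_block and the default values (6 layers, 32 filters per layer) are illustrative choices, not fixed requirements of the architecture.

from tensorflow.keras.layers import Conv2D, BatchNormalization, ReLU, Concatenate, Input

def dense_block(x, num_layers=6, growth_rate=32):
    # Each pass adds `growth_rate` new feature maps and concatenates them
    # with everything produced so far, so every layer receives the feature
    # maps of all preceding layers (dense connectivity).
    for _ in range(num_layers):
        shortcut = x
        y = Conv2D(growth_rate, (3, 3), padding='same')(x)
        y = BatchNormalization()(y)
        y = ReLU()(y)
        x = Concatenate()([shortcut, y])
    return x

# Example: one dense block applied to a 56x56x64 feature map
inputs = Input(shape=(56, 56, 64))
outputs = dense_block(inputs)  # 64 + 6 * 32 = 256 output channels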

2. Transition Layers

 Purpose: Transition layers serve as a bridge between Dense Blocks, managing the
flow of information while controlling the network's complexity.
 Feature Map Reduction: These layers typically use techniques like convolution and
pooling to reduce the dimensions of feature maps. This minimizes computational
overhead and prevents the network from becoming too large.
 Dimensionality Control: Transition layers help balance the trade-off between network
depth and computational efficiency by compressing the size of the data passed to
subsequent Dense Blocks.
 Regularization: By reducing dimensions, they act as a form of implicit regularization,
helping prevent overfitting (a sketch of a typical transition layer follows this list).
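The from-scratch model in Section 4 uses a single dense block and therefore contains no transition layer, but a typical DenseNet transition layer can be sketched as follows; the helper name transition_layer is illustrative, and the compression factor of 0.5 is the value used in the original DenseNet paper.

from tensorflow.keras.layers import Conv2D, BatchNormalization, ReLU, AveragePooling2D, Input

def transition_layer(x, compression=0.5):
    # A 1x1 convolution compresses the number of channels, then average
    # pooling halves the spatial resolution before the next dense block.
    channels = int(int(x.shape[-1]) * compression)
    x = BatchNormalization()(x)
    x = ReLU()(x)
    x = Conv2D(channels, (1, 1), padding='same')(x)
    x = AveragePooling2D((2, 2), strides=(2, 2))(x)
    return x

# Example: a 28x28x256 feature map becomes 14x14x128
inputs = Input(shape=(28, 28, 256))
outputs = transition_layer(inputs)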

3. Key Layers Used

 Convolutional Layers: Extract spatial features from the input data by applying filters.
These layers capture patterns like edges, textures, and more complex structures as the
network deepens.

 Batch Normalization: Standardizes the inputs to each layer to have consistent mean
and variance. This stabilizes training, speeds up convergence, and improves
generalization by reducing internal covariate shift.
 ReLU Activation: Introduces non-linearity into the network, enabling it to model
complex relationships in the data. ReLU (Rectified Linear Unit) helps in addressing
vanishing gradient issues, as it allows gradients to pass unaltered for positive values.
 Global Average Pooling: Reduces the spatial dimensions of feature maps by
averaging values across each channel. This results in a single value per channel,
significantly reducing the number of parameters while retaining essential information (a short example follows this list).
 Dense (Fully Connected) Layers: Aggregate the learned features from earlier layers to
make predictions. These layers integrate information and map the extracted features to
the final output classes or regression values.
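To make the effect of Global Average Pooling concrete, the short example below averages a 7x7x256 feature map down to a single 256-dimensional vector per image (the tensor shape is arbitrary and chosen only for illustration).

import tensorflow as tf
from tensorflow.keras.layers import GlobalAveragePooling2D

# One image worth of feature maps: spatial size 7x7, 256 channels
feature_maps = tf.random.normal((1, 7, 7, 256))

pooled = GlobalAveragePooling2D()(feature_maps)
print(pooled.shape)  # (1, 256): one averaged value per channel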

3. PROJECT WORKFLOW FLOWCHART

4. CODE
from google.colab import drive
drive.mount('/content/drive')
file_path = 'https://drive.google.com/drive/folders/1uxR6l9w9q_uC4kFuAAstHavFLvftLu8G?usp=sharing'

import os
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Input
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
import numpy as np
import matplotlib.pyplot as plt
from glob import glob
from tensorflow.keras.utils import to_categorical

data_dir = "/content/drive/MyDrive/Brain_Data_Organised"
normal_dir = os.path.join(data_dir, "Normal")
stroke_dir = os.path.join(data_dir, "Stroke")

def load_images_from_folder(folder, label):
    images = []
    labels = []
    for ext in ["*.png", "*.jpg", "*.jpeg"]:
        for filename in glob(os.path.join(folder, ext)):
            try:
                img = tf.keras.utils.load_img(filename, target_size=(224, 224))
                img_array = tf.keras.utils.img_to_array(img)
                images.append(img_array)
                labels.append(label)
            except Exception as e:
                print(f"Error loading image {filename}: {e}")
    return images, labels

normal_images, normal_labels = load_images_from_folder(normal_dir, 0)
stroke_images, stroke_labels = load_images_from_folder(stroke_dir, 1)

# Debugging: Ensure images are loaded
print(f"Loaded {len(normal_images)} images from 'Normal'.")
print(f"Loaded {len(stroke_images)} images from 'Stroke'.")

# Check if data is available
if len(normal_images) == 0 or len(stroke_images) == 0:
    raise ValueError("No images found in one or both folders!")

# Combine data and labels
X = np.array(normal_images + stroke_images)
y = np.array(normal_labels + stroke_labels)

# Normalize pixel values
X = X / 255.0

# Train-test split
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.5, random_state=42, stratify=y_temp)

# Convert labels to categorical
y_train = to_categorical(y_train, num_classes=2)
y_val = to_categorical(y_val, num_classes=2)
y_test = to_categorical(y_test, num_classes=2)

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        print(f"Using GPU: {gpus}")
    except RuntimeError as e:
        print(e)
else:
    print("No GPU detected, using CPU.")
inputs = Input(shape=(224, 224, 3))

# DenseNet-like architecture from scratch
from tensorflow.keras.layers import Conv2D, BatchNormalization, ReLU, MaxPooling2D, Flatten

# Initial Conv layer
x = Conv2D(64, (7, 7), strides=(2, 2), padding='same')(inputs)
x = BatchNormalization()(x)
x = ReLU()(x)
x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

# Dense Block 1
for _ in range(6):
    shortcut = x
    x = Conv2D(32, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    x = ReLU()(x)
    x = tf.keras.layers.Concatenate()([x, shortcut])

x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu')(x)
outputs = Dense(2, activation='softmax')(x)

model = Model(inputs, outputs)

from tensorflow.keras.optimizers import Adam

learning_rate = 0.0002  # Adjust as needed
optimizer = Adam(learning_rate=learning_rate)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

model.summary()

checkpoint = ModelCheckpoint("best_model.keras", monitor="val_accuracy", save_best_only=True, verbose=1)
# early_stopping = EarlyStopping(monitor="val_accuracy", patience=5, verbose=1)

history = model.fit(
    X_train,
    y_train,
    validation_data=(X_val, y_val),
    epochs=25,  # Adjust based on performance
    batch_size=16,
    callbacks=[checkpoint]
)

test_loss, test_acc = model.evaluate(X_test, y_test, batch_size=32)
print(f"Test Accuracy: {test_acc * 100:.2f}%")

from sklearn.metrics import classification_report, confusion_matrix
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt

y_pred = model.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
y_true_classes = np.argmax(y_test, axis=1)

# Confusion Matrix
cm = confusion_matrix(y_true_classes, y_pred_classes)
plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=['Normal', 'Stroke'], yticklabels=['Normal', 'Stroke'])
plt.xlabel('Predicted')
plt.ylabel('True')
plt.title('Confusion Matrix')
plt.show()

print("Classification Report:")
print(classification_report(y_true_classes, y_pred_classes, target_names=['Normal', 'Stroke']))

plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training Loss vs Epochs')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

5. PROJECT DETAILS FROM CODE ANALYSIS

 Data Preprocessing

o Image size standardized to 224x224 pixels

o Images loaded from "Normal" and "Stroke" folders

o Pixel values normalized to [0, 1] range

o Stratified train-test split (60% train, 20% validation, 20% test)

 Model Architecture

o Custom DenseNet-like architecture

o Initial Convolutional Layer

o 6 Dense Blocks with Concatenation

o Global Average Pooling

o Two Dense layers for classification

 Training Parameters

o Optimizer: Adam

o Learning Rate: 0.0002

o Loss Function: Categorical Crossentropy (definition recalled after this list)

o Epochs: 25

o Batch Size: 16
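Since the network ends in a two-unit softmax and is trained on one-hot labels, the categorical cross-entropy loss minimized per image is the standard definition:

L = -\sum_{c=1}^{2} y_c \log(\hat{y}_c)

where y_c is the one-hot ground-truth label and \hat{y}_c is the softmax probability predicted for class c (Normal or Stroke); the loss reported during training is the average over each batch.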

6. RESULTS
1. Confusion Matrix (Heatmap)

Shows True Positives, True Negatives, False Positives, False Negatives

2. Training Loss Plot

Training vs Validation Loss across Epochs

3. Classification Report

Detailed precision, recall and F1-score for each class (standard definitions recalled below)
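For reference, the per-class metrics produced by classification_report follow the standard definitions, where TP, FP and FN are the true positives, false positives and false negatives of the class in question:

\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}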

