Microproject Report Group 2
by
Group No (2)
TECHNOLOGY PATIALA
TABLE OF CONTENTS
1. OBJECTIVE
2. KEY COMPONENTS OF THE ARCHITECTURE
3. PROJECT WORKFLOW FLOWCHART
4. CODE
5. PROJECT DETAILS FROM CODE ANALYSIS
6. RESULTS
1. OBJECTIVE
Develop a deep learning model using a custom DenseNet-like architecture to classify brain
MRI images into two categories: Normal and Stroke.
Unlike traditional architectures, where each layer connects only to the next layer,
DenseNet uses a dense connectivity pattern: within each dense block, every layer
connects directly to every later layer in a feed-forward manner, receiving the feature
maps of all preceding layers as input.
2. KEY COMPONENTS OF THE ARCHITECTURE
1. Dense Blocks
Concept: Dense Blocks are structures within a neural network where each layer is
directly connected to every other layer that comes after it. This creates a dense
connectivity pattern, unlike traditional networks where layers are sequentially
connected.
Feature Reuse: Each layer uses the feature maps from all preceding layers as inputs.
This ensures that features learned by earlier layers are reused, reducing redundancy
and promoting more efficient learning.
Knowledge Transfer: The direct connections help transfer information across layers,
allowing the network to learn more robust features. This also mitigates the vanishing
gradient problem, as gradients can flow more effectively through these connections.
Output: The concatenation of feature maps ensures that the network retains rich,
diverse information as it progresses through the layers; a minimal Keras sketch of
this pattern appears at the end of this section.
2. Transition Layers
Purpose: Transition layers serve as a bridge between Dense Blocks, managing the
flow of information while controlling the network's complexity.
Feature Map Reduction: These layers typically use techniques like convolution and
pooling to reduce the dimensions of feature maps. This minimizes computational
overhead and prevents the network from becoming too large.
Dimensionality Control: Transition layers help balance the trade-off between network
depth and computational efficiency by compressing the size of the data passed to
subsequent Dense Blocks.
Regularization: By reducing dimensions, they act as a form of implicit regularization,
helping prevent overfitting.
3. Supporting Layers
Convolutional Layers: Extract spatial features from the input data by applying filters.
These layers capture patterns like edges, textures, and more complex structures as the
network deepens.
Batch Normalization: Standardizes the inputs to each layer to have consistent mean
and variance. This stabilizes training, speeds up convergence, and improves
generalization by reducing internal covariate shift.
ReLU Activation: Introduces non-linearity into the network, enabling it to model
complex relationships in the data. ReLU (Rectified Linear Unit) helps in addressing
vanishing gradient issues, as it allows gradients to pass unaltered for positive values.
Global Average Pooling: Reduces the spatial dimensions of feature maps by
averaging values across each channel. This results in a single value per channel,
significantly reducing the number of parameters while retaining essential information.
Dense (Fully Connected) Layers: Aggregate the learned features from earlier layers to
make predictions. These layers integrate information and map the extracted features to
the final output classes or regression values.
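The building blocks described above can be combined in a few lines of Keras. The sketch below is illustrative only: the filter counts, growth rate, and number of layers are assumptions for demonstration and do not match the exact configuration used in the CODE section.

import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=32):
    # Each new layer sees the concatenation of all previous feature maps (feature reuse).
    for _ in range(num_layers):
        out = layers.BatchNormalization()(x)
        out = layers.ReLU()(out)
        out = layers.Conv2D(growth_rate, (3, 3), padding='same')(out)
        x = layers.Concatenate()([x, out])
    return x

def transition_layer(x, compression=0.5):
    # A 1x1 convolution compresses the channel count; average pooling halves the spatial size.
    channels = int(int(x.shape[-1]) * compression)
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(channels, (1, 1), padding='same')(x)
    x = layers.AveragePooling2D((2, 2))(x)
    return x

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(64, (7, 7), strides=2, padding='same')(inputs)
x = dense_block(x)                       # feature reuse via concatenation
x = transition_layer(x)                  # compress before the next block
x = dense_block(x)
x = layers.GlobalAveragePooling2D()(x)   # one value per channel
outputs = layers.Dense(2, activation='softmax')(x)  # Normal vs Stroke
model = tf.keras.Model(inputs, outputs)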
3. PROJECT WORKFLOW FLOWCHART
4. CODE
from google.colab import drive
drive.mount('/content/drive')
file_path = 'https://drive.google.com/drive/folders/1uxR6l9w9q_uC4kFuAAstHavFLvftLu8G?usp=sharing'
import os
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Input
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
import numpy as np
import matplotlib.pyplot as plt
from glob import glob
from tensorflow.keras.utils import to_categorical
# Additional imports required by the model layers and evaluation code further below
from tensorflow.keras.layers import Conv2D, BatchNormalization, ReLU, Concatenate
from sklearn.metrics import confusion_matrix, classification_report
import seaborn as sns
data_dir = "/content/drive/MyDrive/Brain_Data_Organised"
normal_dir = os.path.join(data_dir, "Normal")
stroke_dir = os.path.join(data_dir, "Stroke")
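# NOTE: load_images_from_folder is called below but its definition is missing from the
# listing. The helper here is an assumed reconstruction: it reads every file in a folder,
# resizes it to the 224x224x3 input the model expects, and attaches the given class label.
def load_images_from_folder(folder, label, target_size=(224, 224)):
    images, labels = [], []
    for path in glob(os.path.join(folder, "*")):
        img = tf.keras.utils.load_img(path, target_size=target_size)
        images.append(tf.keras.utils.img_to_array(img))
        labels.append(label)
    return images, labels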
normal_images, normal_labels = load_images_from_folder(normal_dir, 0)
stroke_images, stroke_labels = load_images_from_folder(stroke_dir, 1)
# Check if data is available
if len(normal_images) == 0 or len(stroke_images) == 0:
    raise ValueError("No images found in one or both folders!")
# Train-test split
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.5, random_state=42, stratify=y_temp)
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        print(f"Using GPU: {gpus}")
    except RuntimeError as e:
        print(e)
else:
    print("No GPU detected, using CPU.")
inputs = Input(shape=(224, 224, 3))
x = inputs  # (assumed) running tensor that accumulates concatenated feature maps

# Dense Block 1
for _ in range(6):
    shortcut = x
    x = Conv2D(32, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    x = ReLU()(x)
    x = tf.keras.layers.Concatenate()([x, shortcut])
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu')(x)
outputs = Dense(2, activation='softmax')(x)
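# Assumed build/compile step (not shown in the original listing): the Adam optimizer comes
# from the training details in Section 5; categorical cross-entropy matches the one-hot labels.
model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])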
model.summary()
checkpoint = ModelCheckpoint("best_model.keras", monitor="val_accuracy", save_best_only=True, verbose=1)
# early_stopping = EarlyStopping(monitor="val_accuracy", patience=5, verbose=1)
history = model.fit(
    X_train,
    y_train,
    validation_data=(X_val, y_val),
    epochs=25,  # Adjust based on performance
    batch_size=16,
    callbacks=[checkpoint]
)
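# Assumed evaluation step (not shown in the original listing): predict on the test set and
# convert softmax outputs and one-hot labels back to class indices for the metrics below.
y_pred = model.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
y_true_classes = np.argmax(y_test, axis=1)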
# Confusion Matrix
cm = confusion_matrix(y_true_classes, y_pred_classes)
plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=['Normal', 'Stroke'], yticklabels=['Normal', 'Stroke'])
plt.xlabel('Predicted')
plt.ylabel('True')
plt.title('Confusion Matrix')
plt.show()
print("Classification Report:")
print(classification_report(y_true_classes, y_pred_classes,
target_names=['Normal', 'Stroke']))
plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training Loss vs Epochs')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
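# The listing opens a 1x2 figure but only fills the first panel; a companion accuracy
# panel such as the one below is assumed here, not taken from the original report.
plt.subplot(1, 2, 2)
plt.plot(history.history['accuracy'], label='Train Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Training Accuracy vs Epochs')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()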
5. PROJECT DETAILS FROM CODE ANALYSIS
Data Preprocessing
o Images loaded from the "Normal" and "Stroke" folders of Brain_Data_Organised
o Resized to the model's 224 x 224 x 3 input shape
o Split into training (60%), validation (20%), and test (20%) sets with stratified sampling
Model Architecture
o Custom DenseNet-like network: one dense block of six 3 x 3 convolutional layers (32 filters each) with batch normalization, ReLU, and feature-map concatenation
o Global average pooling, a 256-unit dense layer, and a 2-way softmax output
Training Parameters
o Optimizer: Adam
o Epochs: 25
o Batch Size: 16
6. RESULTS
1. Confusion Matrix (Heatmap)
2. Training Loss Plot
3. Classification Report