Segmentation On Brain MRI

The document outlines a mini project on brain MRI segmentation using MATLAB, detailing the methodology, literature review, and various segmentation techniques. It emphasizes the importance of image segmentation in medical analysis, particularly for brain tumors, and discusses preprocessing steps necessary for accurate results. The project includes MATLAB code and flowcharts to illustrate the segmentation process and its applications.


Stanley College of Engineering and Technology for Women,

Abids, Hyderabad.

Mini Project
SEGMENTATION ON
BRAIN MRI
Segmentation on brain MRI (Magnetic Resonance Imaging) using MATLAB.

160619735112 ~ Kola Aarthy
160619735312 ~ Doodala Ushasree
160619735322 ~ Thatipally Sharanya
Guide ~ Mrs. C. V. Keerthilatha
(Asst. Prof.), ECE Dept.
CONTENTS
• Abstract
• Introduction
• Literature review
• Methodology
  – Theory
  – Figures
  – Graphs
  – MATLAB code and flowchart
• Discussion
• Advantages and disadvantages
• Results
• Conclusion
• Future scope
• References
ABSTRACT
Image segmentation is one of the most important tasks in medical image analysis and is often the first and most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analysing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degrees of complexity have been developed and reported in the literature. Among the benefits of MRI: it is non-invasive and does not use ionizing radiation, and MRI contrast agents are less likely to produce an allergic reaction than the iodine-based substances used for X-rays and CT scans. This project describes how to detect and extract a brain tumour from a patient's MRI scan images of the brain. Segmentation of the MRI scan images of the brain is carried out using MATLAB software.
Keywords: Brain tumour, MRI image, MATLAB.
INTRODUCTION
In this project, we develop the process of segmentation of the brain using magnetic resonance imaging (MRI) through MATLAB. MRI is used to assess brain injuries and their condition, and it provides a large amount of high-quality data. We review the most popular methods commonly used for brain MRI segmentation and discuss their capabilities, advantages, and limitations. We first introduce the basic concepts of image segmentation. This includes defining 2D and 3D images, describing the image segmentation problem and image features, and introducing the MRI intensity distributions of brain tissue. Then, we explain different MRI preprocessing steps, including image registration, bias field correction, and removal of nonbrain tissue. We observe the output using MATLAB code.
LITERATURE REVIEW
We collected the information for this mini project from PhysioNet, where we searched for the topic of segmentation of brain MRI using MATLAB. From PhysioNet we obtained images and theory on brain MRI segmentation. We also used GitHub, Google, and YouTube to gather information for this project.
METHODOLOGY
2D and 3D images
An image can be defined as a function I(i,j) in 2D space or I(i,j,k) in 3D space. Intensity values are typically represented by a gray value in {0, ..., 255} in MRI of the brain; see Figure 1.

Figure 1

Every image consists of a finite set of image elements, called pixels in 2D space or voxels in 3D space. Each element is identified by its coordinates, where i is the image row number, j is the image column number, and k is the slice number in a volumetric stack; see Figure 2.

Figure 2
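To make the indexing concrete, here is a small sketch (in Python rather than MATLAB, with toy data invented purely for illustration) of reading an intensity I(i, j, k) from a volumetric stack:

```python
def voxel(volume, i, j, k):
    """Read intensity I(i, j, k) from a volumetric stack.

    volume is a list of 2-D slices; i = row number, j = column number,
    k = slice number, matching the convention above (0-based here).
    """
    return volume[k][i][j]


# A toy 3-D "volume": 2 slices of 2x2 gray values in {0, ..., 255}.
vol = [[[0, 50], [100, 150]],
       [[200, 255], [30, 60]]]
```

For example, `voxel(vol, 1, 0, 0)` reads row 1, column 0 of slice 0.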
Image segmentation
The goal of image segmentation is to divide an image into a set of semantically meaningful, homogeneous, and non-overlapping regions of similar attributes such as intensity, depth, color, or texture. The segmentation result is either an image of labels identifying each homogeneous region or a set of contours which describe the region boundaries. Classification means to assign to each element in the image a tissue class, where the classes are defined in advance.

Fig (a): Original MR image

The problems of segmentation and classification are interlinked because segmentation implies a classification, while a classifier implicitly segments an image. In the case of brain MRI, image elements are typically classified into three main tissue types: white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF); see Figure 3. The segmentation results are further used in different applications such as analyzing anatomical structures, studying pathological regions, surgical planning, and visualization.

Fig (b): Segmented image with three labels: WM, GM and CSF.
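The idea of assigning each image element a predefined tissue class can be sketched as follows (a Python toy, not the method used in this project; the two intensity cut-offs t_low and t_high are hypothetical values, whereas real methods estimate the class boundaries statistically):

```python
def classify_three_tissues(pixels, t_low, t_high):
    """Toy intensity classifier: map each gray value to one of three labels.

    On a T1-W MRI, CSF is darkest and WM brightest, so two cut-offs
    t_low < t_high split the intensity range into CSF / GM / WM.
    """
    labels = []
    for p in pixels:
        if p < t_low:
            labels.append("CSF")
        elif p < t_high:
            labels.append("GM")
        else:
            labels.append("WM")
    return labels
```

For instance, `classify_three_tissues([10, 120, 240], 80, 180)` labels the three values CSF, GM, and WM respectively.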
Markov random field (MRF)
Markov random field (MRF) theory provides a basis for modeling local properties of an image, where the global image properties follow from the local interactions. MRF models have been successfully integrated in various brain MRI segmentation methods to decrease misclassification errors due to image noise.
The first- and second-order neighborhoods are the most commonly used neighborhoods in image segmentation. The first-order neighborhood consists of the 4 nearest nodes in a 2D image and the 6 nearest nodes in a 3D image, while the second-order neighborhood consists of the 8 nearest nodes in a 2D image and the 18 nearest nodes in a 3D image; see Figure 4.

Figure 4
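The 2D neighborhoods can be written out explicitly; the following Python sketch (the function name is ours, not from any library) returns the 4-neighborhood (first order) or 8-neighborhood (second order) of a pixel:

```python
def neighbors(i, j, order=1):
    """Pixel neighborhoods used by MRF models on a 2-D image.

    order=1: the 4 nearest nodes (up/down/left/right);
    order=2: the 8 nearest nodes (adds the diagonals).
    Returns neighbor coordinates; boundary checks are left to the caller.
    """
    first = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    diagonals = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    offsets = first if order == 1 else first + diagonals
    return [(i + di, j + dj) for di, dj in offsets]
```

The 3D case works the same way with 6 or 18 offsets.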
Image features
Image features represent distinctive characteristics of an object or an image structure to be segmented. Typically, a statistical approach is used for feature extraction and classification in MRI, where pattern/texture is defined by a set of statistically extracted features represented as a vector in a multidimensional feature space. The statistical features are based on first- and second-order statistics of the gray level intensities in an image.

Figure 5: Illustration of 2D (a) and 3D (b) spatial interactions between neighboring pixel/voxel intensities.
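A minimal example of first-order statistical features, sketched in Python using only the standard library (the feature set chosen here is our own illustration; second-order features such as co-occurrence statistics, which encode the spatial interactions of Figure 5, are omitted):

```python
import statistics


def first_order_features(intensities):
    """First-order statistical features of a list of gray-level intensities.

    Returns a small feature vector (as a dict) computed from the
    intensity histogram alone, ignoring spatial arrangement.
    """
    mean = statistics.fmean(intensities)
    var = statistics.pvariance(intensities, mu=mean)
    return {"mean": mean, "variance": var,
            "min": min(intensities), "max": max(intensities)}
```

Such vectors, one per region or patch, become points in the multidimensional feature space mentioned above.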
Intensity Distribution in Brain MRI
The intensity of brain tissue is one of the most important features for brain MRI segmentation. However, when intensity values are corrupted by MRI artifacts such as image noise, partial volume effect (PVE), and bias field effect, intensity-based segmentation algorithms will lead to wrong results. Thus, to obtain relevant and accurate segmentation results, several preprocessing steps are very often necessary to prepare the MRI data. For instance, it is necessary to remove background voxels, extract brain tissue, perform image registration for multimodal segmentation, and remove the bias field effect. The PVE describes the loss of small tissue regions due to the limited resolution of the MRI scanner: a pixel/voxel lies at the interface between two (or more) classes and is a mix of different tissues.

Figure 6: Preprocessing steps: (a) the original T1-W MR image of the adult brain; (b) the brain tissue image after removing nonbrain structures; (c) the bias field; (d) the brain tissue image after bias field correction.
It has been shown that the noise in the magnitude images is governed by a Rician distribution, based on the assumption that the noise on the real and imaginary channels is Gaussian [19]. The probability density function of the Rician distribution is defined in eqn 4.
A special case of the Rician distribution occurs in image regions where only noise is present; this special case is also known as the Rayleigh distribution (eqn 5).
In the image regions where the NMR signal is present and SNR >= 3, the noise distribution approximates a Gaussian distribution (eqn 6).
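The referenced densities are the standard Rician and Rayleigh forms; a small Python sketch (our own helper names, with the modified Bessel function I0 approximated by its power series) shows that the Rayleigh density is exactly the Rician density with signal amplitude A = 0:

```python
import math


def bessel_i0(x, terms=30):
    # Modified Bessel function of the first kind, order 0 (series expansion).
    return sum((x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))


def rician_pdf(m, a, sigma):
    # Rician density for magnitude m, signal amplitude a, noise std sigma.
    return (m / sigma ** 2) * math.exp(-(m ** 2 + a ** 2) /
                                       (2 * sigma ** 2)) * \
        bessel_i0(m * a / sigma ** 2)


def rayleigh_pdf(m, sigma):
    # Special case a = 0: regions containing only noise.
    return (m / sigma ** 2) * math.exp(-m ** 2 / (2 * sigma ** 2))
```

Here `rician_pdf(m, 0.0, sigma)` coincides with `rayleigh_pdf(m, sigma)` for any m, which is the relationship the text describes.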
T1-W and T2-W Intensity Distribution

It can be noted from the 1D histogram of the bias-corrected T1-W MRI of an adult brain in Figure 8(a) that there is an overlap between the different tissue classes. Also, it can be seen that the overlap between WM and GM tissue is higher than between GM and CSF. This overlap between the class distributions can cause ambiguities in the decision boundaries when intensity-based segmentation methods are used [21]. However, many researchers showed that adding additional MRI sequences with different contrast properties (e.g., T2-W MRI, Proton Density MRI) can improve intensity-based segmentation and help separate the class distributions.
MRI preprocessing
After MRI acquisition several preprocessing
steps are necessary to prepare MR images
for segmentation; see Figure 6. The most
important steps include MRI bias field
correction, image registration (in the case
of multimodal image analysis), and removal
of nonbrain tissue (also called a brain
extraction).
Bias Field Correction
The bias field, also called the intensity inhomogeneity, is a low-frequency spatially varying MRI artifact causing a smooth signal intensity variation within tissue of the same physical properties. The bias field arises from spatial inhomogeneity of the magnetic field, variations in the sensitivity of the reception coil, and the interaction between the magnetic field and the human body. The bias field depends on the strength of the magnetic field: when MR images are scanned at 0.5 T, the bias field is almost invisible and can be neglected, but at higher field strengths it becomes strong enough to disturb intensity-based analysis, because most segmentation algorithms assume intensity homogeneity within each class. Therefore, the correction of the bias field is an important step for the efficient segmentation and registration of brain MRI.

Figure 9: Influence of the bias field on brain MRI segmentation. (a) An example of a sagittal brain MRI slice with bias field is shown at the top of the figure, the image histogram in the middle, and the three-label segmentation at the bottom. (b) The bias-corrected MRI slice is shown at the top, the corresponding histogram in the middle, and the three-label segmentation at the bottom.
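To illustrate the low-frequency nature of the bias field, here is a deliberately crude Python sketch (our own code, not a real correction algorithm such as N4) that estimates the bias on a 2-D slice as a local mean of log-intensities and divides it out:

```python
import math


def estimate_bias_log(image, radius):
    """Crude bias-field estimate: local mean of log-intensities.

    image: 2-D nested list of positive intensities; radius: half-width of
    the smoothing window. Only a sketch of the low-frequency-smoothing
    idea; note it also removes the image mean, which a real method would
    restore.
    """
    h, w = len(image), len(image[0])
    log_img = [[math.log(v) for v in row] for row in image]
    bias = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc, n = 0.0, 0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += log_img[ii][jj]
                        n += 1
            bias[i][j] = acc / n
    return bias


def correct_bias(image, radius=2):
    # Divide out the estimated smooth multiplicative bias.
    bias = estimate_bias_log(image, radius)
    return [[image[i][j] / math.exp(bias[i][j])
             for j in range(len(image[0]))] for i in range(len(image))]
```

On a perfectly uniform slice the estimated bias equals the image itself, so the corrected output is flat, which is the expected behavior for this toy.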
Image registration
Image registration is the process of overlaying (spatially
aligning) two or more images of the same content
taken at different times, from different viewpoints,
and/or by different sensors. Registration is required in
medical image analysis for obtaining more complete
information about the patient’s health when using
multimodal images (e.g., MRI, CT, PET, and SPECT) and
for treatment verification by comparison of pre- and
postintervention images. Image registration involves
finding the transformation between images so that
corresponding image features are spatially aligned.
Removal of non-brain tissue
Nonbrain tissues such as fat, skull, or neck
have intensities overlapping with
intensities of brain tissues. Therefore, the
brain has to be extracted before brain
segmentation methods can be used. This
step classifies voxels as brain or nonbrain.
The result can be either a new image with
just brain voxels or a binary mask, which
has a value of 1 for brain voxels and 0 for
the rest of tissues. In general, the brain
voxels comprise GM, WM, and CSF of the
cerebral cortex and subcortical structures,
including the brain stem and cerebellum.
The scalp, dura mater, fat, skin, muscles,
eyes, and bones are always classified as
nonbrain voxels.

Figure 10: Result of brain extraction on a T1 MR image in an axial plane. (a) shows the original T1-W MRI; (b) depicts the estimated brain mask; (c) presents an overlap of the brain mask and the original MR image.
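The binary-mask form of the result can be sketched in Python (toy nested-list volumes are used here for self-containment; real pipelines operate on arrays):

```python
def apply_brain_mask(volume, mask):
    """Keep only brain voxels: intensity where mask == 1, zero elsewhere.

    volume and mask are nested lists of identical shape [slice][row][col];
    mask holds 1 for brain voxels and 0 for nonbrain, as described above.
    """
    return [[[v if m == 1 else 0 for v, m in zip(row_v, row_m)]
             for row_v, row_m in zip(sl_v, sl_m)]
            for sl_v, sl_m in zip(volume, mask)]
```

This corresponds to the overlay shown in panel (c) of Figure 10: nonbrain voxels are suppressed and brain voxels keep their original intensity.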
MRI Segmentation Methods
The segmentation methods, with application to brain
MRI, may be grouped as follows:
(i) manual segmentation;
(ii) intensity-based methods (including thresholding,
region growing, classification, and clustering);
(iii) atlas-based methods;
(iv) surface-based methods (including active contours
and surfaces, and multiphase active contours);
(v) hybrid segmentation methods.
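As a concrete instance of the intensity-based (thresholding) family listed above, here is Otsu's method in plain Python (a generic sketch of the classic algorithm, not the technique used later in this project):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the threshold maximizing between-class variance.

    pixels is a flat iterable of integer gray values in [0, levels).
    Values <= threshold are "background", values above are "foreground".
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue                      # no background pixels yet
        w_fg = total - w_bg
        if w_fg == 0:
            break                         # no foreground pixels left
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal histogram (e.g. dark background vs. bright tumour), the returned threshold falls between the two modes.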
MATLAB
MATLAB (an abbreviation of "MATrix LABoratory") is a proprietary multi-paradigm programming language and numeric computing environment developed by MathWorks. Although MATLAB is intended primarily for numeric computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing abilities.
Flowchart of the segmentation workflow (written out here as sequential steps, with the flowchart's decision points noted):

Start
1. Create a directory to store the BraTS data set.
2. Preprocess the data (this can take about 30 minutes to complete).
3. Create an imageDatastore to store the 3-D image data.
4. Create a pixelLabelDatastore (Computer Vision Toolbox) to store the labels.
5. Preview one image volume and label. Display the labeled volume using the labelvolshow function; make the background fully transparent by setting the visibility of the background label (1) to 0.
6. Create a randomPatchExtractionDatastore that contains the training image and pixel label data; follow the same steps to create a randomPatchExtractionDatastore that contains the validation image and pixel label data.
7. Create a default 3-D U-Net network by using the unet3dLayers (Computer Vision Toolbox) function.
8. Augment the training and validation data by using the transform function with custom preprocessing operations.
9. Replace the pixel classification layer with the Dice pixel classification layer, and replace the input layer with an input layer that does not have data normalization.
10. Plot the graph of the updated 3-D U-Net network.
11. Specify the hyperparameter settings using the trainingOptions (Deep Learning Toolbox) function.
12. Download the pretrained network and sample test set.
13. Decision: if doTraining is true, train the model using the trainNetwork (Deep Learning Toolbox) function; if false, load the pretrained model and set the patch sizes accordingly.
14. Decision: if useFullTestSet is true, perform segmentation of the full preprocessed test data in preprocessDataLoc; if false, use the sample test set in imageDir. The voldsTest variable stores the ground truth test images; the pxdsTest variable stores the ground truth labels.
15. Process each test volume.
16. Compare ground truth against network prediction: select one of the test images to evaluate the accuracy of the semantic segmentation; display in a montage the center slice of the ground truth and predicted labels along the depth direction; display the ground-truth labeled volume using the labelvolshow function to give context to the spatial location of the tumor inside the brain; for the same volume, display the predicted labels.
17. Quantify segmentation accuracy: measure the segmentation accuracy using the dice function, and calculate the average Dice score across the set of test volumes.
18. Decision: to create a boxplot, set the createBoxplot variable in the code to true (boxplot created); otherwise, no boxplot is created.
Stop
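The Dice measure used in the accuracy step can be sketched in Python (MATLAB's dice function computes the same quantity per label; the helper name here is ours):

```python
def dice_score(a, b):
    """Dice similarity between two binary masks (flat lists of 0/1).

    dice = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap,
    0.0 means no overlap at all.
    """
    inter = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    size_a = sum(a)
    size_b = sum(b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * inter / (size_a + size_b)
```

Averaging this score over all test volumes, per class, gives the summary numbers printed at the end of the MATLAB code.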
MATLAB code for
segmentation on Brain MRI
imageDir = fullfile(tempdir,'BraTS');
if ~exist(imageDir,'dir')
    mkdir(imageDir);
end

sourceDataLoc = [imageDir filesep 'Task01_BrainTumour'];
preprocessDataLoc = fullfile(tempdir,'BraTS','preprocessedDataset');
preprocessBraTSdataset(preprocessDataLoc,sourceDataLoc);

volReader = @(x) matRead(x);
volLoc = fullfile(preprocessDataLoc,'imagesTr');
volds = imageDatastore(volLoc, ...
    'FileExtensions','.mat','ReadFcn',volReader);

lblLoc = fullfile(preprocessDataLoc,'labelsTr');
classNames = ["background","tumor"];
pixelLabelID = [0 1];
pxds = pixelLabelDatastore(lblLoc,classNames,pixelLabelID, ...
    'FileExtensions','.mat','ReadFcn',volReader);

volume = preview(volds);
label = preview(pxds);

viewPnl = uipanel(figure,'Title','Labeled Training Volume');
hPred = labelvolshow(label,volume(:,:,:,1),'Parent',viewPnl, ...
    'LabelColor',[0 0 0;1 0 0]);
hPred.LabelVisibility(1) = 0;

patchSize = [132 132 132];
patchPerImage = 16;
miniBatchSize = 8;
patchds = randomPatchExtractionDatastore(volds,pxds,patchSize, ...
    'PatchesPerImage',patchPerImage);
patchds.MiniBatchSize = miniBatchSize;

volLocVal = fullfile(preprocessDataLoc,'imagesVal');
voldsVal = imageDatastore(volLocVal, ...
    'FileExtensions','.mat','ReadFcn',volReader);
lblLocVal = fullfile(preprocessDataLoc,'labelsVal');
pxdsVal = pixelLabelDatastore(lblLocVal,classNames,pixelLabelID, ...
    'FileExtensions','.mat','ReadFcn',volReader);
dsVal = randomPatchExtractionDatastore(voldsVal,pxdsVal,patchSize, ...
    'PatchesPerImage',patchPerImage);
dsVal.MiniBatchSize = miniBatchSize;

numChannels = 4;
inputPatchSize = [patchSize numChannels];
numClasses = 2;
[lgraph,outPatchSize] = unet3dLayers(inputPatchSize,numClasses, ...
    'ConvolutionPadding','valid');

dataSource = 'Training';
dsTrain = transform(patchds, ...
    @(patchIn)augmentAndCrop3dPatch(patchIn,outPatchSize,dataSource));
dataSource = 'Validation';
dsVal = transform(dsVal, ...
    @(patchIn)augmentAndCrop3dPatch(patchIn,outPatchSize,dataSource));

outputLayer = dicePixelClassificationLayer('Name','Output');
lgraph = replaceLayer(lgraph,'Segmentation-Layer',outputLayer);
inputLayer = image3dInputLayer(inputPatchSize, ...
    'Normalization','none','Name','ImageInputLayer');
lgraph = replaceLayer(lgraph,'ImageInputLayer',inputLayer);
analyzeNetwork(lgraph)

options = trainingOptions('adam', ...
    'MaxEpochs',50, ...
    'InitialLearnRate',5e-4, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',5, ...
    'LearnRateDropFactor',0.95, ...
    'ValidationData',dsVal, ...
    'ValidationFrequency',400, ...
    'Plots','training-progress', ...
    'Verbose',false, ...
    'MiniBatchSize',miniBatchSize);

trained3DUnet_url = 'https://www.mathworks.com/supportfiles/vision/data/brainTumor3DUNetValid.mat';
sampleData_url = 'https://www.mathworks.com/supportfiles/vision/data/sampleBraTSTestSetValid.tar.gz';
imageDir = fullfile(tempdir,'BraTS');
if ~exist(imageDir,'dir')
    mkdir(imageDir);
end
downloadTrained3DUnetSampleData(trained3DUnet_url,sampleData_url,imageDir);

doTraining = false;
if doTraining
    modelDateTime = string(datetime('now','Format',"yyyy-MM-dd-HH-mm-ss"));
    [net,info] = trainNetwork(dsTrain,lgraph,options);
    save(strcat("trained3DUNet-",modelDateTime,"-Epoch-", ...
        num2str(options.MaxEpochs),".mat"),'net');
else
    inputPatchSize = [132 132 132 4];
    outPatchSize = [44 44 44 2];
    load(fullfile(imageDir,'trained3DUNet','brainTumor3DUNetValid.mat'));
end

useFullTestSet = false;
if useFullTestSet
    volLocTest = fullfile(preprocessDataLoc,'imagesTest');
    lblLocTest = fullfile(preprocessDataLoc,'labelsTest');
else
    volLocTest = fullfile(imageDir,'sampleBraTSTestSetValid','imagesTest');
    lblLocTest = fullfile(imageDir,'sampleBraTSTestSetValid','labelsTest');
    classNames = ["background","tumor"];
    pixelLabelID = [0 1];
end

volReader = @(x) matRead(x);
voldsTest = imageDatastore(volLocTest, ...
    'FileExtensions','.mat','ReadFcn',volReader);
pxdsTest = pixelLabelDatastore(lblLocTest,classNames,pixelLabelID, ...
    'FileExtensions','.mat','ReadFcn',volReader);

id = 1;
while hasdata(voldsTest)
    disp(['Processing test volume ' num2str(id)]);

    tempGroundTruth = read(pxdsTest);
    groundTruthLabels{id} = tempGroundTruth{1};
    vol{id} = read(voldsTest);

    % Use reflection padding for the test image.
    % Avoid padding of different modalities.
    volSize = size(vol{id},(1:3));
    padSizePre = (inputPatchSize(1:3)-outPatchSize(1:3))/2;
    padSizePost = (inputPatchSize(1:3)-outPatchSize(1:3))/2 + ...
        (outPatchSize(1:3)-mod(volSize,outPatchSize(1:3)));
    volPaddedPre = padarray(vol{id},padSizePre,'symmetric','pre');
    volPadded = padarray(volPaddedPre,padSizePost,'symmetric','post');
    [heightPad,widthPad,depthPad,~] = size(volPadded);
    [height,width,depth,~] = size(vol{id});

    tempSeg = categorical(zeros([height,width,depth],'uint8'), ...
        [0;1],classNames);

    % Overlap-tile strategy for segmentation of volumes.
    for k = 1:outPatchSize(3):depthPad-inputPatchSize(3)+1
        for j = 1:outPatchSize(2):widthPad-inputPatchSize(2)+1
            for i = 1:outPatchSize(1):heightPad-inputPatchSize(1)+1
                patch = volPadded( i:i+inputPatchSize(1)-1, ...
                    j:j+inputPatchSize(2)-1, ...
                    k:k+inputPatchSize(3)-1,:);
                patchSeg = semanticseg(patch,net);
                tempSeg(i:i+outPatchSize(1)-1, ...
                    j:j+outPatchSize(2)-1, ...
                    k:k+outPatchSize(3)-1) = patchSeg;
            end
        end
    end

    % Crop out the extra padded region.
    tempSeg = tempSeg(1:height,1:width,1:depth);

    % Save the predicted volume result.
    predictedLabels{id} = tempSeg;
    id = id+1;
end

volId = 1;
vol3d = vol{volId}(:,:,:,1);
zID = size(vol3d,3)/2;
zSliceGT = labeloverlay(vol3d(:,:,zID),groundTruthLabels{volId}(:,:,zID));
zSlicePred = labeloverlay(vol3d(:,:,zID),predictedLabels{volId}(:,:,zID));

figure
montage({zSliceGT,zSlicePred},'Size',[1 2],'BorderSize',5)
title('Labeled Ground Truth (Left) vs. Network Prediction (Right)')

viewPnlTruth = uipanel(figure,'Title','Ground-Truth Labeled Volume');
hTruth = labelvolshow(groundTruthLabels{volId},vol3d,'Parent',viewPnlTruth, ...
    'LabelColor',[0 0 0;1 0 0],'VolumeThreshold',0.68);
hTruth.LabelVisibility(1) = 0;

viewPnlPred = uipanel(figure,'Title','Predicted Labeled Volume');
hPred = labelvolshow(predictedLabels{volId},vol3d,'Parent',viewPnlPred, ...
    'LabelColor',[0 0 0;1 0 0],'VolumeThreshold',0.68);
hPred.LabelVisibility(1) = 0;

diceResult = zeros(length(voldsTest.Files),2);
for j = 1:length(vol)
    diceResult(j,:) = dice(groundTruthLabels{j},predictedLabels{j});
end

meanDiceBackground = mean(diceResult(:,1));
disp(['Average Dice score of background across ',num2str(j), ...
    ' test volumes = ',num2str(meanDiceBackground)])
meanDiceTumor = mean(diceResult(:,2));
disp(['Average Dice score of tumor across ',num2str(j), ...
    ' test volumes = ',num2str(meanDiceTumor)])

createBoxplot = false;
if createBoxplot
    figure
    boxplot(diceResult)
    title('Test Set Dice Accuracy')
    xticklabels(classNames)
    ylabel('Dice Coefficient')
end
DISCUSSION
We have defined the basic concepts necessary for understanding MRI segmentation methods, such as 2D and 3D image definition, image features, and brain MRI intensity distributions. Following this, the preprocessing steps necessary to prepare images for MRI segmentation using MATLAB have been described. The most important steps include bias field correction, image registration, and removal of nonbrain tissues (brain extraction). The correction of intensity inhomogeneity is an important step for the efficient segmentation and registration of brain MRI.
ADVANTAGES
• Identifies the presence of a tumour in the human brain.
• Increases the chances of the patient's recovery after treatment.
• Builds on substantial developments in medical imaging technologies.
• MRI is non-invasive.
• MRI does not use ionizing radiation.
DISADVANTAGES
• A major challenge for brain tumor detection arises
from the variations in tumor location, shape, and
size.
Comparison of X-ray, CT scan and MRI:
[Figure: example images comparing X-ray, CT scan and MRI]
RESULTS
• MRI can detect a variety of
conditions of the brain such as
cysts, tumors, bleeding,
swelling, developmental and
structural abnormalities,
infections, inflammatory
conditions, or problems with
the blood vessels. It can
determine if a shunt is working
and detect damage to the
brain caused by an injury or a
stroke.
• Using the MATLAB code, we observed these outputs on our system.
CONCLUSION
Image segmentation is an important step in many medical applications involving 3D visualization, computer-aided diagnosis, measurements, and registration. This project has provided a brief introduction to the fundamental concepts of MRI segmentation of the human brain and the methods that are commonly used. MATLAB offers an accessible environment for understanding the condition of the brain through MRI preprocessing and segmentation.
FUTURE SCOPE
• In future, this technique can be developed to classify tumours based on feature extraction.
• This technique can be applied to ovarian, breast, lung, and skin tumours.
• Instead of rectangular boxes, one can work with general boundaries using a level-set-based framework.
REFERENCES
• Teacher notes and lab manual
• MATLAB software
• Physionet and GitHub
• Google and YouTube
