
MIE-IF/AB: MEDICAL IMAGE ENHANCEMENT USING IMAGE FUSION AND ALPHA BLENDING

ACKNOWLEDGEMENT

We thank the Supreme Lord for imparting us with the spiritual energy
in the right direction which has led to the successful completion of the
Project Report. We would also like to thank the Principal, Dr. Joseph
Kutty Jacob, for the facilities provided by him during the preparation
of the report. We are extremely thankful to the Head of the
Department of Electronics and Communication Engineering, Dr.
Anilkumar K K, for giving all the support and valuable directions in
overcoming the difficulties faced during the preparation of the report.
We express our sincere thanks to the Project Coordinator, Dr.
Anilkumar K K, for his innovative suggestions, timely advice, and
corrections during this endeavour. We also wish to acknowledge our
indebtedness and deep sense of gratitude to our Guide, Dr. Anilkumar
K K, whose guidance and kind supervision throughout the course
shaped the present work into its current form. We also express our
gratitude towards all faculty members of CUCEK for their
encouragement, and our deep sense of thanks to all of our classmates
and friends for their support, constructive criticism, and suggestions.
ABSTRACT

This project proposes a method for restoring composite images using
a novel image restoration process with alpha blending for image
fusion enhancement. The proposed method consists of several stages:
preprocessing, image fusion, and post-processing. In the preprocessing
stage, the input images are first normalized. Then, noise removal
techniques are applied to remove any unwanted noise from the
images. Next, the images are augmented, which may involve
techniques such as contrast adjustment and resizing. In the image
fusion stage, the preprocessed images are fused together to create a
composite image. Image fusion is a technique that combines
information from multiple images into a single image; its goal is to
create an image that is more informative than any of the input images.
Here, alpha blending is employed to refine the fused image. Alpha
blending allows pixel-level control over the contribution of each
source image to the final result, which is particularly beneficial for
highlighting specific features or reducing artifacts at image seams. In
the post-processing stage, the quality of the composite image is
evaluated; this may involve techniques such as block diagram
analysis. The proposed method can be used to restore composite
images that have been degraded by noise, blur, or other
imperfections. It can also be used to improve the quality of composite
images created from low-quality input images, with alpha blending
offering additional control over the final image characteristics.
1. INTRODUCTION

Medical imaging, crucial for healthcare, encounters challenges like


motion artifacts and missing areas, compromising diagnostic
accuracy. Medical image inpainting addresses these issues by
reconstructing images with missing regions. Our project aims to
enhance diagnostic image quality by addressing machine-induced
errors through image processing techniques. Segmentation isolates
structures, aiding analysis, while registration aligns images for
comparison. Classification categorizes tissue types, aiding diagnosis,
and feature extraction quantifies characteristics. 3D reconstruction
provides anatomical representations, and image fusion integrates
modalities for a unified view. Texture analysis and deep learning
enhance analysis accuracy for improved diagnostic outcomes.

Challenges in medical imaging arise from various sources, including


motion artifacts, processing errors, equipment malfunctions, and
complex patient anatomy. Addressing these requires meticulous
patient positioning, rigorous quality control, and optimized imaging
parameters. In MATLAB, segmentation, registration, and
deconvolution techniques mitigate defects, while artifact removal
algorithms eliminate common image artifacts. Machine learning,
particularly CNNs, enhances defect detection and correction. By
integrating these diverse techniques, diagnostic accuracy is bolstered,
advancing patient care in medical imaging.
Image blending and fusion are fundamental in image processing, each
serving distinct purposes. Blending merges images smoothly, ideal for
panoramic stitching. Fusion integrates information from multiple
images to create a single, enriched composition, essential in low-light
conditions, multi-modal imaging, and object detection. These
techniques find application in medical imaging, remote sensing, and
surveillance, where integrated information facilitates informed
decision-making.
AIM AND OBJECTIVE

Aim:
The aim of this project is to develop and implement a method for
restoring composite images using a novel image restoration process
with alpha blending for image fusion enhancement.

Objectives:
 To design and implement a preprocessing stage that includes
normalization, noise removal, and image augmentation techniques
to prepare input images for fusion.
 To develop an image fusion stage that combines preprocessed
images using alpha blending to create a composite image that is
more informative than any of the input images.
 To evaluate the quality of the composite image through
postprocessing techniques such as block diagram analysis.
 To demonstrate the effectiveness of the proposed method in
restoring composite images degraded by noise, blur, or other
imperfections.
 To assess the capability of the method in improving the quality of
composite images created from low-quality input images, utilizing
alpha blending for additional control over final image
characteristics.
LITERATURE SURVEY

1. Enhancement Of Medical Images Using Image Processing In Matlab


Authors: UdayKumbhar, Vishal Patil, Shekhar Rudrakshi

Image enhancement encompasses a range of techniques aimed at


refining specific features within digital images for subsequent
analysis or display. Examples include contrast adjustment, edge
enhancement, pseudocoloring, noise filtering, sharpening, and
magnification. While these techniques amplify certain image
characteristics, they do not inherently increase the underlying data's
information content; rather, they emphasize predefined attributes.
Enhancement algorithms are typically interactive and application-
dependent, with methods such as contrast stretching remapping grey
levels using predetermined transformations, as seen in histogram
equalization. These methodologies, alongside others utilizing local
neighborhood operations or transformative techniques like discrete
Fourier transforms, are indispensable tools across various fields
including medical imaging, art studies, forensics, and atmospheric
sciences. The significance of image enhancement lies in its ability to
refine visual data, facilitate deeper insights, and aid decision-making
processes without compromising image integrity. As such, ongoing
research focuses on developing robust enhancement methodologies
that achieve optimal results while preserving the authenticity and
interpretability of the underlying data.

2. Medical Image Enhancement Application Using Histogram


Equalization in Computational Libraries
Authors: Mohamed Y. Adam, Mozamel M. Saeed, Al Samani A. Ahmed
Digital Image Processing refers to the manipulation of two-
dimensional images by digital computers to alter existing images
according to desired specifications. This may involve tasks such as
noise removal, contrast enhancement, correction of blurring resulting
from camera movement during image acquisition, and rectification of
geometrical distortions caused by lenses. Prior to undertaking image
processing, image enhancement is often necessary to improve the
overall quality of the image. While not exhaustive, this section
focuses on image enhancement through techniques such as histogram
equalization. Histogram equalization is particularly effective for
enhancing low-contrast and dark images by improving contrast and
brightness uniformly across the image, especially in cases where the
original image exhibits irregular illumination. Such enhancement
techniques are crucial for enhancing the visibility of features within
the scene, thereby facilitating easier visualization, classification, and
interpretation of images. Contrast stretching, a common method for
enhancing images, involves spreading out the range of scene
illumination, with linear contrast stretch being one approach.
However, linear contrast stretch may assign an equal number of gray
levels to both frequently and infrequently occurring gray levels,
leading to ambiguous feature distinction. To address this limitation,
histogram equalization allocates more gray levels to frequently
occurring ones, thereby enhancing feature contrast. While global
histogram equalization may result in intensity saturation in dark and
bright areas, color image enhancement can be achieved by encoding
red, green, and blue components into separate spectral images.
Overall, these enhancement techniques play a critical role in
improving image quality and aiding in subsequent image analysis and
interpretation.

3. A Novel Approach for Contrast Enhancement and Noise


Removal of Medical Images
Authors: Vijeesh Govind, Arun A. Balakrishnan, Dominic Mathew
Medical image enhancement is crucial for identifying specific regions
within an image by enhancing the desired areas. Various approaches
to image enhancement, as detailed in references, operate in either
spatial or frequency domains. Histogram equalization is a widely used
technique that enhances image contrast by expanding the dynamic
range, albeit with the drawback of potential over-enhancement in
certain image regions. Adaptive histogram equalization attempts to
mitigate this issue by forming histograms from localized data but
requires significant computational resources. Transform domain
techniques, such as 2-D Discrete Cosine Transform (DCT) or Fourier
transform, convert input images into the desired domain, though they
may introduce objectionable artifacts necessitating further processing.
In our proposed method, we employ the Perona-Malik filter for noise
removal from the enhanced image. The subsequent sections of this
paper are organized as follows: Section II provides the theoretical
background, including Weighted Histogram Equalization (WHE),
transform domain approaches, and the Perona-Malik filter. Section III
outlines the implementation of our proposed method. Section IV
presents experimental results and compares our method with
traditional Histogram Equalization. Finally, Section V concludes the
paper.

4. A Novel Approach for Contrast Enhancement Based on


Histogram Equalization
Authors: Hojat Yegane, Ali Ziaei, Amirhossein Rezaie.

Contrast enhancement techniques are widely utilized in image and


video processing to achieve a broader dynamic range. Among these
techniques, histogram modification-based algorithms are particularly
popular for achieving enhanced contrast. Histogram Equalization
(HE) stands out as one of the most commonly employed algorithms
due to its simplicity and effectiveness. HE works by uniformly
distributing pixel values, resulting in an image with a linear
cumulative histogram. HE finds applications in various fields such as
medical image processing, speech recognition, and texture synthesis,
often in conjunction with histogram modification techniques.
However, HE has two main disadvantages that affect its efficiency.
Firstly, it assigns one gray level to two neighboring gray levels with
different intensities. Secondly, it may lead to a "washed-out" effect
when a majority of the image comprises a particular gray level with
higher intensity. Recent research in image and video contrast
enhancement has yielded advancements aimed at overcoming these
limitations. For instance, Mean Preserving Bi-Histogram Equalization
(BHE) addresses brightness preservation issues by separating the
input histogram into two parts based on the input mean before
equalizing them independently. Another notable improvement is
Dualistic Sub-Image Histogram Equalization (DSIHE), which divides
the histogram into segments based on entropy and applies histogram
equalization to each segment separately. These advancements
demonstrate ongoing efforts to refine contrast enhancement methods
for improved performance and applicability.
EXISTING TECHNOLOGY

1. Histogram Equalization (HE):

This is a well-established technique used for contrast enhancement in digital


images. It works by redistributing pixel intensity values in the image to achieve
a more uniform histogram, thereby enhancing contrast. While HE is effective in
improving image quality, it has limitations such as over-enhancement in certain
regions and the potential for a "washed-out" effect.
Merits:
 Simple and computationally efficient method for enhancing image contrast.
 Can be effective in improving the visual appearance of images with low
contrast.
Demerits:
 May lead to over-enhancement in certain regions, resulting in unnatural-
looking images.
 Can cause a "washed-out" effect when a majority of the image comprises a
particular intensity level.
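
As a point of reference, the following is a minimal MATLAB sketch of global histogram equalization using the Image Processing Toolbox function histeq; the low-contrast toolbox sample image 'pout.tif' is used only as a stand-in for a medical image.

% Global histogram equalization on a low-contrast grayscale image.
I = imread('pout.tif');        % toolbox sample image, stand-in for a medical scan
J = histeq(I);                 % redistribute intensities toward a uniform histogram

figure;
subplot(2, 2, 1); imshow(I); title('Original');
subplot(2, 2, 2); imshow(J); title('Histogram equalized');
subplot(2, 2, 3); imhist(I); title('Original histogram');
subplot(2, 2, 4); imhist(J); title('Equalized histogram');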

2. Adaptive Histogram Equalization (AHE):

AHE addresses some of the limitations of traditional histogram equalization by


performing local histogram equalization in smaller regions of the image instead
of globally. This adaptive approach can better preserve local contrast and detail,
making it particularly useful in medical imaging where fine details are
important.
Merits:
 Performs local histogram equalization, preserving local contrast and
enhancing details.
 Particularly effective in medical imaging where fine details need to be
preserved.
Demerits:
 Higher computational complexity compared to traditional histogram
equalization.
 May result in amplified noise in regions with low contrast or near-uniform
intensity distributions.
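
MATLAB exposes a contrast-limited variant of AHE (CLAHE) through adapthisteq; a minimal sketch, again using a toolbox sample image as a stand-in (the ClipLimit value is an illustrative choice):

% Contrast-limited adaptive histogram equalization (CLAHE).
I = imread('pout.tif');                       % stand-in grayscale image
J = adapthisteq(I, 'ClipLimit', 0.02);        % limit contrast amplification per tile

figure;
imshowpair(I, J, 'montage');
title('Original (left) vs. CLAHE (right)');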

3. Perona-Malik Filter:

The Perona-Malik filter is a type of edge-preserving smoothing filter commonly


used for noise reduction in digital images. It works by diffusing image gradients
while preserving strong edges, effectively reducing noise while retaining
important image features. The filter parameters can be adjusted to control the
amount of smoothing and edge preservation, making it versatile for various
image processing tasks.
Merits:
 Effective in noise reduction while preserving important image features such
as edges.
 Offers flexibility in adjusting parameters to control the amount of smoothing
and edge preservation.
Demerits:
 Parameter selection may require some tuning to achieve optimal results for
different types of images.
 Smoothing effect may blur fine details in the image if parameters are not
carefully chosen.
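
In MATLAB, Perona-Malik (anisotropic) diffusion is available through imdiffusefilt (Image Processing Toolbox, R2018a or later). A minimal sketch on a synthetically noised sample image, using the default filter parameters:

% Edge-preserving smoothing with Perona-Malik (anisotropic) diffusion.
I        = imread('cameraman.tif');            % stand-in grayscale image
noisy    = imnoise(I, 'gaussian', 0, 0.01);    % add Gaussian noise for demonstration
smoothed = imdiffusefilt(noisy);               % default conduction method and iterations

figure;
imshowpair(noisy, smoothed, 'montage');
title('Noisy input (left) vs. anisotropic diffusion output (right)');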

4. Wavelet Transform-Based Techniques:

Wavelet transform-based methods are widely used for image enhancement and
noise reduction in medical imaging. These techniques decompose the image into
different frequency bands, allowing for targeted enhancement or filtering of
specific image features. Wavelet-based denoising methods, in particular, have
shown effectiveness in preserving image details while reducing noise.
Merits:
 Provide multi-resolution analysis, allowing for targeted enhancement or
noise reduction in specific frequency bands.
 Effective in preserving image details while reducing noise.
Demerits:
 Higher computational cost than simple spatial-domain filters, and the results depend on
the choice of wavelet basis, decomposition level, and thresholding strategy.
 Aggressive thresholding of the coefficients can introduce ringing or blurring artifacts
around edges.
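
A minimal sketch of wavelet-domain denoising, assuming the Wavelet Toolbox is available: a single-level db4 decomposition whose detail sub-bands are soft-thresholded before reconstruction (the threshold rule here is a simple illustrative choice, not a tuned one):

% Single-level wavelet denoising: decompose, soft-threshold details, reconstruct.
I     = im2double(imread('cameraman.tif'));      % stand-in grayscale image
noisy = imnoise(I, 'gaussian', 0, 0.005);

[cA, cH, cV, cD] = dwt2(noisy, 'db4');           % approximation + detail sub-bands
thr = 3 * std(cD(:));                            % simple noise-derived threshold (assumption)
cH  = wthresh(cH, 's', thr);                     % soft-threshold each detail band
cV  = wthresh(cV, 's', thr);
cD  = wthresh(cD, 's', thr);

denoised = idwt2(cA, cH, cV, cD, 'db4');         % reconstruct from modified coefficients
figure; imshowpair(noisy, denoised, 'montage');
title('Noisy input (left) vs. wavelet-denoised output (right)');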

5. Deep Learning-Based Approaches:

With recent advancements in deep learning, convolutional neural networks


(CNNs) have been increasingly used for image enhancement tasks. These
approaches learn complex mappings from input images to desired outputs,
enabling them to adaptively enhance image quality based on training data. Deep
learning-based methods have shown promising results in various medical
imaging applications, including denoising, super-resolution, and contrast
enhancement.
Merits:
 Can learn complex mappings from input images to desired outputs,
adaptively enhancing image quality.
 Have shown promising results in various medical imaging tasks, including
denoising and super-resolution.
Demerits:
 Require large amounts of training data and computational resources for
training deep neural networks.
 Lack of interpretability compared to traditional image processing
techniques, making it challenging to understand the underlying reasons for
their decisions.
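
As an illustration, the Image Processing Toolbox provides a pretrained DnCNN denoiser through denoisingNetwork and denoiseImage (the Deep Learning Toolbox is also required, and the pretrained model may need to be downloaded on first use, depending on the release); a minimal sketch:

% Denoise a grayscale image with the pretrained DnCNN network.
I     = im2double(imread('cameraman.tif'));   % stand-in grayscale image
noisy = imnoise(I, 'gaussian', 0, 0.01);

net      = denoisingNetwork('DnCNN');         % load the pretrained denoising CNN
denoised = denoiseImage(noisy, net);          % apply the network to the noisy image

figure; imshowpair(noisy, denoised, 'montage');
title('Noisy input (left) vs. DnCNN output (right)');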
Causes of Reduced Quality of Medical Images

Causes:
 Motion artifacts
 Image processing errors
 Equipment malfunction
 Patient anatomy
 Inadequate imaging parameters

Solutions:
 Segmentation
 Blending
 Registration: align multiple images of the same patient or of different imaging modalities
 Deconvolution: enhance image resolution
 Artifact removal
 Machine learning (e.g., U-Net, CNN)

METHODOLOGY

2.1 Collecting Images from DATASET

In the process of collecting medical images for this project, several
crucial steps ensure the integrity, relevance, and ethical compliance of the data.
First, select a dataset that aligns with the research question, considering
factors like size, relevance, and the availability of the required image types (e.g., X-
rays, MRI scans). Ensure legal permission for dataset use, verify image quality,
and standardize the format.
 Define Criteria: Determine the specific criteria for selecting images based
on your project requirements.
 Visual Inspection: Review the sampled images visually to assess their
quality and relevance.
 Validation: Validate the selected images against your predefined criteria
to ensure they meet the desired standards.
 Annotation: If necessary, annotate the selected images with relevant
labels or annotations for further analysis or machine learning tasks.
 Storage and Organization
 Backup and Version Control
 Ethical Considerations

Figure 1: Two collected images


2.2 Masking of the Required Portion

In medical image analysis, masking refers to the process of isolating


or highlighting specific regions or structures within an image for
further analysis or visualization. This technique is particularly useful
for focusing on areas of interest, such as abnormalities, organs, or
anatomical landmarks.

Figure 2: Images Post Masking

2.3 PREPROCESSING

Preprocessing is a crucial step in medical image analysis aimed at


enhancing the quality, consistency, and usability of images before
further analysis or interpretation.

Preprocessing involves various methods which can be used, a brief


explanation has been discussed below:
2.3.1 Intensity Normalization:
Intensity normalization scales pixel values within medical images
to a standardized range, improving consistency for quantitative
analysis. It ensures that pixel intensities are comparable across
images acquired with different imaging parameters or scanners.

Figure 3: Normalized Image
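
A minimal MATLAB sketch of min-max normalization to the range [0, 1], mirroring the normalization step used in Appendix 1; the toolbox sample image stands in for a medical scan:

% Min-max normalization of pixel intensities to the range [0, 1].
I = imread('cameraman.tif');                 % stand-in grayscale image
I = double(I);                               % work in double precision

normalized = (I - min(I(:))) / (max(I(:)) - min(I(:)));
% Equivalent toolbox one-liner: normalized = mat2gray(I);

figure; imshow(normalized); title('Normalized image');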

2.3.2 Noise Reduction:


Noise reduction techniques aim to remove or reduce unwanted
artifacts or distortions present in medical images. Common
methods include Gaussian blur, median filtering, and bilateral
filtering.
Figure 4: (a-d) Original and noise-reduced images with their histograms

In the original image histogram (left), the wider spread of pixel


intensities suggests a higher presence of noise. By contrast, the noise-
reduced image histogram (right) appears narrower, indicating a more
concentrated distribution of pixel intensities. This suggests that the
noise reduction process has successfully removed or suppressed some
of the random variations in the original image.
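
A minimal sketch comparing the three noise-reduction filters named above on a synthetically noised sample image (imbilatfilt requires R2017b or later; the noise level and filter settings are illustrative):

% Compare Gaussian, median, and bilateral filtering for noise reduction.
I     = imread('cameraman.tif');              % stand-in grayscale image
noisy = imnoise(I, 'gaussian', 0, 0.01);      % add Gaussian noise for demonstration

gaussSmoothed  = imgaussfilt(noisy, 1.0);     % Gaussian blur, sigma = 1
medianSmoothed = medfilt2(noisy, [3 3]);      % 3-by-3 median filter
bilatSmoothed  = imbilatfilt(noisy);          % edge-preserving bilateral filter

figure;
subplot(2, 2, 1); imshow(noisy);          title('Noisy');
subplot(2, 2, 2); imshow(gaussSmoothed);  title('Gaussian');
subplot(2, 2, 3); imshow(medianSmoothed); title('Median');
subplot(2, 2, 4); imshow(bilatSmoothed);  title('Bilateral');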
2.3.3 Contrast Enhancement:
Contrast enhancement techniques improve the visibility of
structures within medical images by adjusting the intensity
distribution. Histogram equalization, contrast stretching, and
adaptive histogram equalization are commonly used for this
purpose.

Figure 5: (a) original image, (b) equalized image, (c) histogram of original image, (d) histogram of equalized image

Histogram equalization enhances image contrast by adjusting the


distribution of pixel intensities, resulting in improved visibility of
details. The process involves transforming the histogram to spread out
pixel values, making brighter areas more prominent. This technique is
particularly useful in medical imaging, such as MRIs, where
improved contrast aids in better visualization of anatomical structures.
Before Equalization:

This histogram might show a peak at a specific intensity value,
indicating that a large portion of the image has a similar brightness.
There could be limited spread across the entire intensity range,
suggesting low contrast between different tissues.

After Equalization:

This histogram will ideally be more spread out across the entire
intensity range. Each intensity value should have a more balanced
representation, indicating improved contrast between different
brain regions.
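
Alongside histogram equalization (already used in the pipeline), contrast stretching can be sketched with imadjust and stretchlim; the percentile limits below are an illustrative choice:

% Contrast stretching: remap the 1st-99th intensity percentiles to the full range.
I         = imread('pout.tif');                    % low-contrast stand-in image
stretched = imadjust(I, stretchlim(I, [0.01 0.99]), []);

figure;
subplot(2, 2, 1); imshow(I);          title('Original');
subplot(2, 2, 2); imshow(stretched);  title('Contrast stretched');
subplot(2, 2, 3); imhist(I);          title('Original histogram');
subplot(2, 2, 4); imhist(stretched);  title('Stretched histogram');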

2.3.4 Resizing:
Resizing adjusts image dimensions to a desired size, which is crucial
for standardizing resolution and aspect ratio across datasets. It ensures
consistency in preprocessing and model training by reducing
variability in image sizes. Various interpolation methods preserve
image quality while resizing, including nearest-neighbour, bilinear,
and bicubic interpolation. Resizing may also involve cropping or
padding images to meet specific input requirements. Overall, resizing
facilitates compatibility across different stages of the image
processing pipeline, promoting efficient analysis and model
deployment. A short sketch of these interpolation options follows
below.
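
A minimal sketch of the interpolation options mentioned above using imresize; the target size is an arbitrary illustrative value:

% Resize an image to a fixed target size with different interpolation methods.
I          = imread('cameraman.tif');                 % stand-in grayscale image
targetSize = [128 128];                               % example target dimensions (assumption)

nearestI  = imresize(I, targetSize, 'nearest');       % fastest, blocky
bilinearI = imresize(I, targetSize, 'bilinear');      % smooth, moderate cost
bicubicI  = imresize(I, targetSize, 'bicubic');       % sharpest of the three (default)

figure;
subplot(1, 3, 1); imshow(nearestI);  title('Nearest');
subplot(1, 3, 2); imshow(bilinearI); title('Bilinear');
subplot(1, 3, 3); imshow(bicubicI);  title('Bicubic');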
2.3.5 Augmentation:
Augmentation is a data enhancement technique used in machine
learning, including medical image analysis. It involves applying
transformations like rotation, flipping, and scaling to increase
dataset diversity. By exposing the model to varied data,
augmentation helps prevent overfitting and improves
generalization. In medical imaging, it's valuable for tasks like
classification and segmentation, where diverse image appearances
are common.

Figure 6: Augmented Images
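
A minimal sketch of simple geometric augmentation (rotation, flipping, and rescaling), independent of the manual rotation routine used in Appendix 1; the specific angle and scale factor are illustrative:

% Generate a few augmented variants of one image by rotation, flipping, and rescaling.
I = imread('cameraman.tif');                     % stand-in grayscale image

augmented = { ...
    imrotate(I, 15, 'bilinear', 'crop'), ...     % rotate by 15 degrees, keep original size
    flip(I, 2), ...                              % horizontal flip (mirror)
    imresize(imresize(I, 0.5), size(I)) ...      % down- then up-scale (resolution change)
};

figure;
subplot(1, 4, 1); imshow(I);            title('Original');
subplot(1, 4, 2); imshow(augmented{1}); title('Rotated 15 deg');
subplot(1, 4, 3); imshow(augmented{2}); title('Flipped');
subplot(1, 4, 4); imshow(augmented{3}); title('Rescaled');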


2.4 IMAGE FUSION AND ALPHA BLENDING

Image Fusion:
Image fusion involves combining multiple images acquired from
different sources or modalities, each potentially containing its own set
of errors or artifacts. By fusing these images, the errors present in one
image may be compensated for by the information from other images,
leading to a final fused image with reduced overall errors. For
example, if one image has noise artifacts while another has blur
artifacts, fusion techniques can combine the sharp details from one
image with the noise reduction from another, resulting in a fused
image with improved clarity and reduced noise.

Image Blending:
Image blending techniques can further refine the fused images by
seamlessly integrating them to create a visually coherent composite
image. Blending methods such as alpha blending or gradient domain
blending can be used to blend images with different errors, ensuring
smooth transitions and maintaining consistency across the composite
image. By blending images with different errors, you can effectively
combine their strengths while minimizing the impact of individual
errors, resulting in a final image that is visually appealing and
accurately represents the underlying data.
The equation used to implement this is given below, with a brief
explanation (the full script appears in Appendix 2):

blended_image = uint8((alphas(i) * double(image1) + alphas(i) * double(image2) + alphas(i) * double(image3) + alphas(i) * double(image4) + alphas(i) * double(image5)) / 5);

blended Image (blended_image):


 This variable holds the resulting blended image after combining
multiple input images.

Blending Parameter (alphas(i)) and Input Images (image1, image2,


image3, image4, image5):

 alphas(i) is a scalar value representing the blending parameter


for each input image.

 image1, image2, image3, image4, and image5 are the input


images to be blended. These images can be grayscale or color
images, but they must have the same dimensions.

Type Conversion:

 double(image1), double(image2), double(image3),


double(image4), and double(image5) convert the pixel values of
the input images to double precision.

 This conversion is necessary to prevent uint8 saturation (clipping at
255) while the weighted images are being summed, especially when
the blending parameters are close to or above 1.
Weighted Sum:

 The equation calculates a weighted sum of pixel values from all


input images.

 For each image, alphas(i) represents the weight applied to its


pixel values.

 By multiplying each image's pixel values with its corresponding


blending parameter (alphas(i)), the equation gives more weight
to images with higher blending parameters and less weight to
images with lower blending parameters.

Normalization:

 After the weighted pixel values from all input images are summed,
the sum is divided by the total number of images (5 in this case),
i.e. the weighted images are averaged.

 This averaging keeps the blended values close to the original
intensity range (0 to 255 for uint8); any values that still fall outside
this range (possible when alpha exceeds 1) are clipped by the
subsequent uint8 conversion.

Data Type Conversion:

 uint8() is used to convert the resulting blended image back to 8-


bit representation.

 This ensures that the pixel values of the blended image are
within the valid range for uint8 data type, making it suitable for
display and further processing.
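
For comparison with the averaged multi-image formulation above, classical two-image alpha blending weights one image by alpha and the other by (1 - alpha), so the result always stays within the original intensity range. A minimal sketch, with two toolbox sample images standing in for the preprocessed inputs:

% Classical two-image alpha blending: out = alpha*A + (1 - alpha)*B.
A = im2double(imread('cameraman.tif'));               % stand-in source image 1
B = im2double(imresize(imread('pout.tif'), size(A))); % stand-in source image 2, matched size

alpha   = 0.6;                                        % contribution of image A
blended = alpha * A + (1 - alpha) * B;                % pixel-wise weighted combination

figure; imshow(blended);
title(sprintf('Alpha blend, \\alpha = %.1f', alpha));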
Figure 7: Images after Blending
SOFTWARE REQUIREMENTS
1. MATLAB:

MATLAB is a high-level programming language and interactive environment


widely used in scientific and engineering applications. It is utilized in this
project for implementing image processing algorithms and analyzing medical
images.

Version: The code provided was developed and tested using MATLAB version
X.XX.

Obtaining MATLAB: MATLAB can be obtained from MathWorks


(https://www.mathworks.com/) through purchasing a license or accessing it via
academic institutions or organizations that provide MATLAB licenses.

MATLAB is a programming environment designed for numerical computing


and visualization.

It provides an interactive platform for developing algorithms, analyzing data,


creating models, and visualizing results.

MATLAB supports matrix manipulations, plotting of functions and data,


implementation of algorithms, creation of user interfaces, and interfacing with
programs written in other languages.

2. Image Processing Toolbox:

The Image Processing Toolbox is an essential component for performing


various image processing tasks such as noise reduction, image enhancement,
and analysis. It provides a rich set of functions specifically designed for image
processing applications.

Version: The code relies on functions from the Image Processing Toolbox,
version X.XX.

Functionality: This toolbox facilitates operations such as applying filters,


histogram equalization, image blending, and computing image quality metrics
like PSNR and SSIM.

The Image Processing Toolbox is an add-on for MATLAB that provides a


comprehensive set of functions for processing, analyzing, and visualizing
images.

It includes functions for tasks such as image filtering, segmentation,


morphological operations, feature extraction, image registration, and image
enhancement.

The toolbox is essential for tasks involving image manipulation, as


demonstrated in the provided code, such as applying filters, performing
histogram equalization, rotating images, blending images, and computing
metrics like PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural
Similarity Index).

Image Processing Toolbox is usually installed separately from MATLAB and


requires a license to use.
3. Additional Toolboxes (if applicable):

Depending on the specific functions used in the project code, additional


toolboxes beyond the Image Processing Toolbox may be required. These could
encompass toolboxes for signal processing, computer vision, statistics, or other
domains. Examples of functions that may belong to different toolboxes or
necessitate additional installations include imgaussfilt, imwarp, imref2d, psnr,
rms, and ssim.

3.1 Image Processing Tool Box

 imgaussfilt:

Functionality: The imgaussfilt function is utilized for Gaussian filtering, a


fundamental technique for noise reduction and image smoothing.

 imwarp:

Functionality: With the imwarp function, we perform geometric


transformations on images, including rotation, scaling, translation, and
affine transformations.

 imref2d:

Functionality: The imref2d function is crucial for creating 2-D spatial


referencing objects, enabling the association of spatial information with
images, such as defining pixel coordinates.
3.2 Inbuilt Matlab Functions

 psnr (Peak Signal-to-Noise Ratio):

Description: The psnr function calculates the Peak Signal-to-Noise Ratio
between two images, providing a quantitative measure of image
quality relative to a reference image.

 rms (Root Mean Square):

Description: The rms function computes the Root Mean Square value of
an image or a part of an image, representing the average magnitude of
pixel intensities.

 ssim (Structural Similarity Index):

Description: The ssim function calculates the Structural Similarity Index


between two images, indicating the similarity in structural information.

4. Dataset Acquisition and Management

4.1 Kaggle

Kaggle is a platform for data science and machine learning competitions,


datasets, and notebooks. The dataset utilized in this project was sourced from
Kaggle. Accessing and managing the dataset requires the following:

Kaggle Account:

A Kaggle account is necessary for accessing datasets and participating in


competitions. Users can sign up for a free account on the Kaggle website.
Kaggle API:

The Kaggle API enables programmatic access to datasets, competitions, and


other Kaggle resources. It simplifies the process of downloading datasets
directly into the local environment.

Software Requirements:

To utilize the Kaggle API, ensure that Python is installed on your system along
with the kaggle Python package. The kaggle package can be installed via pip,
the Python package manager.

4.2 Kenhub

Kenhub is a platform that provides educational resources for medical students


and professionals, including anatomical models, quizzes, and articles. The
dataset obtained from Kenhub may require specific tools or software for
processing and analysis.

Access to Kenhub Dataset:

Ensure that you have obtained permission or subscribed to Kenhub's services to


access the dataset legally.

Data Format:

Verify the format of the dataset provided by Kenhub. It may be in standard file
formats such as CSV, DICOM, or proprietary formats specific to medical
imaging.
Software Requirements:

Depending on the dataset's format and content, you may need specialized
software for medical image viewing, analysis, or processing. Common software
includes DICOM viewers, medical image analysis software, or programming
environments like MATLAB or Python with libraries such as NumPy, SciPy,
and scikit-image for image processing tasks.

5. DICOM Viewer Software

DICOM (Digital Imaging and Communications in Medicine) is the standard


format for the communication and management of medical imaging
information. Viewing DICOM images requires specialized DICOM viewer
software that is capable of interpreting and displaying medical images in the
DICOM format.

Software Requirement:

DICOM Viewer:

A DICOM viewer software is necessary for viewing medical images stored in


the DICOM format. This software provides functionalities such as image
viewing, manipulation, annotation, measurement, and analysis.
3. ALGORITHMS & FLOW-CHART

3.1 Masking of the medical image

3.1.1 Pseudocode

% Load your image

imagepath = 'original image path';

originalImage = imread(imagepath);

% Define a mask

mask = false(size(originalImage, 1), size(originalImage, 2));

mask(50:150, 100:200) = true;

% Apply the mask

maskedImage = originalImage;

maskedImage(repmat(mask, [1, 1, size(originalImage, 3)])) = 0;

imshow(maskedImage);
3.1.2 Algorithm

1. Load the Image:

 Read the medical image from the specified path using an


appropriate function (e.g., imread in MATLAB).

 Store the loaded image in a variable named originalImage.

2. Define the Mask:

 Create a new image (mask) with the same dimensions (height


and width) as the original image.

 Initialize the mask with all pixels set to a value representing the
background (e.g., logical false for binary masks).

 Define the region of interest (ROI) by setting the corresponding


pixels within the mask to a value representing the foreground
(e.g., logical true). This can be achieved using various
techniques depending on the desired mask shape:

 Rectangular mask: Specify a bounding box using row and


column indices (similar to the code example).

 Freehand mask: Use interactive tools to draw the desired shape


on the image.
 Automatic segmentation: Implement algorithms like
thresholding, edge detection, or region-based segmentation to
identify the ROI.

3. Apply the Mask:

 Create a new image variable (maskedImage) and initialize it

with a copy of the original image.

 Replicate the mask across the channel dimension (using repmat)

so that it has the same dimensions as the original image,
including the colour channels for colour images. This ensures
the masking operation applies to all channels.

 Use the replicated mask to index into maskedImage and set the

selected pixels to a background value (zero in the pseudocode).
Pixels corresponding to false values in the mask remain
unchanged, while pixels corresponding to true values are
blacked out, representing the degraded or missing region.
3.1.3 Flowchart: Masking of a Medical Image

Start

Initialize Environment

Load Images

Create Masks

Define Regions

Apply Masks

Display Results

End
Block diagram of the proposed method: the input images (Image 1, Image 2, ...) pass through Preprocessing (Normalization, Noise Removal, Contrast, Resize, Augmentation), then Image Fusion and Image Blending (Img 1 to Img 5 combined via the alpha channel) to produce the Composite/Resultant Image, followed by Postprocessing (Quality Evaluation).
3.2 Pre- Processing

3.2.1 Algorithms

Normalization:

 Read the input image.


 Convert the image to double precision.
 Normalize the pixel values in the image between 0 and 1.
 Display the normalized image.

Noise Removal:

 Read the input image.


 Apply a Gaussian filter to the image for noise reduction.
 Adjust the sigma value based on the noise level.
 Display the noise-reduced image.
Enhancement:
 Read the input image.
 Convert the image to grayscale.
 Perform histogram equalization on the grayscale image.
 Display the equalized image and its histogram.

Augmentation:
 Read the input image.
 Define a mask on the image.
 Apply the mask to create a masked image.
 Normalize the masked image.
 Augment the image by rotating it multiple times.
 Display the original and augmented images.

3.2.2 Pseudocodes

1. Normalization:
 Input: Image path
 Output: Normalized image
 Load the image from the specified path
 Convert the image to double precision
 Calculate the minimum and maximum pixel values in the image
 Normalize the pixel values in the image between 0 and 1
 Return the normalized image

2. Noise Removal:
 Input: Image path, Sigma value for Gaussian filter
 Output: Noise-reduced image
 Load the image from the specified path
 Apply a Gaussian filter to the image for noise reduction with the
specified sigma value
 Return the noise-reduced image

3. Enhancement:
 Input: Image path
 Output: Equalized image and its histogram
 Load the image from the specified path
 Convert the image to grayscale
 Perform histogram equalization on the grayscale image
 Return the equalized image and its histogram

4. Augmentation:
 Input: Image path, Number of augmented images
 Output: Augmented images
 Load the image from the specified path
 Define a mask and apply it to the image
 Normalize the masked image
 Augment the image by rotating it multiple times
 Return the augmented images
3.2.3 FLOW CHART

START

Normalize Image:
 Load image
 Convert to double precision
 Calculate the min/max pixel values
 Normalize the pixel values
 Output: Normalized image

Noise Removal:
 Load image
 Apply Gaussian filter
 Output: Noise-reduced image

Contrast Enhancement:
 Load image
 Perform histogram equalization
 Output: Equalized image and histogram

Augmentation:
 Load image
 Normalize the image
 Augment the image by rotating it in different angles
 Output: Augmented images

END
3.3 Image Fusion and Blending

3.3.1 Algorithm

1. Read Input Images:


 Read the input images from the specified file paths.
 Store each image in separate variables: image1, image2,
image3, image4, image5.
2. Resize Images:
 Determine the minimum height and width among all input
images.
 Resize each image to the dimensions of the smallest image.
 Store the resized images back in their respective variables.
3. Define Alpha Values Range:
 Define a range of alpha values from 0 to 1.25 with a step size of
0.1 (yielding 0, 0.1, ..., 1.2, as used in the code in Appendix 2).
 These alpha values will control the blending strength in the
alpha blending process.
4. Create a Figure:
 Create a new figure to display the blended images for different
alpha values.
5. Alpha Blending Loop:
 For each alpha value in the range:
 Combine all input images using alpha blending with the current
alpha value.
 Use a weighted sum of the pixel values in each image, where
the weight is determined by the alpha value.
 Display the blended image in a subplot of the figure.
 Repeat this process for each alpha value.
6. Display Input Images:
 Display each input image in separate subplots of the figure for
comparison.
 Set titles for each subplot to indicate the corresponding image.
7. End

3.3.2 FLOW CHART

START

Read Input Images

Resize Images

Define Alpha Values

Create Figure

Alpha Blending Loop

Display Output Images

End
3.3.3 PSEUDOCODE

1. Read Input Images:


- Read the input images from specified file paths.
- Store each image in separate variables.
2. Resize Images:
- Determine the minimum height and width among all input images.
- Resize each image to the dimensions of the smallest image.
3. Define Alpha Values:
- Define a range of alpha values from 0 to 1.25 with a step size of
0.1.
4. Create Figure:
- Create a new figure for displaying images.
5. Display Input Images:
- Display each input image in separate subplots of the figure for
comparison.
6. Alpha Blending Loop:
- For each alpha value in the range:
- Combine all input images using alpha blending with the current
alpha value.
- Display the blended image in a subplot of the figure.
7. End.
4. QUALITY ANALYSIS

Root Mean Square (RMS) values, Peak Signal-to-Noise Ratio
(PSNR), and the Structural Similarity Index (SSIM) are excellent
methods for quantitatively assessing the quality of images.

4.1 Root Mean Square (RMS):

 RMS is a measure of the average deviation of pixel values from


their mean.
 To calculate RMS:
 Compute the squared difference between each pixel value and the
mean pixel value.
 Take the average of these squared differences.
 Take the square root of the average to get the RMS value.
 Higher RMS values indicate greater deviation from the mean,
which may suggest poorer image quality

\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(I_i - I'_i\right)^2}

I_i = intensity value of a pixel in the first image
I'_i = intensity value of the corresponding pixel in the second image
N = total number of pixels
4.2 Peak Signal-to-Noise Ratio (PSNR):

 PSNR measures the quality of an image by comparing it to a


reference image.
 It quantifies the ratio between the maximum possible power of a
signal and the power of corrupting noise.
 PSNR is often calculated in decibels (dB).
 Higher PSNR values indicate higher image quality.
 To calculate PSNR:
 Compute the mean squared error (MSE) between the original
and distorted images.
 Take the logarithm of the maximum possible pixel value
squared.
 Subtract the logarithm of the MSE from the logarithm of the
maximum possible pixel value squared.
 Multiply the result by 10 to obtain PSNR in decibels.

\mathrm{PSNR} = 10\log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right)

MAX = maximum possible pixel value
MSE = mean squared error between the two images
4.3 Structural Similarity Index (SSIM):

 SSIM compares the structural similarity between two images.


 It considers luminance, contrast, and structure similarity.
 SSIM values range from -1 to 1, where 1 indicates perfect
similarity.
 Higher SSIM values indicate higher image quality.
 To calculate SSIM, you can use MATLAB's ssim function or
implement the algorithm manually.

\mathrm{SSIM}(I, I') = \frac{(2\mu_I\mu_{I'} + C_1)(2\sigma_{II'} + C_2)}{(\mu_I^2 + \mu_{I'}^2 + C_1)(\sigma_I^2 + \sigma_{I'}^2 + C_2)}

Here is how we have incorporated these metrics into our quality-checking
process (a short computation sketch follows this list):
 Compute RMS values for each image and analyze them to
assess the overall deviation from the mean pixel value.
 Calculate PSNR values between the original and distorted
images to quantify the level of noise and distortion.
 Use SSIM to compare the structural similarity between images,
indicating the presence of any structural distortions or artifacts.
PSNR, RMS, SSIM resultant images

TABLE

Alpha    PSNR      RMS        SSIM
0        7.4131    108.6135   0.51813
0.1      8.1511    99.7665    0.51981
0.2      8.9491    91.009     0.56595
0.3      9.8126    82.3971    0.62036
0.4      10.7513   73.9562    0.67612
0.5      11.7733   65.7465    0.7288
0.6      12.8683   57.4598    0.77494
0.7      14.0463   50.6088    0.81381
0.8      15.2434   44.093     0.84431
0.9      16.375    38.7072    0.86706
1.0      17.236    35.0542    0.88247
1.1      17.630    33.4997    0.89126
1.2      17.4897   34.0451    0.89479
Figure 8: Graphical representation of PSNR, RMS & SSIM

PSNR (Peak Signal-to-Noise Ratio):


 The PSNR values generally increase as the alpha value increases,
indicating that the quality of the blended image improves with
higher alpha values.
 This suggests that blending images with higher alpha values
results in less noise and better preservation of signal quality.
RMS (Root Mean Square):
 The RMS values decrease as the alpha value increases, implying
that the deviation of pixel values from their mean decreases with
higher alpha values.
 Lower RMS values indicate better consistency and uniformity in
pixel values across the blended image.
SSIM (Structural Similarity Index):
 The SSIM values generally increase with higher alpha values,
indicating better structural similarity between the original and
blended images.
 This suggests that blending images with higher alpha values
preserves the structural features and details present in the original
images.

Overall Conclusion:
 Increasing the alpha value in the blending process leads to
improvements in image quality over most of the tested range, as
evidenced by higher PSNR values, lower RMS values, and higher
SSIM values.
 In this experiment PSNR and RMS are best near alpha = 1.1 and
degrade slightly at 1.2, so selecting an alpha value around 1.0-1.1
yields the best-quality blended images, with reduced noise, more
consistent pixel values, and improved structural similarity to the
original images.
CONCLUSION

In this project, we introduce an innovative approach to image


restoration, merging image fusion and alpha blending techniques to
overcome the limitations of single-image processing. By harnessing
the complementary information from multiple sources, our method
produces composite images rich in detail and clarity. Preprocessing
steps are applied to prepare the input images, followed by image
fusion to integrate their information effectively. Subsequently, alpha
blending refines the fused image, offering precise control over the
contribution of each source image at the pixel level. This fine-tuned
control enhances the final image by accentuating specific features,
mitigating artifacts, and potentially yielding more visually compelling
and application-specific results.

Looking ahead, future endeavors will focus on delving into advanced


image fusion algorithms and exploring novel techniques for alpha
channel generation. These refinements aim to further elevate the
efficacy and versatility of the image restoration process. By
continually innovating and refining our methodologies, we aspire to
advance the state-of-the-art in image restoration, ultimately enhancing
the quality and utility of medical imaging and other domains reliant
on high-fidelity visual data.
FUTURE SCOPE

1. Real-time Medical Image Fusion for Surgical Navigation:


The project presents an opportunity for future advancements in real-time
medical image fusion for surgical navigation systems. By extending the current
alpha blending techniques, the project can contribute to the development of
innovative solutions that seamlessly integrate pre-operative imaging data, such
as MRI or CT scans, with intra-operative images, such as endoscopic or
laparoscopic views. This integration enables surgeons to benefit from enhanced
visualization and guidance during minimally invasive procedures, facilitating
better decision-making and improved patient outcomes.

Future research in this area may involve optimizing alpha blending algorithms
to achieve low-latency processing, ensuring real-time image fusion capabilities
during surgical procedures. Additionally, incorporating advanced registration
techniques into the image fusion process can help achieve accurate alignment of
pre-operative and intra-operative images, enhancing the reliability and
effectiveness of surgical navigation systems.

Furthermore, integrating the enhanced alpha blending techniques with existing


surgical navigation platforms or developing standalone navigation systems with
built-in image fusion capabilities can provide surgeons with intuitive interaction
and visualization tools. Surgeons can interactively manipulate fused images,
adjust transparency levels, and explore different views of the patient's anatomy
during surgery, thereby enhancing surgical precision and efficiency.

Overall, the future scope of real-time medical image fusion for surgical
navigation represents a natural extension of the current project, leveraging the
foundational concepts and techniques of alpha blending to address critical
challenges in surgical navigation and improve patient care in minimally
invasive surgery.
2. AI-driven Assistive Technologies for Image-guided
Interventions:

Incorporating AI-driven assistive technologies for image-guided


interventions stands as a promising avenue for future expansion of the
project. By integrating advanced image processing techniques with
artificial intelligence algorithms, the project can potentially develop
innovative solutions aimed at enhancing surgical precision, improving
patient outcomes, and streamlining clinical workflows. Such
technologies could include automated image analysis algorithms for
real-time feedback, intelligent guidance systems for surgical
navigation, and personalized treatment planning tools based on
machine learning models. This future scope aligns closely with the
project's objectives of leveraging alpha blending techniques to
enhance medical imaging applications, paving the way for
transformative advancements in healthcare technology and clinical
practice.

3. Interactive Augmented Reality (AR) for Education and


Training:

Integrating Interactive Augmented Reality (AR) for education and


training presents a compelling future scope for the project. By
leveraging alpha blending techniques within AR environments, the
project can facilitate immersive learning experiences and interactive
training simulations for medical professionals, students, and trainees.
These AR applications could include virtual anatomical structures
overlaid onto real-world scenes, interactive medical simulations, and
hands-on procedural training modules. Through intuitive interaction
and visualization tools, users can manipulate virtual objects, explore
anatomical structures from different perspectives, and practice
surgical procedures in a risk-free environment. This future direction
aligns with the project's focus on enhancing medical imaging
applications and has the potential to revolutionize medical education
and training practices, ultimately improving clinical proficiency and
patient care outcomes.
REFERENCES

 Academia, "Enhancement Of Medical Images Using Image Processing In


Matlab," Academia.edu. [Online]. Available:
https://www.academia.edu/keypass/RXlHa3FGdDVZN0lpdzRyQ0FvNGhia
kV6VjdvQ2E2MzFOcDhzVXBsckUxWT0tLXUvcFRQMENWQnRwZGtj
dk5kZlh1RlE9PQ==--79fb690f8b534360cc32becb33463ba1336503e0/t/
AqScy-RJq308i-whwaE/resource/work/50382431/
Enhancement_Of_Medical_Images_Using_Image_Processing_In_Matlab?
email_work_card=reading-history. [Accessed: 5 April 2024].

 Academia, "Improving Diagnostic Viewing of Medical Images using


Enhancement Algorithms," Academia.edu. [Online]. Available:
https://www.academia.edu/keypass/RXlHa3FGdDVZN0lpdzRyQ0FvNGhia
kV6VjdvQ2E2MzFOcDhzVXBsckUxWT0tLXUvcFRQMENWQnRwZGtj
dk5kZlh1RlE9PQ==--79fb690f8b534360cc32becb33463ba1336503e0/t/
AqScy-RJq308i-whwaE/resource/work/34212577/
Improving_Diagnostic_Viewing_of_medical_Images_using_Enhancement_
Algorithms?email_work_card=reading-history. [Accessed: 5 April 2024].

 Academia, "Paper1: Medical Image Enhancement Application Using


Histogram Equalization in Computational Libraries," Academia.edu.
[Online]. Available:
https://www.academia.edu/keypass/RXlHa3FGdDVZN0lpdzRyQ0FvNGhia
kV6VjdvQ2E2MzFOcDhzVXBsckUxWT0tLXUvcFRQMENWQnRwZGtj
dk5kZlh1RlE9PQ==--79fb690f8b534360cc32becb33463ba1336503e0/t/
AqScy-RJq308i-whwaE/resource/work/95250335/
Paper1_Medical_Image_Enhancement_Application_Using_Histogram_Equ
alization_in_Computational_Libraries?email_work_card=title. [Accessed: 5
April 2024].
 Academia, "A novel approach for contrast enhancement and noise removal
of medical images," Academia.edu. [Online]. Available:
https://www.academia.edu/keypass/anU3cXpMd3lHUmcwYVZwZUpsK2F
kS0lIZTB1a21GRzBuT3B4V0lLQkpTWT0tLTRueVBWQTVVZHdlei83a
ThBNGU4aFE9PQ==--560c84222c1e37f76676b951c34617b87c9af420/t/
AqScy-RHQpyTN-CXhB3/resource/work/78960132/
A_novel_approach_for_contrast_enhancement_and_noise_removal_of_med
ical_images?email_work_card=title. [Accessed: 5 April 2024].

 Academia, "A Novel Approach for Contrast Enhancement In Biomedical


Images Based on Histogram Equalization," Academia.edu. [Online].
Available:
https://www.academia.edu/keypass/aFVLbTI5Q080d3FjUk1mYUgzTmxm
TlZ2SjVIby8xa05kSVpYTWNqcFNJaz0tLTNFTkI0TGZFSEFHcVlrR0pU
cDI5Ymc9PQ==--d5e83a87d3ef53c0368aee4c3c4ec2a7ead23d5d/t/AqScy-
RH4EWUm-08mZd/resource/work/356379/
A_Novel_Approach_for_Contrast_Enhancement_In_Biomedical_Images_B
ased_on_Histogram_Equalization?email_work_card=title. [Accessed: 5
April 2024].

 Image Dataset 1:
Kenhub, "Medical Imaging and Radiological Anatomy," Kenhub.com.
[Online]. Available:
https://www.kenhub.com/en/library/anatomy/medical-imaging-and-
radiological-anatomy. [Accessed: 5 April 2024].

 Image Dataset 2:
K. Mader, "SIIM Medical Images," Kaggle, Month Year. [Online].
Available: https://www.kaggle.com/datasets/kmader/siim-medical-
images. [Accessed: 5 April 2024].
Appendix 1

% Load your image


imagepath = 'D:\project out put image\multipleimage project\ogbraintumor -
Copy.jpg';
originalImage = imread(imagepath);

% Define a mask
mask = false(size(originalImage, 1), size(originalImage, 2));
mask(50:150, 100:200) = true;

% Apply the mask


maskedImage = originalImage;
maskedImage(repmat(mask, [1, 1, size(originalImage, 3)])) = 0;

% Normalize the image


normalizedImage = double(maskedImage);
normalizedImage = (normalizedImage - min(normalizedImage(:))) /
(max(normalizedImage(:)) - min(normalizedImage(:)));

% Apply Gaussian filter for noise reduction


sigma = 1.0; % Adjust sigma value based on noise level
smoothed_image = imgaussfilt(normalizedImage, sigma);

% Perform histogram equalization


equalizedImage = histeq(normalizedImage);
% Augmentation: Rotate the image manually
numAugmentedImages = 15; % Adjust the number of augmented images as
needed
augmentedImages = cell(1, numAugmentedImages);
for i = 1:numAugmentedImages
angle = randi([1, 360]); % Random rotation angle between 1 and 360 degrees

% Rotate the image manually using interpolation


augmentedImages{i} = rotateImage(normalizedImage, angle);
end

% Display images
figure;

% Original Image
subplot(4, 5, 1); % 4-by-5 grid gives 20 tiles: 4 processed images + up to 15 augmented images
imshow(originalImage);
title('Original Image');

% Normalized Image
subplot(4, 5, 2);
imshow(normalizedImage);
title('Normalized Image');

% Noise-Removed Image (Gaussian Filter)


subplot(4, 5, 3);
imshow(smoothed_image);
title('Noise-Removed Image (Gaussian Filter)');
% Contrast-Enhanced Image
subplot(4, 5, 4);
imshow(equalizedImage);
title('Contrast-Enhanced Image');

% Augmented Images
for i = 1:numAugmentedImages
subplot(4, 5, i + 4);
imshow(augmentedImages{i});
title(['Augmented Image ' num2str(i)]);
end

function rotatedImage = rotateImage(image, angle)


% Rotate the image manually using interpolation

% Convert angle to radians


angleRad = deg2rad(angle);

% Compute rotation matrix


R = [cos(angleRad) -sin(angleRad); sin(angleRad) cos(angleRad)];

% Apply the rotation with bilinear interpolation. Letting imwarp determine
% the output spatial limits automatically avoids clipping the rotated
% content and removes the need for a manual output-size/padding calculation.
tform = affine2d([R [0; 0]; 0 0 1]);
rotatedImage = imwarp(image, tform, 'bilinear');
end
Appendix 2

% Read the input images


image1 = imread('D:\project out put image\multipleimage project\normalizedBT
01.png');
image2 = imread('D:\project out put image\multipleimage project\normalizedBT
2.png');
image3 = imread('D:\project out put image\multipleimage project\normalizedBT
3.png'); % Provide the path to image3
image4 = imread('D:\project out put image\multipleimage project\normalizedBT
4.png'); % Provide the path to image4
image5 = imread('D:\project out put image\multipleimage project\normalizedBT
5.png'); % Provide the path to image5

% Resize images to the same dimensions


min_height = min([size(image1, 1), size(image2, 1), size(image3, 1),
size(image4, 1), size(image5, 1)]);
min_width = min([size(image1, 2), size(image2, 2), size(image3, 2),
size(image4, 2), size(image5, 2)]);
image1 = imresize(image1, [min_height, min_width]);
image2 = imresize(image2, [min_height, min_width]);
image3 = imresize(image3, [min_height, min_width]);
image4 = imresize(image4, [min_height, min_width]);
image5 = imresize(image5, [min_height, min_width]);

% Define alpha values range


alphas = 0:0.1:1.25;

% Initialize arrays to store PSNR, RMS, and SSIM values


psnr_values = zeros(size(alphas));
rms_values = zeros(size(alphas));
ssim_values = zeros(size(alphas));

% Create a figure for displaying images and plots


figure;

% Display the original images


subplot(5, 5, 1);
imshow(image1);
title('Image 1');
subplot(5, 5, 2);
imshow(image2);
title('Image 2');
subplot(5, 5, 3);
imshow(image3);
title('Image 3');
subplot(5, 5, 4);
imshow(image4);
title('Image 4');
subplot(5, 5, 5);
imshow(image5);
title('Image 5');

% Loop over each alpha value


for i = 1:numel(alphas)
% Combine all images using alpha blending
blended_image = uint8((alphas(i) * double(image1) + alphas(i) *
double(image2) + alphas(i) * double(image3) + alphas(i) * double(image4) +
alphas(i) * double(image5)) / 5);

% Display the blended image


subplot(5, 5, i + 5);
imshow(blended_image);
title(['Alpha = ' num2str(alphas(i))]);

% Compute PSNR, RMS, and SSIM


psnr_values(i) = psnr(image1, blended_image);
rms_values(i) = rms(double(image1(:)) - double(blended_image(:)));
ssim_values(i) = ssim(image1, blended_image);
end

% Plot PSNR, RMS, and SSIM values


subplot(5, 5, 22);
plot(alphas, psnr_values, 'o-');
title('PSNR Curve');
xlabel('Alpha');
ylabel('PSNR Value');
grid on;

subplot(5, 5, 23);
plot(alphas, rms_values, 'o-');
title('RMS Curve');
xlabel('Alpha');
ylabel('RMS Value');
grid on;

subplot(5, 5, 24);
plot(alphas, ssim_values, 'o-');
title('SSIM Curve');
xlabel('Alpha');
ylabel('SSIM Value');
grid on;
