

CHAPTER 4
Generative Models in Infectious Disease Research
Garima Shukla, Vanshaj Awasthi and Prashant Dubey

Department of Computer Science and Engineering, Amity University, Maharashtra, India

Abstract: Malaria remains a serious global health problem, especially in low- and middle-income nations, where prompt and precise diagnosis is essential for adequate treatment and disease control. Conventional microscopic diagnostic methods, albeit regarded as the gold standard, are beset with disadvantages including reliance on expert technicians, long turnaround times, and heightened susceptibility to human error. This chapter delves into the promise of generative artificial intelligence (AI), namely Generative Adversarial Networks (GANs), for enhancing malaria diagnosis through synthetic data augmentation and advanced image analysis. By producing high-fidelity synthetic blood smear images, such models support more resilient deep-learning classifiers, resolving data scarcity and variability issues in malaria detection. The suggested methodology combined deep learning frameworks, including MobileNetV2, achieving accuracy of 95.80%, precision of 93.87%, recall of 98.00%, and F1-score of 95.8% in differentiating malaria-infected and non-infected blood smear images. In addition, generative models were utilized to produce synthetic training samples, which substantially improved model robustness and minimized misclassification errors. The adversarial training process between the discriminator and generator in the GAN architecture exhibited oscillating loss patterns, indicative of the dynamic learning process necessary for realistic image generation. The chapter also discusses the possibility of AI-powered molecular generation for drug discovery and epidemiological prediction, highlighting the transformative potential of AI in malaria research. Although AI-driven diagnostic tools bring significant improvements, issues like data quality, model generalization, and clinical uptake persist. This chapter offers a


comprehensive review of the advantages, limitations, and future research avenues, opening the door to AI-driven solutions in global malaria eradication.

Keywords: Malaria Diagnosis, Generative AI, Deep Learning, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Synthetic Data Augmentation, Medical Image Analysis, MobileNetV2, Artificial Intelligence in Healthcare, Infectious Disease Detection.

1. INTRODUCTION

1.1 Context:

Malaria continues to be one of the most common and deadly infectious illnesses globally, infecting millions of individuals each year, especially in low- and middle-income countries (LMICs) [1]. According to the World Health Organization (WHO), in 2021 alone there were nearly 247 million cases of malaria that caused almost 620,000 deaths, with most of the disease burden resting on Sub-Saharan Africa and Southeast Asia [2]. Despite concerted efforts to fight malaria through interventions like insecticide-treated nets, antimalarial medications, and vector control programs [3], the disease remains a significant public health problem. The intricacy of the malaria transmission cycle, the swift development of drug-resistant Plasmodium parasite strains, and inadequacies in healthcare infrastructure are among the factors that account for the persistence of this global health problem [4].

At the center of malaria control lies correct and early diagnosis. Conventional diagnostic techniques, including blood smear microscopy [5], have been the reference point for decades. These methods stain a blood sample using Romanowsky stain (RNC) and examine it under a microscope to detect Plasmodium parasites [6]. Despite its effectiveness, this diagnostic tool has several limitations that affect its efficiency: it relies heavily on laboratory technicians' expertise and experience [7], it is very time-consuming, and it can be affected by human error, particularly in developing countries with a lack of resources. Moreover, malaria usually progresses through different stages, and for successful treatment, the species and stage of the infection must be identified. While great progress has been made, conventional methods fall short in detecting low parasitemia and early-stage infections, making an early diagnosis of malaria difficult and lessening the efficacy of preventive interventions.


1.2 Research Objective and Structure of the Chapter:

This chapter discusses the potential of generative AI to advance malaria diagnosis through predictive modeling from RNC smear images. The main aim is to discuss how generative models such as GANs, combined with CNN classifiers, can be used for malaria diagnosis, with an emphasis on the utilization of synthetic data to enhance classification models. We review the existing body of research on AI-based malaria diagnosis, the advantages and disadvantages of generative models, and the challenges of using such models in clinical practice [8]. The organization of the chapter is as follows: the Background gives a brief overview of malaria, its lifecycle, and how it was originally diagnosed, culminating in a discussion of the challenges facing malaria diagnosis, particularly in low-resource settings. Generative AI Models in Medical Imaging introduces generative AI models such as GANs and their application in medicine. We explain how these models can create synthetic data and improve the training process for machine learning algorithms. The Methodology section describes the major methodologies involved in the application of AI for malaria diagnosis, including dataset description, preprocessing, model selection, and training processes. We examine how generative models are integrated into the training pipeline and assess the performance of AI-based malaria diagnostic systems. The Results and Discussion section presents findings from different studies, revealing how AI models have enhanced diagnostic accuracy, minimized human error, and shed light on the likely future impact of these technologies [9].

The Challenges and Future Work section acknowledges that although there is potential for AI in
the diagnosis of malaria, there are several challenges that remain. The section will touch on data
quality and labeling issues, model generalizability to real environments, and the ethical
considerations of using AI in medicine [10]. We will also outline future research avenues,
including the potential for AI integration with mobile technologies to bring malaria diagnostics
to the field. With this chapter, we aim to provide a clear picture of how generative AI can be
employed to transform malaria diagnosis, particularly in resource-limited environments [11]. By
examining recent advancements, methodologies, and challenges, this chapter will contribute
valuable insights to the growing body of research on AI in global health [12].


2. Literature Review:


This research [13] introduces an effective deep learning model, EfficientNet-B2, for detecting
malaria from red blood cell images with a remarkable accuracy of 97.57% and AUC of 99.21%.
In contrast to conventional methods of malaria diagnosis, which are labor-intensive and
susceptible to human error, deep learning provides an accurate and automated solution. The
authors compare with CNN, VGG-16, Dense-Net, and other deep learning architectures, showing
better performance in terms of accuracy, precision, recall, and F1-score. For robustness, k-fold
cross-validation was used, verifying the model's generalization capability. In addition, analysis of
the confusion matrix pointed to its high accuracy in classifying parasitized and uninfected cell
images. The research puts emphasis on EfficientNet-B2's computational cost-effectiveness and
usability in clinical practice, making it easier for healthcare providers to work
with. Nevertheless, although the model is good for the provided data, more study is required to
validate its generalizability in multiple clinical environments as well as establish its reliability for
real-world applications.

This research [14] proposes a deep learning-based method for malaria diagnosis, targeting the
detection of malaria parasites and leukocytes in thick blood smear images. Using YOLOv8 as the
detection model, the work improves its performance using data augmentation methods, achieving
95% accuracy in parasite detection and 98% in leukocyte detection. One of the major
contributions of the research is that it can estimate parasite density according to WHO guidelines
with a 93% agreement with clinical experts. Also, the AI model saves significant time in
diagnostics, processing 50 images in 30 seconds, while human experts take several minutes. This
highlights the potential scalability and efficacy of AI-based malaria diagnosis, especially in
low-resource settings where expert healthcare practitioners may be out of reach. Even so, the research recognizes dataset quality, interpretability concerns, and deployment challenges as key limitations, warranting additional investigation to make the proposed method more robust and suitable for real-world use.

This paper [15] introduces AIDMAN, a computer vision object detection system based on AI for
malaria diagnosis using smartphone thin-blood-smear images. The system integrates the
YOLOv5 model to detect cells within the images and a Transformer model to classify those cells as infected or uninfected. A convolutional neural network then processes heatmaps of the
most representative cells to diagnose the whole blood smear image. AIDMAN is highly accurate,
at 98.62% for cell classification and 97% for diagnosis of blood smears. In subsequent clinical validation, AIDMAN's accuracy of 98.44% was on par with that of microscopists. The authors suggest that AIDMAN may help in malaria diagnosis, particularly in resource-poor settings without skilled parasitologists and equipment.

This work [16] describes a strong convolutional neural network (CNN) model for precise
detection and classification of malaria parasites, i.e., Plasmodium falciparum and Plasmodium
vivax, from thick blood smear images. The method uses state-of-the-art image preprocessing
methods, such as noise removal, feature enhancement, and edge detection, to enhance
classification accuracy. One of the innovations of this model is its application of a seven-channel
input tensor, which supports better feature extraction and contributes to its outstanding
performance. The model has 99.51% accuracy, 99.26% precision, 99.26% recall, 99.63%
specificity, and a 99.26% F1-score, which indicates its success in malaria classification.
Cross-validation results also affirm the reliability of the model, producing 63,654 true
predictions out of 64,126 cases over five iterations. The research highlights the superiority of the
proposed method over conventional methods in tackling the complexity of multiclass classification of malaria species, which is still difficult for human experts. To improve its clinical usefulness, the authors suggest validating the model on real-world data and integrating user-friendly interfaces customized for healthcare professionals, which can lead to an
effective AI-based diagnostic tool in resource-poor environments.

This research [17] presents an innovative automated system, iMAGING, for malaria diagnosis by
coupling artificial intelligence software with a low-cost robotized microscope. A corpus of
2,571 labeled thick blood smear images was created to train convolutional neural networks
(CNNs) to identify Plasmodium parasites and leukocytes. Of the models that were tested,
YOLOv5x had the best performance with 92.10% precision, 93.50% recall, 92.79% F-score, and
94.40% mean Average Precision (mAP) in leukocyte, early trophozoite, and mature trophozoite
detection. For purposes of improving affordability and accessibility, the research came up with a
3D-printed prototype that automates standard optical microscopes through auto-focusing, slide tracking, and smartphone image capture. The trained CNN models were embedded in a smartphone app, iMAGING, that operates the robotized microscope and executes automated
malaria diagnosis. This entire system is a leap in digital image analysis as well as microscope
automation, with potential applications to facilitate malaria diagnosis in resource-constrained
communities and assist in the battle against infectious diseases worldwide.

This research [18] proposes a new CNN architecture for malaria diagnosis from blood samples
with outstanding accuracy of 99.68%, outperforming current methods in terms of accuracy and
speed. The CNN was trained using many blood smear images and proved to be highly sensitive
and specific in differentiating infected and uninfected samples. One of the innovations of the
method is incorporating a semantic segmentation network that provides accurate microscopic
image analysis for malarial parasites. The output is then transformed into an easy-to-use format, enabling fast diagnosis and visual checking. The study highlights the practical importance of using this method in resource-poor environments, in which early, accurate diagnosis is lifesaving. Additionally, the findings contribute to the broader application of deep
learning in infectious disease detection, paving the way for future advancements and
optimizations in AI-driven medical diagnostics.

This paper [19] introduces an innovative malaria diagnosis method that incorporates
InfoGainAttributeEval feature selection coupled with Artificial Intelligence (AI) and Machine
Learning (ML) classifiers to accurately classify Malignant, Tertian, Quartan, and Suspected malaria cases. From 4,000 samples, the research identifies 100 useful features through
which it classifies using Artificial Neural Networks (ANNs), Naïve Bayes (NB), Random Forest
(RF), and Ensemble algorithms like Meta Bagging, Random Committee Meta, and Voting. The
suggested approach attained an unprecedented 100% accuracy in malaria classification, far
surpassing current methods. Furthermore, the study identifies healthcare accessibility issues in
Western Kenya's remote populations, where data collection was conducted, underlining the
necessity of technological solutions in resource-constrained settings. The authors envision a fully
realized malaria diagnosis app, using their work to offer real-time, high-accuracy diagnosis for
remote and underserved areas.


This work [20] conducts a systematic review of the literature for AI-based malaria detection and
diagnosis from blood smear images, examining 135 articles published between 2014 and 2024.
The review identifies an increasing use of deep learning (DL) methods, specifically
convolutional neural networks (CNNs), for malaria parasite identification and classification.
Most research utilized pre-trained CNN models such as VGG and ResNet, while others
investigated proprietary CNNs or hybrid models combining DL with conventional machine
learning methods. The NIH dataset was the most used benchmark, referenced in more than 50%
of the studies, although issues were raised about potential annotation errors impacting model
reliability. Even with progress, the research points to critical shortcomings in external
validation and model interpretability, vital for clinical acceptance and confidence in AI-assisted
diagnostics. Further, the review highlights the burgeoning potential of mobile and web
applications to assist in malaria diagnosis in resource-scarce settings, although data quality
issues, computational limitations, and approval by regulatory agencies remain a challenge.
Although AI-driven malaria diagnosis has progressed considerably, the research highlights the
necessity for more studies to increase model robustness, make them real-world applicable, and
allow for safe clinical integration.

The research [21] introduces a new Deep Boosted and Ensemble Learning (DBEL) paradigm for
identifying malaria parasites from red blood cell image data. The paradigm consists of a new
Boosted-BR-STM CNN and a machine learning ensemble classifier. The Boosted-BR-STM
CNN includes a new Split Transform Merge (STM) block for extracting homogenous and
boundary patterns of parasitic areas. It also uses channel Squeezing-Boosting (SB) methods at
various levels to extract textural differences between artifacts and parasites. The boosted deep
feature maps from the Boosted-BR-STM are input into an ensemble of classifiers such as SVM, MLP, and AdaBoostM1 to facilitate discrimination and generalization. The DBEL framework in this work exploits discrete wavelet transform for data augmentation and transfer learning for model initialization, enhancing robustness. It performs better than
current methods on the NIH malaria dataset with 98.50% accuracy, 0.985 F-score, 0.992
sensitivity, and 0.996 AUC. The new ideas proposed, i.e., boundary-region feature extraction,
channel boosting, and ensemble learning, all help the framework perform better in detecting
malaria-infected cells precisely. Early and accu-rate detection can avoid permanent disability due

Page 9 of 29 - AI Writing Submission Submission ID trn:oid:::1:3276845506


Page 10 of 29 - AI Writing Submission Submission ID trn:oid:::1:3276845506

to the disease.

The paper [22] stresses the significance of enhancing malaria microscopy with AI strategies in
India. It reveals the shortcomings of present microscopy practice, which is time-intensive, dependent on qualified microscopists, and variable in sensitivity and specificity. Artificial intelligence-based methods such as deep learning models
can enhance particle tracking, image enhancement, and image segmentation to improve accuracy
and speed in diagnosing malaria. The authors also stress the importance of creating locally
annotated databases of images of malaria parasites for effectively training AI models.
Microscopes with AI capability have already yielded promising results for identifying malaria parasites and drug-resistant strains. Combining AI-based techniques with conventional microscopy practices can enhance diagnostic precision, particularly in remote regions of India. Nevertheless, the use of AI-based microscopy involves ongoing vigilance,
establishment of infrastructure facilities in rural localities, and physician-patient relationship
maintenance. Investments in AI-based microscopy techniques within the next ten years can go a
long way in enhancing the diagnostic sensitivity and accuracy, a critical factor to achieve malaria
elimination in India.

Table 1: Tabular comparison of literature review

| Study | Findings | Backlogs |
| [13] EfficientNet-B2 for Malaria Detection | Achieved 97.57% accuracy and 99.21% AUC. Outperformed CNN, VGG-16, and DenseNet. Used k-fold cross-validation and confusion matrix analysis. | Needs further validation in diverse clinical settings. |
| [14] YOLOv8 for Parasite & Leukocyte Detection | 95% accuracy for parasite detection, 98% for leukocytes. Estimated parasite density with 93% agreement with experts. Reduced diagnosis time to 30 s for 50 images. | Dataset quality, interpretation challenges, and real-world implementation issues. |
| [15] AIDMAN (YOLOv5 + Transformer) | Achieved 98.62% accuracy for cell classification and 97% for blood smear diagnosis. Comparable to expert microscopists. | Needs real-world validation and accessibility improvements. |
| [16] CNN for Plasmodium Classification | Achieved 99.51% accuracy. Used seven-channel input tensors for improved feature extraction. Cross-validation confirmed reliability. | Needs validation on real-world data and user-friendly interfaces for healthcare use. |
| [17] iMAGING (AI + Robotized Microscope) | Used YOLOv5x, achieving 92.10% precision, 93.50% recall. Developed a 3D-printed low-cost automated microscope for malaria diagnosis. | Needs scalability and field testing in real-world clinics. |
| [18] CNN with Semantic Segmentation | Achieved 99.68% accuracy. Used segmentation for precise parasite detection. Optimized for high sensitivity and specificity. | Requires field validation for deployment in clinical settings. |
| [19] InfoGainAttributeEval + ML for Malaria Types | Achieved 100% accuracy using ANN, RF, and Ensemble methods. Classified malaria into Malignant, Tertian, Quartan, and Suspected cases. | 100% accuracy suggests possibilities of overfitting. |
| [20] Systematic Review (135 Studies, 2014-2024) | Identified CNN-based approaches as dominant. NIH dataset used in 50% of studies. Highlighted issues with external validation and explainability. | Concerns over annotation errors in datasets, lack of real-world validation. |
| [21] Deep Boosted & Ensemble Learning (DBEL) | Achieved 98.50% accuracy, 0.992 sensitivity, and 0.996 AUC. Introduced novel Split Transform Merge (STM) and Squeezing-Boosting (SB) techniques. | Requires testing on more diverse datasets and clinical validation. |
Despite advancements in AI-driven malaria diagnosis [23], existing approaches face limitations in real-world applicability (Table 1). Many deep learning models, including CNN-based classifiers [24] and YOLO-based object detection frameworks [23], achieve high accuracy but struggle with data scarcity, model generalization, and clinical interpretability. Dependence on manually labeled data restricts variety [25], weakening generalization across geographic regions. Moreover, fast object detection algorithms tend to miss the very low levels of parasitemia and pre-microscopic disease phases important for proper disease care [26]. Black-box modeling also impedes clinical adoption, since such models still require interpretation. To counter these limitations, this work makes use of synthetic data augmentation from generative AI models (GANs and VAEs), strengthening model robustness on sparse datasets. By creating high-fidelity blood smear images, the technique enhances generalizability and detection of malaria across infection stages. The use of MobileNetV2 guarantees computationally efficient operation, making it applicable in low-resource settings. Beyond diagnosis, the research extends the application of AI to molecular generation for drug development and epidemiological prediction, connecting gaps in treatment, diagnosis, and disease monitoring. By tackling data variability, model interpretability, and scalability, this research presents a comprehensive AI-driven malaria detection framework with significant potential for clinical adoption.

3. Methodology:

Figure 1. Methodology workflow utilized in the study


The approach in this study is based on creating an AI-based malaria diagnosis model through the integration of deep learning and generative models to improve data availability and classification accuracy (Figure 1). Preprocessing is performed on microscopic blood smear images through resizing, normalization, and augmentation to provide high-quality training input. To overcome data paucity, Generative Adversarial Networks (GANs) synthesize simulated blood smear images, enhancing model robustness across different stages of infection [27]. MobileNetV2 is utilized for classification because of its efficiency and accuracy, trained with binary cross-entropy loss and the Adam optimizer. Performance is measured with standard metrics to ensure a scalable and efficient AI-based solution for malaria detection, especially in resource-constrained environments.
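The adversarial training described here, in which a generator synthesizes 128×128 smear images while a discriminator learns to distinguish them from real ones, can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch, not the exact architecture used in this study; the latent dimension, layer sizes, and function names are assumptions.

```python
import torch
import torch.nn as nn

LATENT_DIM = 100  # assumed size of the generator's noise input


class Generator(nn.Module):
    """Upsamples a noise vector to a 3x128x128 synthetic smear image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 256, 8, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 8x8
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 16x16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 32x32
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # 64x64
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                                     # 128x128
        )

    def forward(self, z):
        return self.net(z)


class Discriminator(nn.Module):
    """Downsamples an image to a single real-vs-fake logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),    # 64x64
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 16x16
            nn.Conv2d(128, 1, 16), nn.Flatten(),                   # one logit per image
        )

    def forward(self, x):
        return self.net(x)


def train_step(G, D, real, opt_g, opt_d, loss_fn=nn.BCEWithLogitsLoss()):
    """One adversarial step: D learns real vs. fake, then G learns to fool D.
    The two losses typically oscillate as each network adapts to the other."""
    b = real.size(0)
    fake = G(torch.randn(b, LATENT_DIM, 1, 1))

    # Discriminator update: real images labeled 1, synthetic images labeled 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(b, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(b, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: push D's output on fakes toward the "real" label.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(b, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

The oscillating d_loss/g_loss values returned by such a loop correspond to the loss patterns described in the abstract.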

3.1 Data Preprocessing:

The approach used in the present study is carefully designed to enable an efficient deep learning-based technique for malaria detection through generative models (Figure 2). The method starts with thorough data preprocessing, ensuring high-quality input for training the model. The dataset, consisting of microscopic blood smear images labeled as parasitized or uninfected, undergoes a series of transformations [28]. The images are resized to a standardized size of 128×128 pixels, normalized with a mean of 0.5 and standard deviation of 0.5, and converted to tensor representation to enable smooth integration into deep learning frameworks [29].


Figure 2. Smear images of dataset

3.2 Dataset Distribution:

The dataset is rigorously divided into three separate subsets with an equal balance of parasitized and uninfected samples (Table 2). This not only increases the model's robustness but also facilitates an effective assessment of its performance across data segments. The splits are formed by shuffling the dataset indices and dividing them evenly, providing an unbiased sample of the original data distribution. These subsets are then loaded with high-performance data loaders, maximizing memory management and computation speed through mini-batch processing. Pinning memory and using multiple worker threads speed up data transfer, allowing for seamless execution of deep learning pipelines in GPU-based environments.

Table 2: Image distribution for training


Class         Total Images   Split 1   Split 2   Split 3
Parasitized   13,781         4,593     4,594     4,594
Uninfected    13,781         4,593     4,594     4,594
Total         27,562         9,186     9,188     9,188
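A minimal sketch of the shuffle-and-split step described above, assuming PyTorch data utilities. Since the three splits in Table 2 differ by a couple of images, any remainder here is simply appended to the leading splits; the helper names are illustrative.

```python
import torch
from torch.utils.data import DataLoader, Subset

def make_splits(dataset, n_splits=3, seed=0):
    """Shuffle dataset indices and divide them into n roughly equal subsets."""
    g = torch.Generator().manual_seed(seed)
    idx = torch.randperm(len(dataset), generator=g).tolist()
    base = len(idx) // n_splits
    parts = [idx[i * base:(i + 1) * base] for i in range(n_splits)]
    for j, leftover in enumerate(idx[n_splits * base:]):
        parts[j].append(leftover)      # distribute any remainder
    return [Subset(dataset, p) for p in parts]

def make_loader(split, batch_size=64, num_workers=2):
    # Pinned memory plus worker threads accelerate host-to-GPU transfer.
    return DataLoader(split, batch_size=batch_size, shuffle=True,
                      num_workers=num_workers, pin_memory=True)
```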

3.3 Model Selection and Architecture:

For model selection, a high-performance deep learning model, MobileNetV2, is utilized because
of its effectiveness in medical image classification tasks with low computational overhead. The
pre-trained model is fine-tuned specifically for malaria detection by replacing the classification
head with a fully connected layer containing a single output neuron for binary classification.
This adjustment ensures that the model can efficiently distinguish between infected and
non-infected samples. The model is trained with Binary Cross-Entropy with Logits Loss, which
is appropriate for binary classification problems. The Adam optimizer is used with a learning
rate of 0.0001, striking a balance between convergence rate and stability.

MobileNetV2 was chosen over ResNet-50 and ViT-B/16 because of its high classification
accuracy and computational efficiency, which make it suitable for low-resource environments.
MobileNetV2 has only 3.4 million parameters, roughly 7.5× fewer than ResNet-50 and 25×
fewer than ViT-B/16, drastically cutting memory and processing requirements. It consumes only
300 million MAdds per inference, as opposed to 3.8 billion for ResNet-50, making it more than
12× more efficient. Although ResNet-50 offers robust feature extraction, its considerable latency
and memory consumption make it unsuitable for real-time malaria diagnosis on low-power
devices. ViTs, although powerful at modeling long-range dependencies, require extensive
datasets and high-end GPUs, which restrict real-world use. MobileNetV2's depthwise separable
convolutions optimize performance for fast, scalable, and energy-efficient AI deployment in
resource-constrained healthcare settings where real-time mobile-based diagnosis is paramount
Table 3.

Table 3: MobileNetV2 comparison with other models

Model         Parameters   Computational   Inference Time   Dataset           Suitability for
              (Million)    Cost (MAdds)    (ms) on CPU      Requirement       Edge Devices
MobileNetV2   3.4M         300M            <10 ms           Small to Medium   Highly efficient (designed for mobile/edge)
ResNet-50     25.6M        3.8B            50–100 ms        Medium to Large   Moderate (high computational cost)
ViT-B/16      86M          17.5B           200+ ms          Very Large        Low (requires high-end GPUs)

3.4 Model Training:

To further optimize training efficiency, an asynchronous training scheme is introduced that takes
advantage of multiple CUDA streams for concurrent execution. This method supports
simultaneous training across the three dataset splits, so that each subset is trained independently
without contention for resources. At every training iteration, mixed precision training is applied
using Automatic Mixed Precision (AMP) to maximize memory utilization and speed up
computation. The model runs through several epochs, computing the loss and updating the
weights accordingly. Accuracy metrics are computed continuously, with intermediate results
saved for performance tracking Table 4.

Table 4: Model parameters


Hyperparameter Value
Model Backbone MobileNetV2
Optimizer Adam
Learning Rate 0.0001
Batch Size 64
Loss Function BCEWithLogitsLoss
Mixed Precision Yes (AMP)
Number of Splits 3
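A sketch of one AMP training epoch under the hyperparameters in Table 4, assuming PyTorch's `autocast`/`GradScaler` API; the per-split CUDA-stream scheduling is indicated only in a comment, since streams apply only on GPU.

```python
import torch

def train_one_epoch(model, loader, criterion, optimizer, device, scaler):
    """One epoch with Automatic Mixed Precision (AMP); returns the accuracy."""
    model.train()
    correct, total = 0, 0
    for images, labels in loader:
        images = images.to(device, non_blocking=True)
        labels = labels.float().view(-1, 1).to(device, non_blocking=True)
        optimizer.zero_grad(set_to_none=True)
        # autocast runs the forward pass in reduced precision on GPU.
        with torch.autocast(device_type=device.type,
                            enabled=(device.type == "cuda")):
            logits = model(images)
            loss = criterion(logits, labels)
        scaler.scale(loss).backward()   # gradient scaling avoids fp16 underflow
        scaler.step(optimizer)
        scaler.update()
        correct += ((logits > 0) == labels.bool()).sum().item()
        total += labels.numel()
    return correct / max(total, 1)

# On GPU, the three splits can run concurrently, each under its own stream:
#   streams = [torch.cuda.Stream() for _ in range(3)]
#   with torch.cuda.stream(streams[i]):
#       train_one_epoch(model_i, loader_i, criterion, opt_i, device, scaler_i)
scaler = torch.cuda.amp.GradScaler(enabled=torch.cuda.is_available())
```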

3.5 Generative Adversarial Network Model:


A Deep Convolutional Generative Adversarial Network (DCGAN) was employed to produce
synthesized blood smear images for malaria investigation. The design features two neural
networks, the Generator and the Discriminator, that are trained adversarially to make synthetic
images more realistic. The Generator is tasked with converting random noise vectors from a
latent space into realistic blood smear images. It uses a sequence of transposed convolutional
layers, each with batch normalization and ReLU activation, to successively upsample and refine
the image structure. The last layer uses a Tanh activation function to normalize pixel values
between -1 and 1 to match the real dataset. By learning the underlying distribution of real
images, the Generator improves its ability to create high-fidelity synthetic samples [30].

The Discriminator is a binary classifier that separates real blood smear images from synthetic
ones generated by the Generator. It contains several convolutional layers with Leaky ReLU
activations and batch normalization, facilitating effective feature extraction and stable training.
The final layer uses a Sigmoid activation function to produce a probability score indicating
whether an image is real or synthetic. By iteratively increasing its classification precision, the
Discriminator pushes the Generator to create more realistic images. Training used adversarial
optimization, with the two networks updated alternately. Binary Cross-Entropy (BCE) loss
directed learning so that the Generator reduces the Discriminator's ability to distinguish
counterfeit images while the Discriminator increases its classification accuracy Figure 3.

The Adam optimizer was used with carefully tuned learning rates and momentum parameters to
ensure training stability and avoid problems such as mode collapse. A fixed noise vector was
applied throughout training to produce consistent sample images, enabling visual observation of
the Generator's evolution. The trained GAN model effectively produced synthetic blood smear
images that can be added to malaria detection pipelines to boost dataset diversity and improve
diagnostic precision. Future improvements may involve integrating attention mechanisms or
hybrid generative models such as Variational Autoencoders (VAEs) for further gains in image
fidelity and structural consistency.
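A DCGAN-style sketch of the two networks described above, assuming PyTorch and a 64×64 output resolution; the training resolution and channel widths are not stated in the text, so the layer sizes here are illustrative.

```python
import torch
import torch.nn as nn

LATENT = 100  # latent-vector size (an assumed value)

class Generator(nn.Module):
    """Transposed convolutions with BatchNorm + ReLU; Tanh output in [-1, 1]."""
    def __init__(self, nz=LATENT, ngf=64, nc=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),              # 4x4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),              # 8x8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),              # 16x16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf), nn.ReLU(True),                  # 32x32
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),                                           # 64x64 output
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Strided convolutions with LeakyReLU + BatchNorm; Sigmoid real/fake score."""
    def __init__(self, ndf=64, nc=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ndf * 4, 1, 8, 1, 0, bias=False), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x).view(-1)
```

Training alternates BCE updates: the Discriminator is updated on real batches labeled 1 and fake batches labeled 0, then the Generator is updated on fake batches labeled 1 (the standard non-saturating trick).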

The use of Generative Adversarial Networks (GANs) to generate synthetic blood smear images
in this research was not for explicit training of the deep learning model but to address data
scarcity, class imbalance, and real-world variation in malaria-infected blood smears. In actual
healthcare environments, differences in staining protocols, image quality, and microscope
hardware introduce variability that can compromise model performance.


Figure 3. GAN image generation transformation


4. Results:
The performance of the MobileNetV2 model was evaluated using key classification metrics on
the malaria blood smear dataset. The model achieved a high accuracy of 95.80%, reflecting its
excellent ability to differentiate Parasitized from Uninfected blood cells.

The precision score of 0.9387 implies that the model effectively limits false positives, producing
high-confidence output for predicted infected cases. A recall (sensitivity) of 0.9800 indicates the
model's strong capability to identify true malaria-infected cases, which is critical in medical
diagnosis, where missing positive cases can be harmful Figure 4.

Figure 4. Precision-Recall Curve


The F1-score of 0.9589, the harmonic mean of precision and recall, further indicates that the
model's performance is balanced, neither unduly favoring positive predictions nor missing many
infected samples. Its discriminative power in distinguishing infected from uninfected samples
remains high even under differing class distributions, as reflected in the 0.9950 AUC-ROC and
0.9949 AUC-PR measures Figure 5.

The minimal number of false negatives (10 cases) is especially notable, as it suggests that the
model can identify malaria with high dependability, minimizing the risk of undetected cases.
The 32 false positives represent a slight trade-off, with some uninfected samples mislabeled as
infected; this is generally acceptable in medical use, where early detection is paramount
Figure 6.
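As a consistency check, the reported metrics can be reproduced from confusion-matrix counts. The false-positive (32) and false-negative (10) counts come from the text; the TP and TN values below are inferred so that the stated precision, recall, and accuracy are reproduced (implying a test set of roughly 1,000 images) and are assumptions, not published figures.

```python
# FP and FN come from the reported confusion matrix; TP and TN are inferred
# assumptions chosen to match the reported metrics.
TP, FP, FN, TN = 490, 32, 10, 468

precision = TP / (TP + FP)                                  # ~0.9387
recall = TP / (TP + FN)                                     # 0.9800
f1 = 2 * precision * recall / (precision + recall)          # ~0.9589
accuracy = (TP + TN) / (TP + FP + FN + TN)                  # 0.9580
```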

Figure 5. True Positive/False Positive Rate Plot


Overall, the MobileNetV2 model exhibits strong generalization, maintaining high accuracy
alongside robust sensitivity and specificity. Its lightweight architecture makes it a strong
candidate for deployment in resource-limited environments, such as remote healthcare facilities,
where computation is constrained. Future research might emphasize further reducing
misclassifications through ensemble methods or advanced post-processing approaches to
improve interpretability and reliability.

Figure 6. Confusion matrix for MobileNetV2

Values for the training losses of both the Discriminator (Loss_D) and the Generator (Loss_G)
throughout the GAN training process fluctuate dynamically over 30 epochs, showing the
intricate nature of the adversarial learning process.

At first, both losses exhibit dramatic fluctuations, with the Discriminator loss (Loss_D) showing
precipitous changes, particularly between epochs 0 and 4, where values oscillate from extreme
peaks (e.g., 6.70) to extremely low values (e.g., 0.02). This reflects the early learning process, as
the Discriminator learns to separate real from fabricated images. Concurrently, the Generator
loss (Loss_G) shows similar volatility, with values as high as 16.41 during the first epoch,
reflecting an initial inability to produce samples plausible enough to fool the Discriminator.


During the middle epochs, Loss_D tends to be lower overall than its initial values but still shows
some fluctuation, indicating that the Discriminator is gradually improving at identifying real
versus fake images. Loss_G, however, fluctuates more wildly in this range, with sporadic spikes
where the Generator struggles to generate believable fake images. The sudden rises in Loss_G
during these epochs may reflect the Generator failing to adapt quickly to the Discriminator's
changing expectations Figure 7.

Figure 7. Discriminator Loss curve in GAN


Notably, as the epochs advance into the later phases, the Discriminator loss appears to stabilize,
settling around lower values, suggesting that the Discriminator has likely acquired a good
strategy for discriminating between real and simulated data. Loss_G, however, still exhibits
significant oscillations, sometimes dipping to lower levels but with occasional peaks where the
Generator is likely struggling to generate data that the Discriminator can easily misclassify
Figure 8.

Figure 8. Generator Loss curve in GAN

In general, the GAN followed a typical adversarial training process, in which both networks
learn and improve but struggle to stabilize and converge due to the intrinsic instability of GAN
training. The fluctuations in the loss values of both the Discriminator and Generator are
characteristic of GANs, where both networks engage in a continuous game of outdoing one
another, with neither fully converging in this instance.


5. Novelty of Methodology:


The methodological decisions in the present research balanced performance, scalability, and
usability in real-life malaria diagnosis settings. MobileNetV2 was selected for its lightweight
network architecture, which provides high prediction accuracy and enables deployment on
mobile devices and edge computers, ensuring that AI-aided diagnosis extends to low-resource
environments. Although other CNN architectures such as ResNet and DenseNet yield deeper
feature extraction, their energy demands and computational costs render them impractical for
deployment in the field. GANs were utilized to generate synthetic blood smear images to offset
data paucity and bias, guaranteeing a diverse and adequately represented training dataset.
Traditional data augmentation techniques, though powerful, cannot generate entirely new image
variations the way GANs can, which makes generative models well suited for enhancing dataset
variability. A binary classification model was selected to provide a straightforward and
interpretable approach to detecting malaria, which is critical for practical clinical deployment.
In addition, asynchronous training with mixed precision optimization was employed to
maximize efficiency and minimize memory usage, making deep learning more viable on
low-power computing devices. Through the integration of these models and procedures, this
study delivers an optimized, scalable, and clinically relevant AI-based malaria diagnostic system
that is both accurate and feasible for real-world healthcare use.

6. Limitations and Future Work:


Although the performance shown by AI-based malaria diagnosis has been encouraging, several
challenges need to be overcome before real-world deployment. One major limitation is the
dependence on high-quality annotated data, because differences in staining methods, imaging
devices, and local parasite strains can cause variations in model behavior. In addition, the
generalizability of AI models remains in question, since models trained on particular datasets
may lose accuracy when applied to other demographic groups or clinical environments. Another
challenge is the interpretability of deep learning models, as the black-box nature of AI
decision-making raises concerns for clinical trust and regulatory approval. Also, although
generative models greatly improve training data availability, the realism and biological fidelity
of generated images need to be stringently verified to avoid introducing implicit biases into
diagnosis. Future work should develop more interpretable AI models and incorporate
explainability methods to enhance clinical adoption. Further investigation of hybrid AI
strategies, fusing generative models with conventional feature-based models, could lead to more
thorough diagnostic solutions. In addition, incorporating AI-assisted diagnostic tools into mobile
applications and cloud platforms may enable real-time, remote diagnosis in low-resource
settings. Solving these issues will be key to progressing AI-based malaria diagnostics toward
clinical-scale implementation.

7. Conclusion:

The application of AI and deep learning to malaria diagnosis is a transformative development in
the detection of infectious disease, bringing greater accuracy, efficiency, and accessibility. This
chapter identifies the promise of generative AI models, specifically GANs, in complementing
training datasets, enhancing classification performance, and overcoming the problems of data
scarcity. Experimental findings highlight the effectiveness of deep learning models, including
MobileNetV2, in separating malaria-infected from uninfected blood smear images with high
accuracy. Although AI-based diagnostics are extremely promising, real-world implementation
requires addressing issues concerning dataset variability, model generalization, and clinical
validation. It is critical that robust and interpretable AI models be developed to build trust
among healthcare professionals and regulatory agencies. Future studies should investigate
hybrid AI approaches, multi-modal diagnostic platforms, and real-time deployment strategies to
maximize malaria detection across various clinical environments. By tapping into the strengths
of AI, especially in areas with scarce healthcare resources, this technology has the potential to
be a key driver of malaria eradication and global public health improvement.

References:
[1]​ “Malaria.” Accessed: Mar. 04, 2025. [Online]. Available:
https://www.who.int/news-room/fact-sheets/detail/malaria
[2]​ “World Malaria Report 2022,” 2022.
[3]​ W. Siłka, M. Wieczorek, J. Siłka, and M. Woźniak, “Malaria Detection Using Advanced Deep Learning
Architecture,” Sensors, vol. 23, no. 3, Feb. 2023, doi: 10.3390/S23031501.
[4]​ Y. M. Kassim, F. Yang, H. Yu, R. J. Maude, and S. Jaeger, “Diagnosing malaria patients with plasmodium
falciparum and vivax using deep learning for thick smear images,” Diagnostics, vol. 11, no. 11, Nov.
2021, doi: 10.3390/DIAGNOSTICS11111994.


[5]​ S. K. Jo, H. S. Kim, S. W. Cho, and S. H. Seo, “Pathogenesis and inflammatory responses of swine H1N2
influenza viruses in pigs,” Virus Res, vol. 129, no. 1, pp. 64–70, Oct. 2007, doi:
10.1016/J.VIRUSRES.2007.05.005.
[6]​ K. Hemachandran et al., “Performance Analysis of Deep Learning Algorithms in Diagnosis of Malaria
Disease,” Diagnostics, vol. 13, no. 3, Feb. 2023, doi: 10.3390/DIAGNOSTICS13030534.
[7]​ T. Go, J. H. Kim, H. Byeon, and S. J. Lee, “Machine learning-based in-line holographic sensing of
unstained malaria-infected red blood cells,” J. Biophoton., vol. 11, no. 9, p. e201800101, Sep. 2018, doi:
10.1002/jbio.201800101.
[8]​ S. Rajaraman, S. Jaeger, and S. K. Antani, “Performance evaluation of deep neural ensembles toward
malaria parasite detection in thin-blood smear images,” PeerJ, vol. 7, p. e6977, May 2019, doi:
10.7717/peerj.6977.
[9]​ G. Madhu, A. W. Mohamed, S. Kautish, M. A. Shah, and I. Ali, “Intelligent diagnostic model for malaria
parasite detection and classification using imperative inception-based capsule neural networks,” Sci. Rep.,
vol. 13, no. 1, p. 13377, Dec. 2023, doi: 10.1038/s41598-023-40317-z.
[10]​ D. A. Ramos-Briceño, A. Flammia-D’Aleo, G. Fernández-López, F. S. Carrión-Nessi, and D. A.
Forero-Peña, “Deep learning-based malaria parasite detection: convolutional neural networks model for
accurate species identification of Plasmodium falciparum and Plasmodium vivax,” Scientific Reports
2025 15:1, vol. 15, no. 1, pp. 1–11, Jan. 2025, doi: 10.1038/s41598-025-87979-5.
[11]​ W. Siłka, M. Wieczorek, J. Siłka, and M. Woźniak, “Malaria detection using advanced deep learning
architecture,” Sensors, vol. 23, no. 3, p. 1501, 2023, doi: 10.3390/s23031501.
[12]​ K. Hemachandran et al., “Performance analysis of deep learning algorithms in diagnosis of malaria
disease,” Diagnostics, vol. 13, no. 3, 2023, doi: 10.3390/diagnostics13030534.
[13]​ M. Mujahid et al., “Efficient deep learning-based approach for malaria detection using red blood cell
smears,” Scientific Reports 2024 14:1, vol. 14, no. 1, pp. 1–16, Jun. 2024, doi:
10.1038/s41598-024-63831-0.
[14]​ K. Hoyos and W. Hoyos, “Supporting Malaria Diagnosis Using Deep Learning and Data Augmentation,”
Diagnostics, vol. 14, no. 7, p. 690, Apr. 2024, doi: 10.3390/DIAGNOSTICS14070690.
[15]​ R. Liu et al., “AIDMAN: An AI-based object detection system for malaria diagnosis from smartphone
thin-blood-smear images,” Patterns, vol. 4, no. 9, p. 100806, Sep. 2023, doi:
10.1016/J.PATTER.2023.100806.
[16]​ D. A. Ramos-Briceño, A. Flammia-D’Aleo, G. Fernández-López, F. S. Carrión-Nessi, and D. A.
Forero-Peña, “Deep learning-based malaria parasite detection: convolutional neural networks model for
accurate species identification of Plasmodium falciparum and Plasmodium vivax,” Scientific Reports
2025 15:1, vol. 15, no. 1, pp. 1–11, Jan. 2025, doi: 10.1038/s41598-025-87979-5.
[17]​ C. R. Maturana et al., “iMAGING: a novel automated system for malaria diagnosis by using artificial
intelligence tools and a universal low-cost robotized microscope,” Front Microbiol, vol. 14, p. 1240936,
Nov. 2023, doi: 10.3389/FMICB.2023.1240936/BIBTEX.
[18]​ W. Siłka, M. Wieczorek, J. Siłka, and M. Woźniak, “Malaria Detection Using Advanced Deep Learning
Architecture,” Sensors 2023, Vol. 23, Page 1501, vol. 23, no. 3, p. 1501, Jan. 2023, doi:
10.3390/S23031501.
[19]​ P. A. Barracloug et al., “Artificial Intelligence System for Malaria Diagnosis,” IJACSA) International
Journal of Advanced Computer Science and Applications, vol. 15, no. 3, 2024, Accessed: Mar. 02, 2025.
[Online]. Available: www.ijacsa.thesai.org
[20]​ F. Grignaffini, P. Simeoni, A. Alisi, and F. Frezza, “Computer-Aided Diagnosis Systems for Automatic
Malaria Parasite Detection and Classification: A Systematic Review,” Electronics 2024, Vol. 13, Page
3174, vol. 13, no. 16, p. 3174, Aug. 2024, doi: 10.3390/ELECTRONICS13163174.
[21]​ H. M. Asif, S. H. Khan, T. J. Alahmadi, T. Alsahfi, and A. Mahmoud, “Malaria parasitic detection using a
new Deep Boosted and Ensemble Learning framework,” Complex and Intelligent Systems, vol. 10, no. 4,
pp. 4835–4851, Aug. 2024, doi: 10.1007/S40747-024-01406-2/FIGURES/8.
[22]​ S. Nema, M. Rahi, A. Sharma, and P. K. Bharti, “Strengthening malaria microscopy using artificial
intelligence-based approaches in India,” The Lancet Regional Health - Southeast Asia, vol. 5, Oct. 2022,


doi: 10.1016/j.lansea.2022.100054.
[23]​ F. Abdurahman, K. A. Fante, and M. Aliy, “Malaria parasite detection in thick blood smear microscopic
images using modified YOLOV3 and YOLOV4 models,” BMC Bioinform., vol. 22, no. 1, p. 112, Dec.
2021, doi: 10.1186/s12859-021-04036-4.
[24]​ M. F. Ahamed et al., “Improving Malaria diagnosis through interpretable customized CNNs
architectures,” Sci Rep, vol. 15, no. 1, Dec. 2025, doi: 10.1038/S41598-025-90851-1.
[25]​ F. Yang et al., “Deep learning for smartphone-based malaria parasite detection in thick blood smears,”
IEEE J. Biomed. Health Inf., vol. 24, no. 5, pp. 1427–1438, May 2020, doi: 10.1109/jbhi.2019.2939121.
[26]​ M. O. F. Goni et al., “Diagnosis of malaria using double hidden layer extreme learning machine algorithm
with CNN feature extraction and parasite inflator,” IEEE Access, vol. 11, pp. 4117–4130, 2023, doi:
10.1109/access.2023.3234279.
[27]​ S. Rajaraman et al., “Pre-trained convolutional neural networks as feature extractors toward improved
malaria parasite detection in thin blood smear images,” PeerJ, vol. 6, no. 4, p. e4568, 2018, doi:
10.7717/peerj.4568.
[28]​ P. A. Barracloug et al., “Artificial intelligence system for malaria diagnosis,” IJACSA, vol. 15, no. 3, pp.
920–932, 2024, doi: 10.14569/ijacsa.2024.0150392.
[29]​ M. Mujahid et al., “Efficient deep learning-based approach for malaria detection using red blood cell
smears,” Sci Rep, vol. 14, no. 1, Dec. 2024, doi: 10.1038/S41598-024-63831-0.
[30]​ D. Uzun Ozsahin, B. B. Duwa, I. Ozsahin, and B. Uzun, “Quantitative Forecasting of Malaria Parasite
Using Machine Learning Models: MLR, ANN, ANFIS and Random Forest,” Diagnostics, vol. 14, no. 4,
Feb. 2024, doi: 10.3390/DIAGNOSTICS14040385.
