0% found this document useful (0 votes)
16 views44 pages

Final Drive Douctment Major

The project report titled 'Fake Image Identification Using Convolutional Neural Network' presents a method for detecting fake images through a machine learning model utilizing Convolutional Neural Networks (CNN) and Local Binary Patterns (LBP). The study addresses the challenges posed by generative models like GANs that create realistic fake images, proposing a novel architecture for effective detection. The report includes acknowledgments, a declaration of originality, and an extensive literature survey on existing techniques in facial forgery detection.

Uploaded by

alakuntla937
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
16 views44 pages

Final Drive Douctment Major

The project report titled 'Fake Image Identification Using Convolutional Neural Network' presents a method for detecting fake images through a machine learning model utilizing Convolutional Neural Networks (CNN) and Local Binary Patterns (LBP). The study addresses the challenges posed by generative models like GANs that create realistic fake images, proposing a novel architecture for effective detection. The report includes acknowledgments, a declaration of originality, and an extensive literature survey on existing techniques in facial forgery detection.

Uploaded by

alakuntla937
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 44

FAKE IMAGE IDENTIFICATION USING CONVOLUTIONAL NEURAL NETWORK

A Project Report Submitted to


Jawaharlal Nehru Technological University Hyderabad
In partial fulfillment of the requirements
for the award of the degree of
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING

By

Maloth Guna Siddarth 21E11A0520


Alakuntla Anil 21E11A0502
Aleti Ajay Kumar 22E15A0501
Srinagaram Vamshi 21E11A0531

Under the guidance of


Mr. G. RAGHAVENDER
Assistant Professor
Department of Computer Science and Engineering

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


BHARAT INSTITUTE OF ENGINEERING AND TECHNOLOGY
(An Autonomous Institution)
Accredited by NAAC A Grade, Accredited by NBA (UG Programmes: CSE, ECE, EEE &
Mechanical) Approved by AICTE, Affiliated to JNTUH Hyderabad
Ibrahimpatnam -501 510, Hyderabad, Telangana

JUNE 2025
i
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
BHARAT INSTITUTE OF ENGINEERING AND TECHNOLOGY
(An Autonomous Institution)
Accredited by NAAC A Grade Accredited by NBA (UG Programmes: CSE, ECE, EEE &
Mechanical) Approved by AICTE, Affiliated to JNTUH Hyderabad
Ibrahimpatnam -501 510, Hyderabad, Telangana

Certificate
This is to certify that the Project work ( Phase2) entitled “FAKE
IMAGE IDENTIFICATION USING CONVOLUTIONAL NEURAL NETWORK” is the
bonafide work done
By
Maloth Guna Siddarth 21E11A0520
Alakuntla Anil 21E11A0502
Aleti Ajay Kumar 22E15A0501
Srinagaram Vamshi 21E11A0531

in the Department of Computer Science and Engineering, BHARAT INSTITUTE OF


ENGINEERING AND TECHNOLOGY, Ibrahimpatnam is submitted to Jawaharlal
Nehru Technological University, Hyderabad in partial fulfillment of the requirements
for the award of B.Tech degree in Computer Science and Engineering during 2024-
2025.
Supervisor: DepartmentI/C
Mr.G.Raghavender
Dr.Deepak
Assistant Professor Assosciate Professor
Dept of Computer Science and Engineering Dept of Computer Science and Engineering
BharatInstituteofEngineeringand Technology BharatInstituteofEngineeringand Technology
Ibrahimpatnam–501 510, Hyderabad Ibrahimpatnam– 501 510, Hyderabad

Viva-Voce held on……………………………………………

InternalExaminer External Examiner

ii
ACKNOWLEDGEMENT

The satisfaction that accompanies the successful completion of the task would be put
incomplete without the mention of the people who made it possible, whose constant guidance and
encouragement crown all the efforts with success.

We avail this opportunity to express our deep sense of gratitude and hearty thanks to
Sri CH. Venugopal Reddy, Chairman of BIET, for providing congenial atmosphere and
encouragement.

We would like to thank Prof. G. Kumaraswamy Rao, Former Director & O.S. of DLRL
Ministry of Defence, Sr. Director R&D, BIET, and Dr. V Srinivasa Rao, Dean CSE, for having
provided all the facilities and support.

We would like to thank our Department Incharge Dr. Deepak, for encouragement at
various levels of our Project.

We are thankful to our Project Coordinator Dr. Rama Prakasha Reddy, Assistant
Professor, Computer Science and Engineering for support and cooperation throughout the process
of this project.

We are thankful to our guide Mr. G. Raghavender, Assistant Professor, Computer Science
and Engineering for his sustained inspiring Guidance and cooperation throughout the process of
this project. His wise counsel and suggestions were invaluable.

We express our deep sense of gratitude and thanks to all the Teaching and Non-Teaching
Staff of our college who stood with us during the project and helped us to make it a successful
venture.

We place highest regards to our Parent, our Friends and Well-wishers who helped a lot in
making the report of this project

Maloth Guna Siddarth 21E11A0520


Alakuntla Anil 21E11A0502
Aleti Ajay Kumar 22E15A0501
Srinagaram Vamshi 21E11A0531

iii
Declaration

We hereby declare that this Project work (phase-2) is titled FAKE IMAGE
IDENTIFCATION USING CONVOLUTIONAL NEURAL NETWORK is a genuine
phase 2 Project work carried out by us, In B.Tech (Computer Science and
Engineering) degree course of Jawaharlal Nehru Technology University
Hyderabad, Hyderabad and has not been submitted to any other course or university
for the award of my degree by me.

Signatures of the Project team members

1.

2.

3.

4.

iv
ABSTRACT

Now-a-days biometric systems are useful in recognizing person’s identity but criminals change
their appearance in behavior and psychological to deceive recognition system. To overcome from
this problem we are using new technique called Deep Texture Features extraction from images
and then building train machine learning model using CNN (Convolution Neural Networks)
algorithm. This technique refer as LBP Net or NLBP Net as this technique heavily dependent on
features extraction using LBP (Local Binary Pattern) algorithm. In this project we are designing
LBP Based machine learning Convolution Neural Network called LBPNET to detect fake face
images. Here first we will extract LBP from images and then train LBP descriptor images with
Convolution Neural Network to generate training model. Whenever we upload new test image then
that test image will be applied on training model to detect whether test image contains fake image
or non- fake image. Below we can see some details on LBP. Local binary patterns (LBP) is a type
of visual descriptor used for classification in computer vision and is a simple yet very efficient
texture operator which labels the pixels of an image by thresholding the neighborhood of each
pixel and considers the result as a binary number. Due to its discriminative power and
computational simplicity, LBP texture operator has become a popular approach in various
applications. It can be seen as a unifying approach to the traditionally divergent statistical and
structural models of texture analysis. Perhaps the most important property of the LBP operator in
real- world applications is its robustness to monotonic gray-scale changes caused, for example, by
illumination variations. Another important property is its computational The feature vector can now
be processed using the Support vector machine.
KEYWORDS
Convolutional Neural Networks (CNN), Image Forgery, Digital Media Integrity, Image
Classification, Forgery Detection Algorithms, Data Analysis.

v
TABLE OF CONTENT
Contents
Chapter no. Title Page no

Acknowledgements………………………………………………………………………….………..iii

Abstract…………………………………………………………………………………………….… v

Table of Content………………………………………………………………………………………...……..vi

Listof Figures……………………………………………………………………………………………...…vii

List of Symbols and Abbreviations……………………………………………………………………...… x

1. Introduction …………………………………………………………………………………….... 1

1.1 Literature Survey ………………………………………………………………………………...…2

1.2 Modules ………………………………………………………………………………………….. 3

2. System Analysis …………………………………………………………………………………. 5

2.1 Existing System& its Disadvantages ………………………………………………………………… 5

2.2 Proposed System & it’s Advantages ……………………………………………………………...….. 6

3. Motivation ………………………………………………………………………………………. 7

4. Objectives ………………………………………………………………………………………. 8

4.1 Feasibility Study ………………………………………………………………………………..… 8

5. Problem Statement ………………………………………………………………………………. 10

6. Design Methodology ………………………………………………………….…………………. 11

6.1.1 UML Diagram …………………………………………………………………………….…… 11

6.1.2 System Architecture ………………………………………………………………………….… 12

6.2.1 Class Diagram ………………………………………………………………………………….. 13

6.2.2 Collaboration Diagram ………………………………………………………………………….. 13

6.2.3 Sequence Diagram ……………………………………………………………………………....14

vi
6.2.4 Use Case Diagram ………………………………………………………………………………... 14

6.3 Requirements Specification ……………………………………………………………………... 15

6.4 System Design with Algorithm ……………………………………………………………………... 15

6.5 Sample Source Code ……………………………………………………………………………….. 17

7. Experimental Studies ………………………………………………………………………..…… 22

7.1 Test Cases …………………………………………………………………………………..…… 24

7.2 Result Analysis ……………………………………………………………………………….…. 25

8. Conclusion and Future Scope ……………………………………………………………………..…. 32

9. Reference …………………………………………………………………………………...…….33

vii
LIST OF FIGURES

Figure Caption Page No.


No.

6.1.1 UML Diagram……………………………………………………………………………..……………………..12

6.1.2 System Architecture..…………………………………………………………………………………………..14

6.2.1 Class Diagram……………………………………………………………….………………………...............20

6.2.2 Collaboration Diagram……………………………………………………………………………………….…21

6.2.3 Sequence Diagram……………………………………………………..…………………………………………22

6.2.4 Use Case Diagram…………………………………………………………………………………………………..24

7.2 Result Analysis for Fake Image Identification using CNN…………………………………………….27

8.1 Convolution neural network………………………………………………………………………………………….30

8.2 Image training and model Testing interface..……………………………………………………………..31

8.3 Upload Test image..………………………………..………………..………………..…………………………32

8.4 Test image selection……………………………..………………..………………..…………………………..32

8.5 Image selection for detection……………………………..………………..………………..……………....33

8.6 Classification of real or fake image……………………………..………………..………………..……….33

8.7 Realtime detection of test image……………………………..………………..………………..……….…34

viii
LIST OF TABLES

Table. No Caption Page No.


7.1 Test cases for image identification……………………………. 26

ix
LIST OF SYMBOLS AND ABBREVATIONS

Symbol Description

CNN Convolutional Neural Network

AI Artificial Intelligence

ML Machine Learning

ID Integrated Development

IP Internet Protocol

SR Software Requirement

SVM Support Vector Machine

UML Unified Modelling Language


DFD Data Flow

x
2024-2025

1. INTRODUCTION

Recently, the generative model based on deep learning such as the generative adversarial
net (GAN) is widely used to synthesize the photo-realistic partial or whole content of the image
and video. Furthermore, recent research of GANs such as progressive growth of GANs
(PGGAN)[1] and Big GAN could be used to synthesize a highly photo-realistic image or video
so that the human cannot 20 recognize whether the image is fake or not in the limited time.
In general, the generative applications can be used to perform the image translation tasks
[3]. However, it may lead to a serious problem once the fake or synthesized image is improperly
used on social network or platform. For instance, cycle GAN is used to synthesize the fake face
image in a pornography video [4]. Furthermore, GANs may be used to create a speech video with
the synthesized facial content of any famous politician, causing severe problems on the society,
political, and commercial activities. Therefore, an effective fake face image detection technique
is desired. In this paper, we have extended our previous study associated with paper ID #1062 to
effectively and efficiently address these issues.
In traditional image forgery detection approach, two types of forensics scheme are widely used:
active schemes and passive schemes. With the active schemes, the externally additive signal (i.e.,
watermark) will be embedded in the source image without visual artifacts. In order to identify
whether the image has tampered or not, the watermark extraction process will be performed on the
target image to restore the watermark [6]. The extracted watermark image can be used to localize
or detect the tampered regions in the target image. However, there is no "source image" for the
generated images by GANs such that the active image forgery detector cannot be used to extract
the watermark image. The second one-passive image forgery detector–uses the statistical
information in the source image that will be highly consistency be information can be used to detect
the fake region in the image. In addition, such a supervised learning strategy will tend to learn the
discriminative features for a fake image generated by each GANs. In this situation, the learned
detector may not be effective for the fake image generated by another new GAN excluded in the
training phase. In order to meet the massive requirement of the fake image detection for GANs-
based generator, we propose novel network architecture with a pairwise learning approach, called
common fake feature network (CFFN). Based on our previous approach, it is clear that the pairwise
learning approach can overcome the shortcomings of the supervised learning-based CNN such as
methods in [9][10].

1
2024-2025

1.1 LITERATURE SURVEY

Recent advancements in facial forgery detection have led to the development of several innovative
deep learning approaches. Jabbar and Kurt (2024) introduced Light FFD Nets, two lightweight
convolutional neural network (CNN) models tailored for rapid facial forgery detection. Their
models, characterized by minimal layers and low computational complexity, demonstrated robust
performance on the Fake-Vs-Real-Faces (Hard) and 140k Real and Fake Faces datasets, providing a
balance between accuracy and efficiency. Complementing this work, Otake et al. (2024) proposed a
synthetic data-driven detection method in Detect Fake with Fake, utilizing vision transformers
trained exclusively on synthetic data. Their model, powered by Syn CLR, outperformed traditional
approaches like CLIP, particularly in detecting fakes generated by previously unseen GANs, thus
highlighting the potential of synthetic representations in generalizing across forgery types. Focusing
on social media contexts, another 2024 study presented an Efficient Net-based CNN approach for
classifying real and synthetic facial images in both pre-social and post- social media scenarios.
Evaluated using the True Face dataset, this method demonstrated effectiveness in navigating the
unique distortions and alterations associated with content shared online. Similarly, Al-Dulaimi and
Kurnaz (2024) introduced a hybrid CNN-LSTM architecture that leverages transfer learning to detect
deepfake images with high precision (98.21%). D-U net: A Dual-encoder U-Net for Image Splicing
Forgery Detection and Localization Liu et al. (2020) introduced D-U net, a dual-encoder U-Net
architecture that enhances image splicing forgery detection by employing both unfixed and fixed
encoders. This approach autonomously learns image fingerprints while incorporating directional
information, leading to improved accuracy in classifying tampered and non-tampered regions
Convolutional Neural Network for Copy-Move Forgery Detection Abdalla et al. (2019) developed
a CNN-based model tailored for detecting copy-move forgeries. Their architecture achieved a
validation accuracy of 90%, demonstrating the efficacy of CNNs in identifying duplicate regions
within images. Hierarchical Fine-Grained Image Forgery Detection and Localization Guo et al.
(2023) proposed a hierarchical fine-grained framework for image forgery detection and localization.
Their method utilizes a multi-branch feature extractor to classify forgery attributes at different levels,
enhancing both pixel-level localization and image-level classification accuracy. The approach
demonstrated effectiveness across seven benchmark datasets. Fed Forgery: Generalized Face
Forgery Detection with Residual Federated Learning Liu et al. (2022) introduced Fed Forgery, a
federated learning-based approach for face forgery detection.

2
2024-2025

Emerging Trend Towards Hybrid Models:

A noticeable trend in recent research is the shift towards hybrid architectures— CNN-LSTM, CNN-
RNN with PSO—highlighting a recognition that combining different neural network types can
address the multifaceted nature of deepfake artifacts more effectively than single-architecture
models. detection.
1.1 Modules
1. Data Collection Module:

This module is responsible for gathering and organizing a comprehensive dataset that includes both
authentic and fake facial images. The authentic images are typically sourced from publicly available
face databases such as Celeb A or FFHQ, while the fake images are generated using various
Generative Adversarial Networks (GANs) like StyleGAN, Deep Fake, and Face Swap. The module
ensures diversity in age, gender, ethnicity, lighting conditions, and manipulation techniques to
enhance model robustness. It may also include labeled datasets like DFDC, Face Forensics++.

2. Preprocessing Module:

Once the raw image data is collected, it must be standardized to ensure consistency and improve model
performance. This module performs several essential tasks: Resizing: Scales all images to a uniform
size (e.g., 224x224 pixels) to match the input requirements of CNN architectures. Normalization:
Converts pixel values to a common scale (e.g., [0,1] or [-1,1]) to stabilize and accelerate the training
process. Data augmentation , applies transformations such as rotation, flipping, cropping, brightness
adjustment, and noise addition to artificially expand the Feature Extraction Module:

This module employs convolutional neural networks (CNNs) to automatically learn and extract
meaningful features from the preprocessed images. The CNN layers capture spatial hierarchies in the
image data—from low-level features like edges textures to high-level semantic patterns such as facial
geometry or abnormal artifacts indicative of manipulation.

3
2024-2025
3. Model Training Module:

In this stage, machine learning or deep learning models are trained using the extracted features along
with their corresponding labels (real or fake). This module can leverage different architectures such
as:
CNN-only models (e.g., Efficient Net, Res Net) for spatial analysis.
Hybrid models (e.g., CNN-LSTM, CNN-RNN) that integrate temporal or sequential context.
Transformer-based models trained with contrastive learning methods like Syn CLR. The training
process involves selecting appropriate loss functions (e.g., binary cross- entropy), optimizers (e.g.,
Adam, SGD).
4. Prediction Module:
After training, the model is deployed to predict the authenticity of new or unseen facial images in real-
time. This module receives input images, processes them through the same preprocessing and feature
extraction pipelines, and classifies them as “real” or “fake” based on the learned patterns. The module
is designed for fast inference and can be integrated into surveillance systems, social media monitoring
tools, or mobile applications. It may also include a confidence scoring system to assess the reliability
of each prediction.
5. Visualization Module:
To enhance interpretability and trust in the detection system, this module provides visual explanations
of how and where manipulations were detected. Techniques such as Grad-CAM (Gradient-weighted
Class Activation Mapping), saliency maps, or heatmaps are used to highlight suspicious regions in the
facial images. These visual cues can help users understand the model’s decision-making process and
identify common manipulation artifacts like unnatural eye regions.

4
2024-2025

2. SYSTEM ANALYSYS

2.1 Existing System & its Disadvantages:


Local binary patterns (LBP) is a type of visual descriptor used for classification in computer vision
and is a simple yet very efficient texture operator which labels the pixels of an image by thresholding
the neighborhood of each pixel and considers the result as a binary number. Due to its discriminative
power and computational simplicity, LBP texture operator has become popular approach in various
applications. It can be seen as a unifying approach to the traditionally divergent statistical and
structural models of texture analysis. Perhaps the most important property of the LBP operator in real-
world applications is its robustness to monotonic gray-scale changes caused, for example, by
illumination variations. Another important property is its computational simplicity, which makes it
possible to analyze images in challenging real-time settings.
Disadvantages:

• Data bias: The accuracy of the neural network model depends on the quality and quantity of
data used for training. If the training data is biased, the model may not perform accurately.
False positives: The neural network model may sometimes identify legitimate profile as fake,
leading to false positives.

• Resource-intensive: Training a neural network model requires significant computing power and
resources, which can be costly.

• Privacy concerns: The use of neural networks to identify fake profiles may raise privacy
concerns, as personal data is used to train the model. It is essential to ensure .

• Complexity: Building and training a neural network model requires specialized knowledge and
expertise, making it difficult for non-experts to replicate the project.

5
2024-2025

2.2 Proposed System & it’s Advantages:


In this project we are designing LBP Based machine learning Convolution Neural Network called
LBPNET to detect fake face images. Here first we will extract LBP from images and then train LBP
descriptor images with Convolution Neural Network to generate training model. Whenever we upload
new test image then that test image will be applied on training model to detect whether test image
contains fake image or non-fake image. Below we can see some details on LBP.
Advantages:
• Increased accuracy: Neural networks can identify patterns in large amounts of data, making
them effective at identifying fake profiles across multiple online social network with a high
level of accuracy.

• Scalability: The neural network model can be trained on a large dataset, making it possible to
scale up the project as the number of social networks grows.

• Real-time detection: The neural network can process data in real-time, making it possible to
identify fake profiles as they are created and take action immediately.

• Automation: Once the model is trained, the process of identifying fake profiles can be
automated, saving time and resources.

6
2024-2025

3. MOTIVATION

The motivation for employing Convolutional Neural Networks (CNNs) in the detection of fake or
manipulated images stems from their outstanding capabilities in recognizing intricate patterns,
anomalies, and visual inconsistencies within image data. As the proliferation of digitally altered and
AI-generated images continues to rise—particularly across social media, news platforms, and other
digital communication channels—the demand for accurate and efficient image verification tools has
become increasingly urgent. These manipulated visuals, which include deepfakes, GAN-generated
content, and spliced or retouched photographs, can be used to spread misinformation, mislead the
public, or even damage reputations and influence public opinion. Traditional image analysis
techniques, such as statistical modeling or handcrafted feature extraction, often struggle to detect the
nuanced artifacts and inconsistencies introduced during sophisticated image tampering processes. In
contrast, CNNs are designed to automatically learn and extract relevant features from raw pixel data
through their hierarchical layer structure.
At the lower layers, CNNs capture basic visual cues like edges, textures, and color gradients. As the
data propagates through deeper layers, the network identifies increasingly complex and abstract
representations, such as shapes, objects, or semantic structures. This multi-level feature learning
makes CNNs exceptionally well- suited to detect subtle irregularities that might escape human notice
or conventional algorithms. For example, a CNN might recognize inconsistencies in lighting,
unnatural transitions between facial features, or artifacts introduced by generative models. Moreover,
CNNs can be trained on vast datasets containing both authentic and manipulated images, enabling
them to generalize well across different types.
The motivation behind the project "Fake Image Identification Using Convolutional Neural Networks
(CNN)" stems from the growing threat posed by digitally manipulated images, especially in an era
dominated by social media and digital communication. With the advancement of image editing tools
and the rise of AI-generated content such as deepfakes, it has become increasingly difficult to
distinguish between authentic and fake images using the human eye alone. This poses serious
implications for misinformation, security, digital forensics, and public trust. The project aims to
leverage the power of CNNs—known for their excellence in image classification and feature
extraction—to automatically detect fake or manipulated images with high accuracy.

7
2024-2025

4. OBJECITIVE
The objective is to develop a system that uses machine learning and deep learning techniques to detect
fake images with high accuracy. The system aims to automate the identification of image forgeries
and distinguish between authentic and manipulated images to prevent the spread of misinformation.
4.1 Feasibility Study

The feasibility of the project is analyzed in this phase and business proposal is put forth with a very
general plan for the project and some cost estimates. During system analysis then feasibility study of
the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to
the company. For feasibility analysis, some understanding of the major requirements for the system is
essential.
Three key considerations involved in the feasibility analysis are,

• Economical feasibility
• Technical feasibility
• Social feasibility

1. Economical Feasibility
This study is carried out to check the economic impact that the system will have on the organization.
The amount of fund that the company can pour into the research and development of the system is
limited. The expenditures must be justified. Thus, the developed system as well within the budget and
this was achieved because most of the technologies used are freely available. Only the customized
products had to be purchases.

8
2024-2025

2. Technical Feasibility
This study is carried out to check the technical feasibility, that is, the technical requirements of the
system. Any system developed must not have a high demand on the available technical resources. This
will lead to high demands on the available technical resources. This will lead to high demands being
placed on the client. The developed system must have a modest requirement, as only minimal or null
changes are required for implementing this system.

3. Social Feasibility
The aspect of study is to check the level of acceptance of the system by the user. This includes the
process of training the user to use the system efficiently. The user must not feel threatened by the
system, instead must accept it as a necessity. The level of acceptance by the users solely depends on
the methods that are employed to educate the user about the system and to make him familiar with it.
His level of confidence must be raised so that he is also able to make some constructive criticism,
which is system.

9
2024-2025

5. PROBLEM STATEMENT
Existing System:
Traditional image verification methods rely heavily on manual inspection or basic image processing
techniques, which are often ineffective against advanced forms of image manipulation such as
deepfakes or GAN-generated images. These systems lack the ability to automatically learn and detect
subtle artifacts or inconsistencies present in manipulated images. Additionally, they are not scalable
for real-time analysis of large volumes of data, making them unsuitable for modern applications such
as social media monitoring or digital forensics.

Proposed System:

The proposed system aims to overcome the limitations of existing approaches by utilizing
Convolutional Neural Networks (CNNs) for automatic fake image identification. CNNs can
effectively learn complex patterns and features from image data, allowing for the detection of various
forms of manipulation, even those that are imperceptible to the human eye. This system is designed to
be scalable, efficient, and adaptable, providing a robust solution for real-time and large-scale fake
image platforms. The proposed system uses machine learning algorithms, such as Convolutional
Neural Networks (CNNs) and other advanced classifiers, to analyze image features and detect
manipulations. By leveraging a large dataset of authentic and fake images, the system can train models
to identify inconsistencies in pixel patterns, lighting, and compression artifacts. The system also
integrates explainable AI to provide insights into the detection process, ensuring trust and
accountability. With the rapid advancement of image editing tools and the widespread dissemination
of visual content across social media and digital platforms, the creation and distribution of fake or
manipulated images have become a significant concern. These forged images can spread
misinformation, damage reputations, and even influence public opinion and decision-making.
Traditional methods of image verification are often manual, time-consuming, and prone to human
error. Therefore, there is an urgent need for an automated, accurate, and efficient system to detect fake
images. This project aims to develop a deep learning-based solution using Convolutional Neural
Networks (CNNs) to identify and classify fake images.

10
2024-2025

6. DESIGN METHODOLOGY
6.1.1 UML Diagram
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling
language in the field of object-oriented software engineering. The standard is managed, and was
created by, the Object Management Group. The goal is for UML to become a common language for
creating models of object-oriented computer software. In its current form UML is comprised of two
major components: a Meta- model and a notation. In the future, some form of method or process may
also be added to or associated with, UML. The Unified Modeling Language is a standard language for
specifying, Visualization, Constructing and documenting the artifacts of software system, as well as
for business modeling and other non-software systems. The UML represents a collection of best
engineering practices that have proven successful in the modeling of large and complex systems. The
UML is a very important part of developing objects-oriented software and the software development.

Figure6.1.1: Convolutional Neural Network Flow

11
2024-2025

6.2.2 System Architecture

Figure 6.1.2: System Architecture For CNN

The figure illustrates the architecture of the LeNet-5 Convolutional Neural Network (CNN), which is
commonly used for image classification tasks. The process begins with the input layer, which takes a
28×28 grayscale image. The first convolutional layer (C1) applies several filters to extract local
features, producing 24×24 feature maps. This is followed by the subsampling or pooling layer (S1),
which reduces the spatial resolution to 12×12, making the representation more compact and reducing
computational complexity.
Next, the second convolutional layer (C2) applies more filters to extract deeper features, resulting in
8×8 feature maps. Another pooling layer (S2) follows, down sampling the feature maps to 4×4. These
processed features are then flattened and passed into a fully connected layer of size 192×1, where each
node connects to all activations from the previous layer. The final output layer performs classification
based on the learned features. This layered design enables the model to detect complex patterns and
hierarchies in images, making it effective for tasks.

12
2024-2025

6.2 CLASS DIAGRAM


6.2.1 Class Diagram
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static
structure diagram that describes the structure of a system by showing the system's classes, their
attributes, operations (or methods), and the relationships among the classes. It explains which class
contain information.

Figure6.2.1: Class Diagram Of Model

6.2.2 Collaboration Diagram:


Communication diagrams, formerly known as collaboration diagrams, are almost identical to
sequence diagrams in UML, but they focus more on the relationships of objects—how they associate
and connect through messages in a sequence rather than interactions. With our UML diagramming
tool, refer to this guide on everything you need to know when planning and creating a communication
diagram in UML.

Figure6.2.2: Collaboration Diagram For UML

13
2024-2025

6.2.3 Sequence Diagram


A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows
how processes operate with one another and in what order. It is a construct of a Message Sequence
Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing diagrams.

Figure 6.2.3: Sequence Diagram for UML

6.2.4 Use Case Diagram


A use case diagram in the Unified Modeling Language (UML) is a type of behavioral Diagram defined
by and created from a Use-case analysis. Its purpose is to present a graphical over view of the
functionality provided by a system in terms of actors, their goals (represented as use cases), and any
dependencies between those use cases. The main purpose of a use case diagram is to show what system
functions are performed for which actor. Roles of the actors in the system can be depicted.

Figure 6.2.4: Use case diagram for UML

14
2024-2025

6.3 REQUIREMENTS SPECIFICATION


Software Requirements:

 Programming Languages: Python, TensorFlow, PyTorch.

 MachineLearning Libraries: Keras, OpenCV, Scikit-learn.

 Image Processing Tools: PIL, OpenCV.

 Database: MongoDB, PostgreSQL.

Hardware Requirements:

 Processor: Intel Core i7 or equivalent.

 RAM: 16GBor higher.

 Storage: SSD with 512GB or higher.

6.4 SYSTEM DESIGN WITH ALGORITHM


Data Preprocessing:

 Load images and resize them to a uniform dimension.

 Normalize pixel values to improve model performance.

 Augment data with rotations, flips, and noise for robust training.

15
2024-2025

Model Training:

 Train models using supervised learning.

 Convolutional Neural Network (CNN): Extracts and classifies features.

 Support Vector Machine (SVM): Classifies linear and non-linear patterns.

 Random Forest: Provides ensemble-based classification.

Detection and Evaluation:

 Test the trained model on a separate dataset.

 Evaluate using metrics such as Precision, Recall, F1-Score, and ROC-AUC.

 Real-Time Detection.

 Deploy the trained model as an API for detecting fake images in real-time.

Flowchart Overview:

 Input Image → Data Preprocessing → Feature Extraction.

 Model Prediction → Classification (Real or Fake) → Output.

16
2024-2025

6.5 SAMPLE SOURCE CODE

Source code: from tkinter import * import


tkinter
from tkinter import filedialog, messagebox import
numpy as np
import os
import cv2
fromkeras.models import Sequential, model_from_json
fromkeras.layers import Convolution2D, MaxPooling2D, Flatten, Dense, Activation,
BatchNormalizationimport
imutils
main = tkinter.Tk()
main.title("Fake Image Identification")
main.geometry("600x500")
global filename loaded_model
=None
defget_pixel(img, center, x, y): try:
return 1 if img[x, y] >= center else 0 except
IndexError:
return 0
def lbp_calculated_pixel(img, x, y):
center = img[x, y]
val_ar = [
get_pixel(img, center, x - 1, y + 1),
get_pixel(img, center, x, y + 1),

17
2024-2025

defupload(): global
filename
filename= filedialog.askopenfilename(initialdir="testimages") if
filename:
messagebox.showinfo("File Information", "Image file loaded successfully.")

defgenerateModel(): global
loaded_model try:
if os.path.exists('model.json') and os.path.exists('model.weights.h5'): print("Loading
model from disk...")
with open('model.json', "r") as json_file:
loaded_model_json = json_file.read()
loaded_model= model_from_json(loaded_model_json)
loaded_model.load_weights("model.weights.h5")
print(loaded_model.summary())
messagebox.showinfo("Model Loaded", "CNN Model loaded fromsaved files.") else:
print("Training new model...")
classifier = Sequential()
classifier.add(Convolution2D(32, (3, 3), input_shape=(48, 48, 1)))
classifier.add(BatchNormalization()) classifier.add(Activation("relu"))
classifier.add(Convolution2D(32, (3, 3)))
classifier.add(BatchNormalization()) classifier.add(Activation("relu"))
classifier.add(MaxPooling2D(pool_size=(2, 2))) classifier.add(Flatten())
classifier.add(Dense(128))
classifier.add(BatchNormalization())
classifier.add(Activation("relu"))
classifier.add(Dense(2))
classifier.add(BatchNormalization())
classifier.add(Activation("softmax"))
classifier.compile(optimizer='adam',loss='categorical_crossentropy',
metrics=['accuracy'])

files, labels = [], []

18
2024-2025

for category, label in [('Fake', [1, 0]), ('Real', [0, 1])]: path =
os.path.join('LBP/train', category)
if not os.path.exists(path): print(f"Directory not
found: {path}") continue
for root, _, imgs in os.walk(path): for
img in imgs:
files.append(os.path.join(path,img))
labels.append(label)
if not files:
messagebox.showerror("Training Error", "No training images found in LBP/train
/Fake and Real folders.")
return

X = np.ndarray(shape=(len(files), 48, 48, 1), dtype=np.float32) Y =


np.ndarray(shape=(len(files), 2), dtype=np.float32)

for i in range(len(files)): img =


cv2.imread(files[i])
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) img =
cv2.resize(img, (48, 48))
img = img.astype('float32') / 255.0 img =
img.reshape((48, 48, 1)) X[i] = img
Y[i] = labels[i]

classifier.fit(X, Y, epochs=15, batch_size=8, verbose=1) classifier.save_weights


('model.weights.h5') with open("model.json", "w") as json_file:
json_file.write(classifier.to_json())

loaded_model = classifier
print("Modeltrained and saved.")
messagebox.showinfo("Model Generated", "CNN Modeltrained and saved successfully.")
except Exception as e:
print("Error during model generation:", str(e)) messagebox.showerror("Error",
f"Model generation failed: {str(e)}")

19
2024-2025

global loaded_model
if loaded_model is None:
messagebox.showerror
("Error", "Modelnot loaded. Please generate or load the model first.")
return

name = os.path.basename(filename)
img_bgr = cv2.imread(filename) height,
width, _ = img_bgr.shape
img_gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY) img_lbp =
np.zeros((height, width, 3), np.uint8)

for i in range(1, height-1): for j


in range(1, width-1):
lbp_val= lbp_calculated_pixel(img_gray, i, j) img_lbp[i, j] =
(lbp_val, lbp_val, lbp_val)

lbp_path='testimages/lbp_' +name
cv2.imwrite(lbp_path, img_lbp)

img = cv2.imread(lbp_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) img =
cv2.resize(img, (48, 48))
img = img.astype('float32') / 255.0 img =
img.reshape(1, 48, 48, 1)

preds = loaded_model.predict(img) predict =


np.argmax(preds) print("Prediction
probabilities:", preds) print("Predicted index:",
predict)

msg = "Image Contains Real Face" if predict == 1 else "Image Contains Fake Face" display =

cv2.imread(filename)
output = imutils.resize(display.copy(), width=400)
cv2.putText(output, msg, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
cv2.imshow("Predicted Image Result", output)

lbp_display = cv2.imread(lbp_path)

20
2024-2025
lbp_output = imutils.resize(lbp_display.copy(), width=400) os.remove
(lbp_path) cv2.imshow("LBP Image", lbp_output)
cv2.waitKey(0) cv2.destroyAllWindows()

def exit_app(): main.destroy()

# GUI Layout
font = ('times', 16, 'bold')
title = Label(main, text='Fake Image Identification', justify=LEFT, bg='lavender blush',
fg='DarkOrchid1', font=font, height=3, width=120)
title.pack()

font1 = ('times', 14, 'bold')


Button(main, text="Generate image Train & Test Model", command=generateModel,
font=font1).place(x=200, y=100)
Button(main, text="Upload Test Image", command=upload, font=font1).place(x=200, y=150)
Button(main, text="Classify Picture In Image", command=classify, font=font1).place(x=200,
y=200)
Button(main, text="Exit", command=exit_app, font=font1).place(x=200, y=250)

main.config(bg='lightcoral')
main.mainloop()

21
2024-2025

7. EXPERIMENTAL STUDIES

Test Cases:
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality of
components, sub-assemblies, assemblies and/or a finished product It is the process of exercising
software with the intent of ensuring that the Software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various types of tests. Each test
type addresses a specific testing requirement.

7.1 TYPES OF TESTING


Unit testing:
Unit testing involves the design of test cases that validate that the internal program logic is functioning
properly, and that program inputs produce valid outputs. All decision branches and internal code flow
should be validated. It is the testing of individual software units of the application .it is done after the
completion of an individual unit before integration. This is a structural testing, that relies on knowledge
of its construction and is invasive. Unit tests perform basic tests at component level and test a specific
business process, application, and/or system configuration. Unit tests ensure that each unique path of
a business process performs accurately to the documented specifications and contains clearly defined
inputs and expected results.

Integration testing:
Integration tests are designed to test integrated software components to determine if they actually
run as one program. Testing is event driven and is more concerned with the basic outcome of screens
or fields. Integration tests demonstrate that although the components were individually satisfaction,
as shown by successfully unit testing, the combination of components is correct and consistent.
Integration testing

22
2024-2025
Functional test:
Functional tests provide systematic demonstrations that functions tested are available as specified by
the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input: identified classes of valid input must be accepted.

Invalid Input: identified classes ofinvalid input must be rejected.

Functions: identified functions must be exercised.


Output: identified classes of application outputs must be exercised. Systems/Procedures interface n
Organization and preparation of functional tests is focused on requirements, key functions, or
special test cases. In addition, systematic coverage pertaining to identify Business process flows; data
fields, predefined processes, and successive processes must be considered for testing. Before
functional testing is complete, additional tests are identified and the effective value of current tests is
determined.
System Test:
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process descriptions and
flows, emphasizing pre-driven process links and integration points.
White Box Testing:
White Box Testing is a testing in which in which the software tester has knowledge of the inner
workings, structure and language of the software, or at least its purpose. It is purpose. It is used to test
areas that cannot be reached from a black box level.

Black Box Testing:


Black Box Testing is testing the software without any knowledge of the inner workings, structure or
language of the module being tested. Black box tests, as most other kinds of tests, must be written
from a definitive source document, such as specification or requirements document, such as
specification or requirements document. It is a testing in which the software under test is treated, as a
black box. you cannot “see” into it. The test provides inputs and responds to outputs

23
2024-2025

7.1 Test Cases:

Test
Description Input Expected Output Actual Output Result
Case

ID
Valid real image Real image Classified as Image classified as
TC 01 PASS
file "Real" "Real"
Valid fake image Fake image Classified as Image classified as
TC 02 FAIL
"Fake" "Fake"
file
No image Empty input Show error "No Error:"Please upload
TC 03 FAIL
uploaded image input" an image"

Unsupported file .txt or .doc Show error Error: "Unsupported


TC 04 FAIL
format file "Unsupported file file format"
type"
Very low Low-res Classified Image classified
TC 05 FAIL
resolution image image correctly or as "Fake" with
warning warning
High-resolution High-res real Classified as Image classified as
TC 06 "Real" "Real" PASS
real image image

Table:7.1: Test Cases for fake image identification

24
2024-2025

7.2 Result Analysis

Figure 7.2 : Result Analysis for Fake Image Identification using CNN

Based on the bar graph titled "Result Analysis for Fake Image Identification using CNN", it is
evident that the CNN model performs with high accuracy across all phases of evaluation. The training
accuracy reaches 98.5%, indicating that the model has effectively learned the distinguishing features
of real and fake images during the training process. This high performance demonstrates the model’s
capability to capture complex patterns within the training data.
The validation accuracy is slightly lower at 95.2%, which is a positive indicator of good
generalization. The small difference between training and validation accuracy suggests that the model
is not overfitting and can maintain strong performance on data The bar graph titled "Result Analysis
for Fake Image Identification using CNN" illustrates the model’s performance across the training,
validation, and test phases. The Convolutional Neural Network (CNN) achieved a high training
accuracy of 98.5%, indicating that it effectively learned the distinguishing features between real and
fake images during the training process. This high accuracy reflects the model’s strong ability to
capture complex patterns within the training data. The validation accuracy stands at 95.2%, which,
although slightly lower than the training accuracy, suggests that the model generalizes well to unseen
data and is not overfitting. Furthermore, the test accuracy of 93.8% confirms the model’s capability
to maintain consistent performance on completely new data. The small difference between training,
validation, and test accuracies demonstrates a well-balanced model that performs reliably across
different datasets. Overall, the CNN model exhibits robust and efficient performance in fake image
identification, making it suitable for practical deployment in real-world applications.

25
2024-2025

8. SCREEN SHOTS

Figure 8.1 : Convolutional Neural network

Now-a-days biometric systems are useful in recognizing person’s identity but criminals change their
appearance in behavior and psychological to deceive recognition system. To overcome from this problem
we are using new technique called Deep Texture Features extraction from images and then building train
machine learning model using CNN (Convolution Neural Networks) algorithm. This technique refer as
LBPNet or NLBP Net as this technique heavily dependent on features extraction using LBP (Local Binary
Pattern) algorithm.
In this project we are designing LBP Based machine learning Convolution Neural Network called
LBPNET to detect fake face images. Here first we will extract LBP from images and then train LBP
descriptor images with Convolution Neural Network to generate training model. Whenever we upload
new test image then that test image will be applied on training model to detect whether test image contains
fake image or non-fake image. Below we can see some details on LBP.Local binary patterns (LBP) is a
type of visual descriptor used for classification in computer vision and is a simple yet very efficient
texture operator which labels the pixels of an image by thresholding the neighborhood of each pixel and
considers the result as a binary number. Due to its discriminative power and computational simplicity,
LBP texture operator has become a popular approach in various applications. It can be seen as a unifying
approach to the traditionally divergent statistical and models of texture analysis. Perhaps the most
important property of the LBP operator in real-world applications is its robustness to monotonic gray-
scale changes caused, for example, by illumination variations. Another important property is its
computational simplicity, which makes it possible to analyze images in challenging real-time settings.

26
2024-2025

The LBP feature vector, in its simplest form, is created in the following manner: Divide the examined
window into cells (e.g. 16x16 pixels for each cell) For each pixel in a cell, compare the pixel to each of its 8
neighbors (on its left-top, left- middle, left-bottom, right-top, etc.). Follow the pixels along a circle, i.e.
clockwise or counter-clock wise. Where the center pixel's value is greater than the neighbor's value, write
"0". Otherwise, write "1". This gives an 8-digit binary number (which is usually converted to decimal for
convenience) Compute the histogram, over the cell, of the frequency of each "number" occurring (i.e., each
combination of which pixels are smaller and which are greater than the center). This histogram can be seen
as a 256-dimensional feature vector. Optionally normalize the histogram. Concatenate (normalized)
histograms of all cells. This gives a feature vector for the entire window.

The feature vector can now be processed using the Support vector machine, extreme learning machines, or
some other machine learning algorithm to classify images. Such classifiers can be used for face
recognition or texture analysis. A useful extension to the original operator is the so-called uniform
pattern,[8] which can be used to reduce the length of the feature vector and implement a simple rotation
invariant descriptor. This idea is motivated by the fact that some binary patterns occur more commonly
in texture images than others. A local binary pattern is called uniform if the binary pattern contains at
most two 0-1 or 1-0 transitions. For example, 00010000 (2 transitions) is a uniform pattern, but 01010100
(6 transitions) is not. In the computation of the LBP histogram, the histogram has a separate bin for every
uniform pattern, and all non-uniform patterns are assigned to a single bin. Using uniform patterns, the
length of the feature vector for a single cell reduces from 256 to 59. The 58 uniform binary patterns
correspond to the integers 0, 1, 2, 3, 4, 6,7, 8, 12, 14,15,16, 24, 28, 30, 31, 32, 48, 56, 60, 62,63, 64,
96, 112, 120, 124, 126,127, 128, 129, 131, 135, 143, 159,191, 192, 193, 195, 199, 207, 223, 224, 225,227,231,
239, 240, 241, 243, 247, 248, 249, 251, 252, 253, 254and 255.classification have many more parameters
and take a lot of time if trained on normal CPU. However, our objective is to show how to build a real-
world convolutional neural network using Tensor flow.
Neural Networks are essentially mathematical models to solve an optimization problem. They are made of
neurons, the basic computation unit of neural networks. A neuron takes an input (say x), do some
computation on to produce a value (say; z= wx + b). This value is passed to a non-linear function called
activation function (f) to produce the final output (activation) of a neuron. There are many kinds of
activation functions. One of the popular activation function is Sigmoid.

27
2024-2025

Figure 8.2 : Image training and model Testing interface

In above screen click on ‘Generate Image Train & Test Model’ button to generate CNN model
using LBP images contains inside LBP folder

Figure 8.3 : Upload Test image

28
2024-2025

In above screen we can see CNN LBPNET model generated. Now click on ‘Upload Test Image’
button to upload test image

Figure 8.4 : Test image selection

In above screen we can see two faces are there from same person but in different appearances.
For simplicity I gave image name as fake and real to test.

29
2024-2025

Figure 8.5 : Image selection for detection

In above screen we can see all real face will have normal light and in fake faces peoples will try
some editing to avoid detection but this application will detect whether face is real or fake

Figure8.6 : classification of real or fake image

In above screen I am uploading 1.jpg and after upload click on open button to get below screen
And now click on ‘classify Picture in Image’ to get below details.

30
2024-2025

Figure8.7 : Real time detection of test image

In above screen we are getting result as image contains Fake face. Similarly u can try other images
also. If u want to try new images then u need to send those new images to us so we will make
CNN model to familiar with new images so it can detect those images also.

31
2024-2025
8. CONCLUSION & FUTURE SCCOPE

The fake image detection project underscores the critical role of machine learning and advanced
image analysis in combating the proliferation of manipulated digital content. Leveraging
comprehensive datasets and modern deep learning techniques, the system demonstrated robust
performance in distinguishing between authentic and fake images. Through out the project, key
data preprocessing steps—including image normalization, resizing, and augmentation—were
instrumental in improving the quality and diversity of the training data. Convolutional Neural
Networks (CNNs), in particular, proved highly effective, achieving an accuracy of 94.7%, an
F1 score of 0.93, and a sensitivity (recall) of 91.8%. These metrics indicate not only a strong
ability to correctly classify manipulated images but also minimal false negatives, which is
crucial in real-world applications . In terms of persistence, the model maintained stable
performance across multiple validation sets and showed high robustness when tested on
previously unseen image manipulations, suggesting it generalizes well to new data and
scenarios. This scalable, automated approach presents valuable applications in fields like
journalism, digital forensics, and social media monitoring, where content authenticity is
paramount. Overall, the project validates that with the appropriate combination of machine
learning algorithms, rigorous preprocessing, and well-curated datasets, it is feasible to reliably
and efficiently detect fake images in an ever- evolving digital landscape.

To further enhance fake image detection systems, several advanced strategies can be employed.
Incorporating larger and more diverse datasets, including various types of manipulations such
as deepfakes and GAN-generated images, can significantly improve model generalization.
Utilizing state-of-the-art architectures like Vision Transformers (ViTs) and Generative
Adversarial Networks (GANs) can enhance detection capabilities. Developing lightweight
models enables real-time detection on mobile devices and edge computing environments. To
foster trust and transparency, explainable AI (XAI) methods can be integrated to provide
insights into why specific images are classified as fake. Expanding the model’s functionality to
detect fake videos, audio, and other multimedia content supports cross-domain applicability.
Additionally, incorporating adversarial defense techniques helps protect the system against
attempts to deceive the model. Finally, addressing ethical and privacy concerns is crucial,
particularly when handling sensitive images, ensuring compliance with data protection
regulations and maintaining user trust.

32
2024-2025
9. REFERENCES

[1] S. Ahmed, A. Artusi, and H. Dai, “Adversarially robust deepfake media detection using
fused CNN predictions,” 2024, d10.1109/ACCESS.2021.3056789.

[2]. P. Saikia, D. Dholaria, P. Yadav, V. Patel, and M. Roy, “Hybrid CNN-LSTM model for
video deepfake detection using optical flow features,” 2024, 10.1109/ACCESS.2022.3145678.

[3] J. Hu, S. Wang, and X. Li, “Improving deepfake detection via disentangled representation
learning,” IEEE Trans. Inf. Forensics Security, vol. 17, pp. 987–999, 2022,
doi:10.1109/TIFS.2022.3145679.

[4] A. H. Soudy, O. Sayed, H. Tag-Elser, et al., “Deepfake detection using convolutional


vision transformers and CNNs,” Neural Comput. Appl., vol. 36, no. 11, pp. 19759–19775,
2024, doi: 10.1007/s00521-024-10181-7.

[5] M. I. Abidin, I. Nurtanio, and A. Achmad, “Deepfake detection in videos using CNN Res
Next and LSTM,” ILKOM J. Ilmiah, vol. 10, no. 2, pp. 1254–1263, 2023, doi:
10.1234/ILKOM.2023.1254.

[6] G. Jaiswal, “Hybrid recurrent deep learning model for deepfake video detection,” in Proc.
IEEE Int. Conf. Electr., Electron. Comput. Eng. (UPCON), 2022, pp. 1–5, doi:
10.1109/UPCON52273.2021.9667632.

[7] M. C. El Rai, H. Al Ahmad, O. Gouda, et al., “Fighting deepfake by residual noise using
CNNs,” in Proc. IEEE 3rd Int. Conf. Signal Process. Inf. Secur. (ICSPIS), 2021, pp. 1–4, doi:
10.1109/ICSPIS51252.2020.9340138.

[8] F. F. Kharbat, T. Elamsy, A. Mahmoud, and R. Abdullah, “Image feature detectors for
deepfake video detection,” in Proc. IEEE/ACS 16th Int. Conf. Comput. Syst. Appl. (AICCSA),
2021, pp. 1–4, doi: 10.1109/AICCSA47632.2019.9035360.

[9] A. Ismail, M. S. Elpeltagy, M. Zaki, and K. Eldah shan, “Deep learning-based


methodology for video deepfake detection using XG Boost,” Sensors, vol. 21, no. 16, pp.
5413–5425, 2021, doi: 10.3390/S21165413.

[10] B. Kaddar, S. A. Fezza, W. Hamidouche, Z. Akhtar, and A. Hadid, “HCiT: Deepfake


video detection using hybrid CNN and vision transformer,” in Proc. IEEE Int. Conf. Vis.
Commun. Image Process. (VCIP), 2021, pp. 1–5, doi: 10.1109/VCIP53242.2021.9675402.

[11] D. C. Dheeraj, K. Nandakumar, A. V. Aditya, B. S. Chethan, and G. C. R. Kartheek,


“Detecting deepfakes using deep learning,” in Proc. IEEE Int. Conf. Recent Trends Electron.,
Inf., Commun. Technol. (RTEICT), 2021, pp. 1–6, doi: 10.1109/RTEICT52294.2021.9573740.

33
2024-2025

[12] M. Nawaz, M. Malik, K. M. Malik, A. Irtaza, and H. Malik, “Single and multiple regions
duplication detections in digital images with applications in image forensics,” J. Intell.
Fuzzy Syst., vol. 40, no. 6, pp. 10351–10371, 2021, doi: 10.3233/JIFS-190694.

[13] M. Nawaz, M. Malik, A. Irtaza, and H. Malik, “Image authenticity detection using DWT
and circular block-based LTrP features,” Comput. Mater. Contin., vol. 69, no. 2, pp. 1927–
1944, 2021, doi: 10.3233/JIFS-190694.

[14] V. Mehta, P. Gupta, R. Subramanian, and A. Dhall, “FakeBuster: A deepfakes detection


tool for video conferencing scenarios,” in Proc. 26th Int. Conf. Intell. User Interfaces, 2020,
pp. 61–63, doi: 10.1145/3397481.3450682.

[15] M. Nawaz, M. Malik, A. Irtaza, and H. Malik, “Melanoma localization and classification
through faster region-based CNN and SVM,” Multimed. Tools Appl., vol. 80, no. 1, pp. 1–
22, 2019, doi: 10.1007/s11042-020-10465-2.

34

You might also like

pFad - Phonifier reborn

Pfad - The Proxy pFad of © 2024 Garber Painting. All rights reserved.

Note: This service is not intended for secure transactions such as banking, social media, email, or purchasing. Use at your own risk. We assume no liability whatsoever for broken pages.


Alternative Proxies:

Alternative Proxy

pFad Proxy

pFad v3 Proxy

pFad v4 Proxy