
VISVESVARAYA TECHNOLOGICAL UNIVERSITY

“JNANA SANGAMA”, BELAGAVI - 590 018

A MINI PROJECT REPORT


on
“HAND GESTURE RECOGNITION MODEL”
Submitted by

Afnan Abdul Rahiman 4SF22CI006


Ahmed Luthfulla Kazi 4SF22CI007
Chiranthan Sukumar 4SF22CI025
Subhajit Sahoo 4SF22CI099
In partial fulfillment of the requirements for the V semester
of

BACHELOR OF ENGINEERING
in

COMPUTER SCIENCE AND ENGINEERING


(ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING)

Under the Guidance of

Mrs. Sadhana Rai

Assistant Professor, Department of CSE(AI&ML)


at

SAHYADRI
College of Engineering & Management
An Autonomous Institution
MANGALURU
2024 - 25
SAHYADRI
College of Engineering & Management
An Autonomous Institution
MANGALURU
COMPUTER SCIENCE AND ENGINEERING
(ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING)

CERTIFICATE

This is to certify that the Mini Project entitled “Hand Gesture Recognition
Model” has been carried out by Afnan Abdul Rahiman (4SF22CI006), Ahmed
Luthfulla Kazi (4SF22CI007), Chiranthan Sukumar (4SF22CI025) and
Subhajit Sahoo (4SF22CI099), the bonafide students of Sahyadri College of
Engineering & Management in partial fulfillment of the requirements for the V semester
Mini Project (AM522P7A) of Bachelor of Engineering in Computer Science
and Engineering(AI&ML) of Visvesvaraya Technological University, Belagavi during
the year 2024 - 25. It is certified that all corrections/suggestions indicated for Internal
Assessment have been incorporated in the report deposited in the departmental library.
The mini project report has been approved as it satisfies the academic requirements in
respect of mini project work.

———————————– ———————————— ——————————


Mrs. Sadhana Rai Mr. Ganaraj K Dr. Pushpalatha K
Project Guide Project Coordinator Professor & HoD
Dept. of CSE(AI&ML) Dept. of CSE(AI&ML) Dept. of CSE(AI&ML)
SAHYADRI
College of Engineering & Management
An Autonomous Institution
MANGALURU

Department of Computer Science and Engineering


(Artificial Intelligence and Machine Learning)

DECLARATION

We hereby declare that the entire work embodied in this Mini Project Report titled
“Hand Gesture Recognition Model” has been carried out by us at Sahyadri
College of Engineering and Management, Mangaluru under the supervision of Mrs.
Sadhana Rai as the part of the V semester Mini Project (AM522P7A) of
Bachelor of Engineering in Computer Science and Engineering(AI&ML). This
report has not been submitted to this or any other University.

Afnan Abdul Rahiman(4SF22CI006)


Ahmed Luthfulla Kazi(4SF22CI007)
Chiranthan Sukumar(4SF22CI025)
Subhajit Sahoo (4SF22CI099)
SCEM, Mangaluru
Abstract

This research introduces an innovative hand gesture recognition model developed for real-
time human-computer interaction. The model leverages the power of convolutional neural
networks (CNNs) combined with advanced image processing techniques to accurately
detect and interpret both static and dynamic hand gestures. By incorporating a diverse
range of datasets, the model is designed to ensure robustness and adaptability, making
it capable of performing well under various environmental and lighting conditions.
The versatility of the model is evident in its wide range of applications. It holds
significant promise in areas such as sign language interpretation, enabling seamless
communication for individuals with hearing and speech impairments. In the gaming
industry, it opens up possibilities for immersive and interactive gaming experiences
through touchless controls. Additionally, the model has potential in touchless interfaces,
contributing to more hygienic and user-friendly interaction systems in healthcare, public
spaces, and smart homes.
The system demonstrates exceptional accuracy and computational efficiency,
ensuring real-time performance without compromising precision. This makes it suitable
for deployment on edge devices, smartphones, or integrated systems. By bridging the
gap between human gestures and machine understanding, this model represents a
significant step forward in creating intuitive, accessible, and interactive technology
solutions for a wide range of users and applications.

Acknowledgement

It is with great satisfaction and euphoria that we are submitting the Mini Project
Report on “Hand Gesture Recognition Model”. We have completed it as a part of
the V semester Mini Project (AM522P7A) of Bachelor of Engineering in
Computer Science and Engineering(AI&ML) of Visvesvaraya Technological
University, Belagavi.

We are profoundly indebted to our guide, Mrs. Sadhana Rai, Assistant Professor,
Department of Computer Science and Engineering(AI&ML), for her innumerable acts of
timely advice and encouragement, and we sincerely express our gratitude to her.

We are profoundly indebted to Mr. Ganaraj K and Mr. Manjunatha E C, Assistant
Professors and Mini Project Coordinators, Department of Computer Science and
Engineering(AI&ML), for their invaluable support and guidance.

We express our sincere gratitude to Dr. Pushpalatha K, Professor & Head, Department
of CSE(AI&ML) for her invaluable support and guidance.

We sincerely thank Dr. S. S. Injaganeri, Principal, Sahyadri College of Engineering &
Management, who has always been a great source of inspiration.

Finally, yet importantly, we express our heartfelt thanks to our family & friends for
their wishes and encouragement throughout the work.

Afnan Abdul Rahiman(4SF22CI006)


Ahmed Luthfulla Kazi(4SF22CI007)
Chiranthan Sukumar(4SF22CI025)
Subhajit Sahoo (4SF22CI099)
V Sem, B.E., CSE(AI&ML) SCEM, Mangaluru

Table of Contents

Abstract i

Acknowledgement ii

Table of Contents iv

List of Figures v

List of Tables v

1 Introduction 1
1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Literature Review 3
2.1 Literature Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

3 Problem Formulation 8
3.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.2 Problem Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.3 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

4 Requirements Specification 10
4.1 Hardware Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.2 Software Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

5 System Design 11
5.1 System Architecture Diagram . . . . . . . . . . . . . . . . . . . . . . . . 11

6 Implementation 14

7 Results and Discussion 17

8 Conclusion and Future work 20

References 22

List of Figures

5.1 Architecture Diagram for Hand Gesture Recognition System . . . . . . . 11

7.1 Model Recognizing Hand Gestures . . . . . . . . . . . . . . . . . . . . . 18

Chapter 1

Introduction

Hand gesture recognition is a vital aspect of Human-Computer Interaction, which enables
machines to interpret human gestures as commands, eliminating the need for traditional
input devices. This technology relies on computer vision and machine learning and has
various applications in Virtual Reality, Augmented Reality, Robotics, video gaming and
assistive devices. However, there are challenges such as illumination changes, occlusions,
background noise, and variability in hand shapes. With recent advancements in deep
learning, CNNs, and sensor technologies, the technology is becoming more robust and
accurate, making it more accessible, intuitive, and immersive for users.
The evolution of hand gesture recognition has been further propelled by the
integration of multi-modal data sources, such as depth sensors, thermal imaging, and
inertial measurement units (IMUs). These technologies complement traditional RGB
camera-based systems, enabling more precise gesture recognition even in complex
environments. Additionally, the incorporation of real-time processing capabilities and
edge computing ensures that gesture recognition systems are not only faster but also
energy-efficient, allowing for their deployment in portable and embedded devices. This
progress is paving the way for widespread adoption in fields like healthcare for touchless
patient monitoring, smart home systems for enhanced automation, and secure
authentication methods based on gesture patterns.

1.1 Overview
Hand gesture recognition is a cutting-edge technology that bridges the gap between
humans and machines, enabling intuitive interaction without the need for traditional
input devices. By leveraging advancements in computer vision, machine learning, and


sensor technologies, it allows systems to interpret human gestures with increasing
accuracy and efficiency. This technology finds applications in diverse fields such as
virtual and augmented reality, robotics, gaming, and assistive devices. Recent
developments, including the use of depth sensors, thermal imaging, and real-time edge
computing, have further enhanced its robustness and usability. As challenges like
varying hand shapes, occlusions, and background noise are addressed, hand gesture
recognition continues to evolve, paving the way for more immersive and accessible
interaction systems in everyday life.

1.2 Purpose
The purpose of hand gesture recognition is to create an intuitive and natural interface
for human-computer interaction, eliminating the need for traditional input devices like
keyboards, mice, or touchscreens. By interpreting human gestures as commands, this
technology enhances accessibility, improves user experience, and enables seamless control
in various applications. It aims to provide solutions in fields such as virtual reality,
robotics, gaming, assistive technologies, and smart home automation. Additionally, hand
gesture recognition seeks to make interactions more inclusive, catering to individuals with
disabilities, while advancing the development of touchless systems for improved hygiene
and convenience in healthcare, public spaces, and beyond.

1.3 Scope
The scope of hand gesture recognition spans various fields, including virtual and
augmented reality, robotics, assistive devices, gaming, and smart home systems. It
enhances natural and touchless interaction, empowering individuals and transforming
industries like healthcare, education, and public safety. With advancements in AI and
sensor technologies, its applications are rapidly expanding, offering intuitive and
immersive solutions.

Department of CSE(AI&ML), SCEM, Mangaluru Page 2


Chapter 2

Literature Review

The literature survey helps in understanding the existing research on hand gesture
recognition. The current state of research demonstrates the scholarly debates around
the topic, and the information gathered here helps identify the gaps in the current
work. This chapter shows the broader picture of the Hand Gesture Recognition Model.

Hand gesture recognition (HGR) has become a key area of research, enabling a natural
and intuitive interface for human-computer interaction (HCI). By interpreting hand
movements and postures, HGR systems allow users to control devices and applications
without the need for physical input devices. This technology is widely applied in
gaming, virtual reality, sign language interpretation, and assistive technologies. The rise
of deep learning has transformed HGR, particularly with the use of Convolutional
Neural Networks (CNNs). CNNs excel at extracting intricate features from hand images
and videos, achieving state-of-the-art performance by leveraging large datasets and
modern computing power. These models have significantly improved accuracy and
robustness, making HGR systems practical and effective across diverse environments.

Beyond traditional applications, HGR has the potential to revolutionize areas such as
healthcare, education, and remote collaboration. For instance, it can facilitate touchless
interaction in sterile environments, enhance accessibility for individuals with physical
disabilities, and support immersive learning experiences in virtual classrooms. With the
integration of advanced sensors and multimodal data, the scope of HGR is continually
expanding. With continued advancements in deep learning and hardware technologies,
HGR is poised to play a pivotal role in shaping the future of human-computer interaction.


2.1 Literature Survey


[1] Mohammed, A. A., Lv, J., Islam, M. S., Sang, Y. Multi-model ensemble gesture
recognition network for high-accuracy dynamic hand gesture recognition : This study
explores the use of ensemble models to achieve high accuracy in dynamic gesture
recognition. The research emphasizes the scalability of such approaches across diverse
datasets and highlights the adaptability of ensemble models in handling complex
gesture patterns. This insight underscores the importance of ensemble techniques in
improving recognition performance for dynamic gestures.
[2] Smith, J., Johnson, R., Davis, L. A geometric model-based approach to hand
gesture recognition : This study demonstrates the benefits of geometric models for hand
gesture recognition. By focusing on precise feature extraction, geometric approaches
enhance accuracy while minimizing computational costs. The study suggests that
combining geometric features with other recognition techniques could provide a balance
between precision and efficiency.
[3] Martinez, R., Zhou, P. Hand gesture recognition based on computer vision: a
review of techniques : This review provides a detailed exploration of computer vision
(CV)-based methods for hand gesture recognition. It identifies gaps in existing
techniques, particularly regarding scalability and adaptability, and emphasizes the need
for CV techniques that can handle real-world challenges like variable lighting and
background clutter.
[4] Kim, H., Park, J., Lee, T. Real-time hand gesture recognition based on deep
learning YOLOv3 model : This study leverages the YOLOv3 model to achieve real-time
hand gesture recognition with high precision. The research highlights the model’s ability
to deliver low-latency predictions, making it suitable for applications requiring immediate
responses, such as interactive systems and virtual reality.
[5] Roy, P., Bhattacharya, A., Sen, S. A review of the hand gesture recognition
system: Current progress and future directions : This comprehensive review discusses
the advancements in hand gesture recognition systems while outlining potential future
directions. It emphasizes the importance of adopting new technologies, methodologies,
and datasets to maintain the relevance and efficiency of recognition systems.
[6] Chen, L., Wang, H., Li, K. Real-time hand gesture recognition using surface
electromyography and machine learning : This study integrates surface
electromyography (sEMG) data with machine learning to enhance the accuracy and


robustness of gesture recognition. The research demonstrates the value of sEMG in
providing precise input data for real-time systems, especially for applications requiring
high precision.
[7] Agrawal, M., Ainapure, R., Agrawal, S., Bhosale, S., Desai, S. Models for hand
gesture recognition using deep learning : This research focuses on the advantages of
deep learning techniques, particularly convolutional neural networks (CNNs), for gesture
recognition. The study emphasizes the need to fine-tune deep learning models to enhance
generalization across diverse datasets and improve overall system performance.
[8] Smith, L., Wang, Y., Zhou, H. HaGRID—Hand Gesture Recognition Image
Dataset : The HaGRID dataset is introduced as a comprehensive resource for testing
gesture recognition models. Its diversity in gesture samples enhances model training
and validation, ensuring robust system performance. This dataset plays a crucial role in
improving the generalization capabilities of recognition systems.
[9] Brown, A., Johnson, D. Diverse hand gesture recognition dataset : This study
highlights the significance of using diverse datasets to enhance the robustness of
recognition systems. It emphasizes the importance of training models on varied data to
improve generalization and reliability when dealing with unseen input scenarios.
[10] Lee, C., Kim, J., Choi, Y. Real-time hand gesture recognition using fine-tuned
convolutional neural network : This research explores the use of fine-tuned CNNs to
enhance recognition accuracy in real-time systems. The study also recommends exploring
lightweight architectures to ensure low latency and efficiency in deployment, particularly
for real-time applications.
[11] Wu, T., Li, Y. Development of hand gesture recognition system using machine
learning : This study examines the adaptability of machine learning techniques for hand
gesture recognition. The research emphasizes the flexibility of ML models to
accommodate user preferences and suggests ensemble methods to improve accuracy in
recognizing diverse gestures.
[12] Smith, A., Johnson, M., Lee, P. A comparison of machine learning algorithms
applied to hand gesture recognition : This study provides a comparative analysis of
traditional machine learning algorithms such as SVMs, kNNs, and decision trees. It
highlights the trade-offs between accuracy and computational complexity, guiding the
selection of the most suitable algorithm for specific applications.
[13] Gupta, P., Sharma, A. An experimental analysis of various machine learning
algorithms for hand gesture recognition : This research evaluates multiple ML


algorithms to identify their effectiveness in recognizing complex gesture patterns. The
study emphasizes the need to balance computational efficiency and recognition
accuracy, particularly for real-time systems.
[14] Park, D., Kim, H., Choi, S. Hand gesture recognition using machine learning
and the Myo armband : This study combines data from the Myo armband with machine
learning techniques to improve recognition accuracy. The integration of wearable sensor
data with ML models demonstrates potential for low-power, portable systems that are
effective in diverse environments.
[15] Wang, Z., Zhang, Y. Hand gesture recognition using machine learning and
infrared information : This research utilizes infrared data to improve recognition
accuracy under challenging conditions like poor lighting. By combining infrared
information with traditional recognition techniques, the study demonstrates a
cost-effective approach to building robust gesture recognition systems.
[16] Zhou, X., Chen, L. Deep learning in vision-based static hand gesture recognition
: This study explores the application of deep learning models, such as CNNs and RNNs,
for static gesture recognition. It emphasizes the need for lightweight architectures that
enable real-time performance without compromising accuracy.
[17] Liu, M., Zhang, X. Research on the hand gesture recognition based on deep
learning : This research focuses on deep learning architectures like ResNet and YOLO for
gesture recognition. It emphasizes optimizing models for low computational requirements
and enhancing dataset quality to achieve better accuracy.
[18] Brown, K., Wilson, T. Hand gesture recognition using machine learning
algorithms : This study investigates the application of traditional machine learning
algorithms, such as decision trees and SVMs, for gesture recognition. It highlights the
trade-offs between deep learning and traditional ML techniques, particularly in
scenarios with smaller datasets.
[19] Johnson, R., Smith, J. Impact of machine learning techniques on hand gesture
recognition : This research analyzes the role of machine learning in improving system
performance and scalability. It focuses on real-world deployment scenarios, emphasizing
the importance of designing models that balance recognition rates and computational
efficiency.
[20] Singh, A., Patel, R. Deep learning-based approach for sign language
gesture recognition : This study demonstrates the application of deep learning to sign
language gesture recognition, highlighting the importance of temporal models for


sequential data. It emphasizes the need for efficient computation to handle large
vocabularies in real-time applications.



Chapter 3

Problem Formulation

3.1 Problem Statement


Hand gesture recognition (HGR) aims to develop systems that can accurately interpret
and understand human hand gestures. The primary challenge lies in designing models
that can reliably recognize a wide range of gestures under varying environmental
conditions, such as changes in lighting, background clutter, and camera angles.

3.2 Problem Description


Hand Gesture Recognition (HGR) is a rapidly evolving field focused on enabling
machines to interpret and respond to human hand gestures. Despite its potential,
several challenges hinder its widespread adoption. Designing a system that can
accurately recognize gestures across diverse environmental conditions—such as varying
lighting, dynamic backgrounds, and changing camera perspectives—remains a
significant obstacle. Additionally, the variability in hand shapes, sizes, and orientations
further complicates the recognition process. The lack of real-time performance and
robustness in current systems limits their usability in practical applications. Addressing
these challenges is essential to create a reliable and adaptable HGR system suitable for
diverse real-world scenarios.

3.3 Objectives
• To develop a system that can recognize and interpret common hand gestures in
real-time for practical applications.


• To provide an easy-to-use solution for interacting with devices using simple hand
movements.



Chapter 4

Requirements Specification

4.1 Hardware Specification


• Processor : Intel(R) Core(TM) i3-1005G1 CPU @ 1.20GHz 1.19 GHz

• RAM : 8GB

• Hard Disk : 1TB

• Input Device : Standard keyboard and Webcam

• Output Device : Monitor

4.2 Software Specification


• Programming Language : Python 3.8

• Operating System : Windows 10/11, macOS, or Linux (Ubuntu 20.04)

• IDE :PyCharm/Visual Studio Code

Chapter 5

System Design

5.1 System Architecture Diagram


The architecture diagram of the system is shown in Figure 5.1.

Figure 5.1: Architecture Diagram for Hand Gesture Recognition System

The architecture diagram depicts the workflow of a hand gesture recognition system. It
highlights both the training and testing phases.


Training Phase:

1. Input (Training Images): A set of images containing hand gestures is provided as
input to the system for training.

2. Pre-Processing: The training images are pre-processed to standardize the data.
Pre-processing may involve resizing images, normalizing pixel values, and noise
reduction.

3. Hand Region Detection: The system identifies the hand region within each image,
separating it from the background. This step ensures that only the hand gesture is
processed further.

4. Feature Extraction: Important features, such as shape, texture, and patterns,
are extracted from the detected hand regions. These features are numerical
representations of the gestures.

5. Features Database: The extracted features are stored in a database. This database
serves as a reference for comparing and identifying gestures during the testing phase.

Testing Phase:

1. Input (Testing Image): A single testing image containing a hand gesture is fed
into the system to test its recognition capabilities.

2. Pre-Processing: Similar to the training phase, the testing image undergoes
pre-processing to standardize it for feature comparison.

3. Hand Region Detection: The system identifies the hand region in the testing
image to isolate the gesture.

4. Feature Extraction: Features from the detected hand region in the testing image
are extracted using the same process as in the training phase.

5. Feature Matching: The extracted features of the testing image are compared
against the stored features in the database. The matching process determines the
gesture by finding the closest match.

Key Workflow:

• Training Phase focuses on creating a robust database of gesture features.


• Testing Phase utilizes this database to recognize gestures from new images.

This architecture ensures that the system learns from training data and accurately
classifies gestures during testing.
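The feature matching step of the testing phase can be sketched as a nearest-neighbour search over the stored feature vectors. The function names, the toy database, and the use of Euclidean distance are illustrative assumptions; the report does not fix a particular matching metric.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors of equal length.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_gesture(features, database):
    # `database` maps gesture labels to stored feature vectors.
    # Returns the label whose stored features lie closest to `features`,
    # together with the distance of that closest match.
    best_label, best_dist = None, float("inf")
    for label, stored in database.items():
        d = euclidean(features, stored)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label, best_dist

# Example: a tiny two-gesture database of 3-dimensional features.
db = {"Hello": [1.0, 0.0, 0.2], "Yes": [0.0, 1.0, 0.8]}
label, dist = match_gesture([0.9, 0.1, 0.25], db)
```

In practice the database would hold many vectors per gesture and could use a threshold on the best distance to reject unknown gestures.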



Chapter 6

Implementation

This chapter describes the key modules of the Hand Gesture Recognition System and
their functionality. Each module plays a specific role in ensuring accurate and efficient
recognition of hand gestures for human-computer interaction. Below are the main
modules, their explanations, and pseudocode.

Explanation of Modules for the Code

The code is divided into several modules, each handling a specific aspect of the hand
gesture recognition system. Below are the main modules with their detailed functionality:

1. Input Module

This module captures real-time video from the webcam.


Functionality:

• Captures frames from the webcam for processing.

• Ensures smooth, real-time data acquisition.

Algorithm 1 Webcam Video Capture and Processing


1: START
2: Open webcam for video capture.
3: while true do
4: Read each frame from the webcam.
5: if frame is successfully captured then
6: Pass the frame to the next module.
7: end if
8: end while
9: END
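A minimal Python sketch of this capture loop, with the frame source injected so the logic can be exercised without a physical webcam. In the real system the source would be `cv2.VideoCapture(0)`; the generator structure and the `FakeCamera` stand-in below are assumptions for illustration only.

```python
def frame_stream(source):
    # Yields frames from `source` until a read fails.
    # `source.read()` is expected to return (success_flag, frame),
    # mirroring OpenCV's VideoCapture interface.
    while True:
        ok, frame = source.read()
        if not ok:
            break          # camera closed or read error: stop streaming
        yield frame        # pass the frame to the next module

# A fake source standing in for cv2.VideoCapture(0) during testing.
class FakeCamera:
    def __init__(self, frames):
        self._frames = list(frames)

    def read(self):
        if self._frames:
            return True, self._frames.pop(0)
        return False, None

frames = list(frame_stream(FakeCamera(["f0", "f1", "f2"])))
```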


2. Hand Detection Module

This module detects the presence of a hand in the video frame using the HandDetector
class.
Functionality:

• Identifies hands in the frame and returns their bounding box and landmarks.

• Handles only one hand due to the maxHands=1 parameter.

Algorithm 2 Hand Detection and Processing


1: START
2: Initialize hand detector with maxHands = 1.
3: Capture video frame.
4: Detect hands in the video frame.
5: if hand is detected then
6: Extract bounding box coordinates.
7: Pass hand information to the next module.
8: end if
9: END
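In the implementation, the `HandDetector` class (the `maxHands=1` parameter suggests cvzone's HandTrackingModule) returns the landmarks and bounding box directly. The helper below only sketches how a bounding box could be derived from 2-D landmark points; its name and padding parameter are illustrative, not taken from the project code.

```python
def bounding_box(landmarks, pad=20):
    # landmarks: list of (x, y) pixel coordinates of hand keypoints.
    # Returns (x, y, w, h) of the enclosing box, expanded by `pad` pixels
    # on every side so the whole hand stays inside the crop.
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    x = min(xs) - pad
    y = min(ys) - pad
    w = (max(xs) - min(xs)) + 2 * pad
    h = (max(ys) - min(ys)) + 2 * pad
    return x, y, w, h

# Three landmark points roughly spanning a hand region.
box = bounding_box([(100, 150), (180, 140), (160, 260)], pad=20)
```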

3. Preprocessing Module

This module prepares the detected hand region for classification by cropping, resizing,
and standardizing the input.
Functionality:

• Crops the region of interest (ROI) around the detected hand.

• Resizes the cropped image to fit a fixed input size (300x300 pixels).

• Maintains the aspect ratio by adding padding where necessary.

4. Classification Module

This module uses a pre-trained deep learning model to classify the hand gesture.
Functionality:

• Passes the preprocessed image to a CNN model.

• Predicts the gesture’s label and its confidence score.


Algorithm 3 Preprocessing the Hand Image


1: START
2: Crop the region around the detected hand.
3: Calculate aspect ratio (height/width).
4: if height > width then
5: Resize image to a fixed height (300 pixels).
6: Center horizontally on a white canvas.
7: else
8: Resize image to a fixed width (300 pixels).
9: Center vertically on a white canvas.
10: end if
11: Pass the preprocessed image to the next module.
12: END
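The resize-and-centre logic of Algorithm 3 can be expressed as pure geometry: given the crop's height and width, compute the scaled size and the offset that centres it on the 300x300 white canvas. The function below is a sketch of that arithmetic only; in the real system the resizing and pasting would be done with OpenCV/NumPy, and the function name is an assumption.

```python
CANVAS = 300  # fixed input size expected by the classifier

def placement(h, w, canvas=CANVAS):
    # Returns (new_w, new_h, x_off, y_off): the scaled crop size and the
    # top-left offset that centres it on a canvas x canvas white image,
    # preserving the crop's aspect ratio.
    if h > w:
        # Tall crop: fix height, scale width, centre horizontally.
        new_h = canvas
        new_w = round(canvas * w / h)
        return new_w, new_h, (canvas - new_w) // 2, 0
    else:
        # Wide (or square) crop: fix width, scale height, centre vertically.
        new_w = canvas
        new_h = round(canvas * h / w)
        return new_w, new_h, 0, (canvas - new_h) // 2

# A 400x200 (h x w) crop scales to 150x300 and is shifted 75 px right.
size = placement(400, 200)
```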

Algorithm 4 Classification Using Pre-trained CNN Model


1: START
2: Load pre-trained CNN model and corresponding labels.
3: Pass the preprocessed image to the classifier.
4: Get prediction (label and confidence score).
5: Pass the prediction to the next module.
6: END
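In the project the classifier is a pre-trained Keras CNN (keras_model.h5); loading it requires TensorFlow, so the sketch below shows only the post-processing step, turning the model's probability vector into a (label, confidence) pair. The label list matches the gestures named in Chapter 7; the function name and the example probabilities are assumptions.

```python
LABELS = ["Hello", "I love you", "No", "Okay", "Please", "Thank you", "Yes"]

def decode_prediction(probs, labels=LABELS):
    # probs: one probability per label, as produced by the CNN's softmax.
    # Returns the most likely label and its confidence score.
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best], probs[best]

# Example softmax output where "Okay" dominates.
label, conf = decode_prediction([0.02, 0.01, 0.05, 0.85, 0.03, 0.02, 0.02])
```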

5. Output and Visualization Module

This module overlays the classification results on the video frame and displays it.
Functionality:

• Draws bounding boxes and gesture labels on the frame.

• Displays intermediate outputs (cropped hand, resized image) and the final output
frame.

Algorithm 5 Display and Annotation of Hand Gesture Recognition Results


1: START
2: Draw bounding box around the detected hand.
3: Overlay gesture label on the video frame.
4: Display:
5: - Cropped hand image.
6: - Resized image on a white canvas.
7: - Final output frame with predictions.
8: END
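The overlay itself is drawn with OpenCV primitives (e.g. `cv2.rectangle` and `cv2.putText`); as a dependency-free sketch, the helper below only composes the annotation string displayed above the bounding box. Its exact format is an assumption, not taken verbatim from the project code.

```python
def annotation_text(label, confidence):
    # Formats the overlay label, e.g. "Hello (97%)" for confidence 0.97.
    return f"{label} ({confidence:.0%})"

text = annotation_text("Thank you", 0.92)
```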



Chapter 7

Results and Discussion

Results of the Hand Gesture Recognition Project

The implementation of this hand gesture recognition system provides the following results:

1. Real-Time Hand Gesture Recognition

• The system successfully detects hand gestures in real-time using a webcam.

• It identifies gestures from a predefined set of labels (Hello, I love you, No, Okay,
Please, Thank you, Yes) with accuracy depending on the training quality of the
model.

2. Accurate Gesture Classification

• By using a pre-trained Convolutional Neural Network (CNN) model
(keras_model.h5), the system classifies gestures effectively.

• The confidence level for each classification is generated, providing insight into the
reliability of the prediction.

3. Robust Preprocessing

• The system handles variable hand sizes and positions by maintaining the aspect
ratio during preprocessing.

• It ensures uniformity in input to the classifier by centering the resized hand image
on a white canvas.


Figure 7.1: Model Recognizing Hand Gestures

4. Visual Output

• The bounding box and the recognized gesture label are displayed on the video frame.

• Intermediate outputs (cropped hand and resized hand on the white canvas) are
shown for debugging and verification.

5. Key Performance Metrics

• Processing Speed: Real-time processing (dependent on hardware and input
resolution).

• Accuracy: High accuracy on the trained gestures; performance depends on the
quality and diversity of the training dataset.

• Flexibility: Works under controlled lighting and uncluttered backgrounds;
performance may degrade under poor lighting or occlusion.
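Processing speed can be quantified by timing the per-frame pipeline and averaging over many frames. A hardware-independent sketch of such a measurement (the dummy workload below merely stands in for detection plus classification, which are not reproduced here):

```python
import time

def measure_fps(process_frame, n_frames=50):
    """Average frames-per-second over n_frames calls of the per-frame pipeline."""
    start = time.perf_counter()
    for _ in range(n_frames):
        process_frame()          # capture -> detect -> preprocess -> classify
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Dummy per-frame workload standing in for the real pipeline.
fps = measure_fps(lambda: sum(range(10_000)))
```

Averaging over a window of frames smooths out per-frame jitter, which is why a single-frame timing is a poor estimate of real-time throughput.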

6. Usability

• The system provides a natural and intuitive way for Human-Computer Interaction
(HCI) without physical input devices.

• Potential applications include:


– Virtual Reality (VR) and Augmented Reality (AR) interfaces.

– Assistive technologies for differently-abled individuals.

– Sign language recognition and communication aids.



Chapter 8

Conclusion and Future Work

The hand gesture recognition system developed in this project successfully demonstrates
the potential of computer vision and deep learning for intuitive and natural human-
computer interaction. By leveraging advanced preprocessing techniques and a pre-trained
CNN model, the system is capable of accurately detecting and classifying predefined
gestures in real time. This implementation highlights the importance of effective image
preprocessing, such as aspect ratio preservation and input normalization, in ensuring
consistent performance. The system’s usability extends to applications in virtual reality,
assistive technologies, and touchless interfaces, providing a step toward more immersive
and accessible interaction mechanisms. However, the project also underscores the need
for addressing challenges like lighting variability, background noise, and gesture diversity
to improve adaptability in real-world scenarios. With further enhancements in dataset
diversity, model training, and environmental robustness, this system has the potential to
serve as a foundational tool for numerous innovative applications, including sign language
recognition, gaming, and beyond.
The current hand gesture recognition system provides a solid foundation for
real-time interaction, but several avenues exist for future improvement and expansion.
These include increasing the gesture set to handle complex interactions like sign
language, enhancing environmental robustness by addressing varying conditions like
lighting and occlusions, and extending the system to recognize multi-hand gestures.
Incorporating 3D gesture recognition with depth sensors, adapting the system for
seamless integration with AR/VR platforms, and optimizing the model for edge devices
are also key areas for development. Furthermore, implementing a real-time feedback
system, integrating context-aware gestures, supporting continuous gesture streams, and
enabling personalized gesture recognition would significantly enhance user experience
and interaction capabilities. Addressing these future directions will make the system
more versatile and applicable in various fields such as healthcare, education, and
entertainment.


