
TRIBHUVAN UNIVERSITY

INSTITUTE OF ENGINEERING
LALITPUR ENGINEERING COLLEGE

MAJOR PROJECT DEFENSE ON

“SIGN LANGUAGE DETECTION”

Presenters:
AMRIT SAPKOTA [076/BCT/05]
ASMIT OLI [076/BCT/43]
NISCHAL MAHARJAN [076/BCT/20]
SAKSHYAM ARYAL [076/BCT/29]
May 29, 2024
OUTLINE

• Motivation
• Introduction
• Problem Statement
• Project Objectives
• Scope of Project
• Potential Project Application
• Functional Requirements
• Non-Functional Requirements
• Methodology
• Expected Outcomes
• Future Enhancement
• Conclusion
• Project Schedule
• References



MOTIVATION

• Sign language is essential for deaf and hard-of-hearing individuals to communicate effectively.

• There is a need for technology that can detect and interpret sign language to facilitate better communication and inclusion.

• This project aims to develop a system that can recognize and understand sign language gestures, opening up new opportunities for communication.



INTRODUCTION

• Sign language is a visual language that uses hand gestures, facial expressions, and
body movements to convey meaning.

• Deaf and hard-of-hearing individuals often face challenges in communicating with the
hearing community, leading to isolation and limited opportunities.

• The goal of this project is to create a sign language detection system using machine
learning techniques to bridge the communication gap.

PROBLEM STATEMENT

• Existing communication barriers between deaf and hearing individuals hinder effective
interaction.

• Traditional methods of sign language interpretation rely on human interpreters, which can be time-consuming and limited in availability.

• Developing an accurate and real-time sign language detection system can address
these challenges and enhance communication for the deaf community.

PROJECT OBJECTIVES

• To design and implement a system that can understand the sign language of hearing-impaired people.

• To train the model on a variety of datasets using MediaPipe and a CNN, and provide output in real time.

• To recognize sign language and provide the output as voice or text.



PROJECT SCOPE

• The project aims to develop a computer vision-based system that can detect and
interpret sign language gestures.

• The system will focus on recognizing a set of predefined sign language gestures,
facilitating basic communication.

• Real-time detection will be a key requirement to ensure seamless interaction between the signer and the system.



POTENTIAL PROJECT APPLICATION

• The sign language detection system can be integrated into various devices, such as
smartphones, tablets, or smart glasses, enabling deaf individuals to communicate more
easily in their daily lives.

• It can be used in educational settings, where teachers and students can communicate
more effectively.

FUNCTIONAL REQUIREMENTS
• Real-time Output
• Performance and accuracy
• Gesture Recognition
• Vocabulary Flexibility

NON-FUNCTIONAL REQUIREMENTS
• Performance Requirements
• Scalability
• Reliability
• Usability
• Maintainability

METHODOLOGY – [1]
THEORETICAL BACKGROUND

• Sign language is a visual-spatial language used by deaf and hard-of-hearing individuals to communicate.

• Deep neural networks, such as CNNs, have demonstrated excellent performance in various computer vision tasks, including sign language recognition.

• The dataset must be preprocessed. This involves steps such as video segmentation and frame extraction to isolate the relevant information from the sign language gestures.



METHODOLOGY – [2]
DATA COLLECTION

• We can create our own dataset by recording sign language gestures performed by
individuals proficient in sign language.

• There are publicly available datasets specifically curated for sign language recognition
research.

• Collaborating with sign language experts or organizations that work closely with sign
language communities can provide access to a diverse range of sign language data.



METHODOLOGY – [3]
DATA PREPROCESSING

• Since the sign language data is in the form of videos, it is often necessary to segment the videos into individual sign gestures.

• Frames are extracted from each segment. These frames represent the visual
information at specific time intervals.

• Preprocessing techniques are applied to enhance the quality of the extracted frames.
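The segmentation and frame-extraction steps above can be sketched as follows. This is a minimal illustration, not the project's exact pipeline: the sampling step, target size, and the `video_path` argument are assumptions, and OpenCV is imported lazily so the pure index-sampling helper works on its own.

```python
import numpy as np

def sample_frame_indices(total_frames, step):
    """Indices of the frames to keep, sampled every `step` frames."""
    return list(range(0, total_frames, step))

def extract_frames(video_path, step=5, size=(224, 224)):
    """Read one gesture clip and return sampled, resized, normalized frames.

    OpenCV is imported inside the function so the helper above can be
    used without opencv-python installed.
    """
    import cv2  # assumes opencv-python is available
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    keep = set(sample_frame_indices(total, step))
    frames = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx in keep:
            frame = cv2.resize(frame, size)
            frames.append(frame.astype(np.float32) / 255.0)  # scale to [0, 1]
        idx += 1
    cap.release()
    return np.array(frames)
```

Sampling every few frames keeps the gesture's temporal shape while reducing redundant, near-identical frames.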



METHODOLOGY – [4]
INSTRUMENTATION REQUIREMENT

• A high-quality camera is required to capture video or image data of the sign language
gestures.

• Adequate lighting is essential to ensure clear and well-illuminated visuals of the sign
language gestures.

• Depth sensors, such as Microsoft Kinect or Intel RealSense cameras, can provide
additional depth information about the scene and the position of the hands.



METHODOLOGY – [5]
INSTRUMENTATION REQUIREMENT

• Programming language and libraries: Python, with TensorFlow, OpenCV, and MediaPipe.

• Development environment: Jupyter Notebook.



METHODOLOGY – [6]
WORKING PRINCIPLE

Figure. Convolutional Neural Network

METHODOLOGY – [7]
SYSTEM BLOCK DIAGRAM

Figure. System Block Diagram
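For illustration, the core operation inside a convolutional layer, as depicted in the CNN figure, can be sketched in plain NumPy. This is a conceptual sketch only; in the actual system the convolutions are stacked and trained with TensorFlow rather than written by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2-D convolution of a single-channel image:
    slide the kernel over the image and sum the element-wise products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise ReLU nonlinearity applied after each convolution."""
    return np.maximum(x, 0.0)
```

Each learned kernel acts as a feature detector; stacking convolution + ReLU layers lets the network build up from edges to hand shapes to whole gestures.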



RESULT-[1]
DATA COLLECTION

Figure. Data Collection



RESULT-[2]
LANDMARKS VISUALIZATION

Figure. Landmarks Visualization



RESULT-[3]
TRAINING ACCURACY AND LOSS

Figure. Epoch Accuracy



RESULT-[3]
TRAINING ACCURACY AND LOSS

Figure. Epoch loss



RESULT-[4]
CONFUSION MATRIX
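A confusion matrix is computed from the true and predicted class labels on the test set. As a minimal sketch (in practice scikit-learn's `confusion_matrix` would typically be used):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def accuracy(cm):
    """Overall accuracy: diagonal (correct) counts over all counts."""
    return np.trace(cm) / np.sum(cm)
```

Off-diagonal cells reveal which gesture pairs the model confuses, which is more informative than a single accuracy number.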



FUTURE ENHANCEMENT

• A more efficient algorithm can be used.

• A larger dataset can be used to expand the vocabulary.

• The system can be deployed on various platforms such as Android, iOS, and the web.



CONCLUSION

• MediaPipe is used to extract the necessary key points of the face, hands, and pose.

• The model is trained using a CNN.

• Testing and evaluation are performed using a confusion matrix and accuracy metrics.

• Python is used as the programming language.
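The key-point extraction step can be sketched following the common MediaPipe Holistic pattern. This is an illustrative sketch, not necessarily the project's exact code: the `results` object is assumed to come from MediaPipe's `holistic.process()`, and the landmark counts (33 pose, 468 face, 21 per hand) are MediaPipe Holistic's defaults.

```python
import numpy as np

def extract_keypoints(results):
    """Flatten MediaPipe Holistic landmarks into one fixed-length vector,
    substituting zeros for any body part that was not detected."""
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])  # 1662 values per frame
```

Zero-filling undetected parts keeps the vector length constant, so every frame feeds the network an input of the same shape.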



PROJECT SCHEDULE

Figure. Gantt chart



REFERENCES
[1] A. Toshev and C. Szegedy, "DeepPose: Human Pose Estimation via Deep Neural Networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
link: https://arxiv.org/abs/1312.4659
[2] G. Anantha Rao et al., "Deep Convolutional Neural Networks for Sign Language Recognition," Conference on Signal Processing and Communication Engineering Systems (SPACES), 2018.
link: https://ieeexplore.ieee.org/abstract/document/8316344/references#references
[3] L. Pigou et al., "Sign Language Recognition Using Convolutional Neural Networks," Springer, 2015.
link: https://link.springer.com/chapter/10.1007/978-3-319-16178-5_40
[4] S. Gattupalli et al., "Evaluation of Deep Learning Based Pose Estimation for Sign Language Recognition," Proceedings of the 9th ACM International Conference on Pervasive Technologies Related to Assistive Environments, 2016.
link: https://dl.acm.org/doi/10.1145/2910674.2910716
THANK YOU

