SIGN LANGUAGE DETECTION
INSTITUTE OF ENGINEERING
LALITPUR ENGINEERING COLLEGE
Presenter:
AMRIT SAPKOTA [076/BCT/05]
ASMIT OLI [076/BCT/43]
NISCHAL MAHARJAN [076/BCT/20]
SAKSHYAM ARYAL [076/BCT/29]
May 29, 2024
OUTLINE
• Motivation
• Methodology
MOTIVATION
• There is a need for technology that can detect and interpret sign language to facilitate
better communication and inclusion.
• This project aims to develop a system that can recognize and understand sign language
gestures, opening up new opportunities for communication.
• Sign language is a visual language that uses hand gestures, facial expressions, and
body movements to convey meaning.
• Deaf and hard-of-hearing individuals often face challenges in communicating with the
hearing community, leading to isolation and limited opportunities.
• The goal of this project is to create a sign language detection system using machine
learning techniques to bridge the communication gap.
PROBLEM STATEMENT
• Existing communication barriers between deaf and hearing individuals hinder effective
interaction.
• Developing an accurate and real-time sign language detection system can address
these challenges and enhance communication for the deaf community.
PROJECT OBJECTIVES
• To design and implement a system that can understand the sign language of hearing-impaired people.
• To train the model on a variety of datasets using MediaPipe and a CNN, and provide output in real time.
• The project aims to develop a computer vision-based system that can detect and
interpret sign language gestures.
• The system will focus on recognizing a set of predefined sign language gestures,
facilitating basic communication.
• The sign language detection system can be integrated into various devices, such as
smartphones, tablets, or smart glasses, enabling deaf individuals to communicate more
easily in their daily lives.
• It can be used in educational settings, where teachers and students can communicate
more effectively.
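At the core of the recognition objective, a CNN learns gesture representations by convolving small filters over input frames. As an illustrative sketch only (not the project's actual model), a single 2D convolution can be written in NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a grayscale image with one filter."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            # element-wise product of the image window and the filter, summed
            out[i, j] = float(np.sum(image[i:i+kh, j:j+kw] * kernel))
    return out
```

A real CNN stacks many such filters with learned weights, nonlinearities, and pooling; frameworks like TensorFlow or PyTorch implement this far more efficiently.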
FUNCTIONAL REQUIREMENTS
• Real-time output
• Performance and accuracy
• Gesture recognition
• Vocabulary flexibility
NON-FUNCTIONAL REQUIREMENTS
• Performance Requirements
• Scalability
• Reliability
• Usability
• Maintainability
METHODOLOGY
THEORETICAL BACKGROUND
• We can create our own dataset by recording sign language gestures performed by
individuals proficient in sign language.
• There are publicly available datasets specifically curated for sign language recognition
research.
• Collaborating with sign language experts or organizations that work closely with sign
language communities can provide access to a diverse range of sign language data.
• Since the sign language data is in the form of videos, it is often necessary to segment the videos into individual sign gestures.
• Frames are extracted from each segment. These frames represent the visual
information at specific time intervals.
• Preprocessing techniques are applied to enhance the quality of the extracted frames.
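The frame-extraction and preprocessing steps above can be sketched as follows; the frame rate, sampling interval, and grayscale/normalization choices here are illustrative assumptions, not the project's exact pipeline:

```python
import numpy as np

def sample_frame_indices(n_frames, fps, interval_s):
    """Indices of frames spaced roughly interval_s seconds apart."""
    step = max(1, round(fps * interval_s))  # frames per sampling interval
    return list(range(0, n_frames, step))

def preprocess_frame(frame):
    """Convert an RGB frame (H, W, 3, uint8) to a normalized grayscale array."""
    gray = frame.mean(axis=2)                  # naive grayscale: average the channels
    return (gray / 255.0).astype(np.float32)   # scale pixel values to [0, 1]
```

For example, a 3-second clip at 30 fps sampled every 0.5 s yields frames 0, 15, 30, 45, 60, and 75. In practice a library such as OpenCV would read the frames from disk.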
• A high-quality camera is required to capture video or image data of the sign language
gestures.
• Adequate lighting is essential to ensure clear and well-illuminated visuals of the sign
language gestures.
• Depth sensors, such as Microsoft Kinect or Intel RealSense cameras, can provide
additional depth information about the scene and the position of the hands.
• The system can be deployed on various platforms such as Android, iOS, and the web.
• MediaPipe is used to extract the necessary key points of the face, hands, and pose.
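MediaPipe's Holistic solution exposes face, hand, and pose landmarks as lists of points with x, y, z coordinates. A hedged sketch of turning them into a fixed-length feature vector for the classifier (landmarks are modeled here as plain (x, y, z) tuples, and zero-filling undetected parts is an assumed convention, not MediaPipe's API):

```python
def flatten_landmarks(landmarks, n_points):
    """Flatten landmarks into one feature vector; zero-fill if the part was not detected."""
    if landmarks is None:  # e.g. the hand is out of frame
        return [0.0] * (n_points * 3)
    return [coord for point in landmarks for coord in point]
```

For a hand (21 landmarks in MediaPipe), this yields a 63-value vector; concatenating the face, hand, and pose vectors gives one fixed-size input per frame.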