
e-ISSN: 2582-5208

International Research Journal of Modernization in Engineering Technology and Science


(Peer-Reviewed, Open Access, Fully Refereed International Journal)
Volume: 05 / Issue: 04 / April 2023    Impact Factor: 7.868    www.irjmets.com
REAL TIME SIGN LANGUAGE DETECTION
Prof. Mrs. Maheshwari Chitampalli*1, Dnyaneshwari Takalkar*2, Gaytri Pillai*3,
Pradnya Gaykar*4, Sanya Khubchandani*5
*1Asst. Professor, Dept. of Computer Engineering, Dr. D. Y. Patil Institute of Engineering, Management and
Research, Akurdi, Pune, Maharashtra, India
*2,3,4,5Dept. of Computer Engineering, Dr. D. Y. Patil Institute of Engineering, Management and Research, Akurdi,
Pune, Maharashtra, India
DOI : https://www.doi.org/10.56726/IRJMETS36648
ABSTRACT
This project focuses on the development of a sign language detection system using computer vision techniques.
Sign language is a vital means of communication for the deaf and hard-of-hearing community, and this system
aims to bridge the communication gap by automatically recognizing sign language gestures and translating
them into text. The proposed system uses a camera to capture images of a person signing, and then processes
the image frames to detect and recognize hand gestures. The system utilizes algorithms to classify the detected
hand gestures and map them to corresponding words or phrases in sign language.
Keywords: Sign language, Detection, Recognition, Computer vision, Image classification, Performance
evaluation, Accuracy.
I. INTRODUCTION
Sign language is a visual language that is primarily used by people who are deaf or hard of hearing. It involves
using hand gestures, facial expressions, and body language to convey meaning and communicate. While sign
language is an important means of communication for the deaf and hard of hearing community, it can be
challenging for those who do not know the language to understand and communicate with them. In recent
years, there has been a growing interest in developing sign language detection and recognition systems using
computer vision and machine learning techniques. These systems have the potential to improve accessibility
and communication for the deaf and hard of hearing community, by automatically recognizing sign language
gestures and translating them into text or speech.
II. MOTIVATION
The motivation behind this sign language detection project is to develop a system that can automatically detect
and recognize sign language gestures in real-time. Such a system has the potential to enhance communication
and accessibility for the deaf and hard of hearing community by providing an intuitive and easy-to-use interface
for communication with others who do not know sign language. Additionally, the development of a sign
language detection system can help to promote inclusivity and diversity by providing a tool that can bridge the
communication gap between people who use sign language and those who do not. It can also facilitate learning
and education of sign language for people who are interested in learning the language. Overall, the
development of a sign language detection system using computer vision and machine learning techniques can
have a significant impact on the lives of people who are deaf or hard of hearing, and can contribute to building a
more inclusive and accessible society.
III. METHODOLOGY
1. Identify the problem: Define the specific problem you want to solve, such as detecting the letters of the
alphabet in American Sign Language (ASL).
2. Collect a dataset: Gather a dataset of images or videos showing the ASL alphabet gestures. The dataset should
include examples of each gesture that you want the system to recognize.
3. Preprocess the dataset: Clean and preprocess the dataset by removing any irrelevant information,
normalizing the data, and converting it to a format that can be used by your machine learning model.
4. Segment the gestures: Use computer vision techniques, such as background subtraction or skin color detection, to segment the gestures from the background (a minimal sketch of skin-color segmentation appears after this list).

5. Extract features: Extract relevant features from the segmented gestures, such as the position and movement
of the hand, finger shape, and palm orientation.
6. Train the machine learning model: Choose a suitable machine learning algorithm, such as a Convolutional
Neural Network (CNN), and train it on the extracted features from the dataset.
7. Test the machine learning model: Use a separate testing dataset to evaluate the performance of the trained
model. This will help you determine the accuracy and precision of the model.
8. Deploy the system: Once you are satisfied with the performance of the model, deploy it in an application that reads from a webcam or smartphone camera, allowing users to input ASL alphabet gestures and receive the corresponding text output.
9. Evaluate the results: After deploying the system, monitor its performance and continue to collect data to
improve the accuracy and precision of the model over time.
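
As a minimal illustration of step 4, the sketch below segments a hand from the background using a simple HSV skin-color threshold in OpenCV. The paper does not prescribe specific values, so the threshold bounds here are assumptions that would need tuning for different skin tones and lighting conditions.

```python
import cv2
import numpy as np

def segment_hand(frame_bgr):
    """Segment skin-colored regions from a BGR camera frame.

    The HSV bounds below are illustrative assumptions, not values
    from the paper; they must be tuned per camera and lighting.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 30, 60], dtype=np.uint8)     # assumed lower skin bound
    upper = np.array([20, 150, 255], dtype=np.uint8)  # assumed upper skin bound
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening/closing removes small speckles and fills holes.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep only the skin-colored pixels of the original frame.
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
```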
IV. LITERATURE SURVEY
Here is a literature survey of some recent research on sign language detection:
1. "Sign Language Recognition: A Comprehensive Review" by A. Kumar et al. (2022) - This paper provides a
comprehensive review of sign language recognition techniques and recent advancements in this field.
2. "Sign Language Recognition with Deep Learning: A Systematic Review" by M. Sun et al. (2021) - This paper
provides a systematic review of deep learning techniques used for sign language recognition.
3. "Real-time Sign Language Detection and Recognition using Machine Learning Techniques" by S. Saha et al.
(2021) - This paper proposes a real-time sign language detection and recognition system using machine
learning techniques.
4. "Sign Language Recognition using 3D Convolutional Neural Networks" by K. T. Chakraborty et al. (2021) -
This paper proposes a sign language recognition system using 3D convolutional neural networks.
5. "Fingerspelling Recognition in American Sign Language using Convolutional Neural Networks" by A. Subedi
et al. (2020) - This paper proposes a system for recognizing fingerspelling in American Sign Language using
convolutional neural networks.
6. "Dynamic Sign Language Recognition using Spatiotemporal Features and Deep Learning" by C. Zhang et al.
(2020) - This paper proposes a dynamic sign language recognition system using spatiotemporal features and
deep learning.
7. "Sign Language Recognition with Hybrid CNN-HMM Model" by H. Wu et al. (2019) - This paper proposes a
hybrid CNN-HMM model for sign language recognition.
V. MODELING AND ANALYSIS
1. System Architecture
The system architecture comprises image acquisition; hand detection and tracking on the captured frames; feature extraction; and image recognition using the trained dataset, which is prepared at the time of model building. After these stages, the final output is produced in the form of text.

Fig -1: System Architecture
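
To make the stage ordering concrete, here is a minimal skeleton of that pipeline; the helper functions are hypothetical placeholders standing in for the stages of Fig. 1, not code from the paper.

```python
def run_pipeline(camera, model):
    """Hypothetical end-to-end flow mirroring Fig. 1:
    acquisition -> hand detection/tracking -> features -> recognition -> text."""
    frame = camera.read()                  # 1. image acquisition
    hand = detect_and_track_hand(frame)    # 2. hand detection and tracking (placeholder)
    features = extract_features(hand)      # 3. feature extraction (placeholder)
    label = model.predict(features)        # 4. recognition using the trained dataset
    return label_to_text(label)            # 5. final output in the form of text (placeholder)
```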


2. Proposed system
The proposed system is a real-time system in which live sign gestures are processed using image processing. Classifiers are then used to differentiate the various signs, and the translated output is displayed as text.
3. Proposed system for sign language detection
A proposed system for sign language detection would involve multiple components working together to recognize and interpret sign language gestures. Here is a possible overview of the components:
Video capture: A camera or webcam captures video footage of the signer's hand gestures and facial expressions.
Hand detection: Computer vision techniques are used to identify the signer's hand(s) in the video footage, even if they are partially obscured or moving quickly.
Hand tracking: Once the hand(s) have been detected, computer vision algorithms track their movements over time, allowing the system to recognize when a sign has started and ended.
Feature extraction: Features such as hand shape, hand movement, and facial expressions are extracted from the video footage and used to classify the sign being performed.
Sign language recognition: Machine learning algorithms, such as deep neural networks, are trained on a large dataset of sign language videos and annotations, allowing the system to recognize signs with high accuracy.
Translation: Once the sign has been recognized, the system can translate it into spoken or written language for the benefit of those who do not know sign language.
User interface: The system can be designed with a user-friendly interface that displays the recognized sign and its translation, allowing the user to communicate easily with others.
Overall, this system would require a combination of computer vision, machine learning, and natural language processing techniques to accurately recognize and interpret sign language gestures. It could be integrated into a variety of applications, such as video conferencing software, education platforms, and mobile apps for communication.
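
The sketch below covers the first two components, video capture and hand detection, assuming OpenCV for capture and Google's MediaPipe Hands for detection. The paper does not name these libraries, so treat them as one possible choice rather than the authors' implementation.

```python
import cv2
import mediapipe as mp

# MediaPipe Hands returns 21 (x, y, z) landmarks per detected hand,
# which can serve as input features for the downstream sign classifier.
hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for landmarks in results.multi_hand_landmarks:
            mp.solutions.drawing_utils.draw_landmarks(
                frame, landmarks, mp.solutions.hands.HAND_CONNECTIONS)
    cv2.imshow("hand detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```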
VI. RESULTS AND DISCUSSION
Accuracy: Model accuracy is defined as the number of classifications a model correctly predicts divided by the total number of predictions made. It is one way of assessing the performance of a model. The model accuracy for our proposed model averages 95%.

Fig -2: Accuracy of the model
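
As a concrete illustration of this definition (a generic sketch, not the paper's evaluation code), accuracy reduces to the following computation:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Example: 19 of 20 test gestures classified correctly gives 0.95.
print(accuracy(["A"] * 20, ["A"] * 19 + ["B"]))  # 0.95
```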


Training module: In machine learning, training a model means feeding a machine learning (ML) algorithm sufficient training data to learn from. For our proposed system, the training module takes 30 images in total, of which 24 images are used for training and 6 images for testing.
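
The sketch below reproduces that 24/6 split with a small CNN classifier in Keras. The paper does not specify its architecture or image format, so the 64x64 grayscale inputs, the layer sizes, and the 26-class assumption (one per ASL alphabet letter) are all illustrative.

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 26  # assumption: one class per ASL alphabet letter

# Placeholder arrays standing in for the 30 collected gesture images
# (24 for training, 6 for testing, as described above).
images = np.random.rand(30, 64, 64, 1).astype("float32")  # hypothetical 64x64 grayscale
labels = np.random.randint(0, NUM_CLASSES, size=30)
x_train, y_train = images[:24], labels[:24]
x_test, y_test = images[24:], labels[24:]

# A deliberately small CNN, in the spirit of methodology step 6.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, verbose=0)
_, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(f"test accuracy: {test_acc:.2f}")
```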

Fig -3: Image Detection


VII. FUTURE SCOPE
Sign language recognition has a further branch known as continuous sign language recognition, which takes successive frames in real time and predicts words by detecting continuous gestures. Hence, this project can be extended in that direction, so that words, and subsequently sentences, can be formed from the continuous gestures performed. In addition, a dataset containing images of people with different skin tones and under different lighting conditions is required in order to develop a robust algorithm that serves all users.
VIII. CONCLUSION
Gesture recognition is a field of study with numerous applications, including sign language recognition, remote-control robotics, and virtual reality human-computer interfaces. Nevertheless, occlusion of the hand, affine transformations, database scalability, varying background illumination, and high computing cost remain challenges to building an accurate and flexible system. With the aid of this technology, we can enhance the lives of many hearing-impaired individuals by allowing them to communicate freely, work in different settings without the need for an interpreter, and live their lives without having to rely on others to translate for them so that others can understand them.
IX. REFERENCES
[1] Jinalee Jayeshkumar Raval, Ruchi Gajjar, "Real-time Sign Language Recognition using Computer Vision", ICPSC, 2021.
[2] Sanket Bankar, Tushar Kadam, Vedant Korhale, A. A. Kulkarni, "Real Time Sign Language Recognition Using Deep Learning", IRJET, 2022.
[3] Zhaoyang Yang, Zhenmei Shi, Xiaoyong Shen, Yu-Wing Tai, "SFNet: Structured Feature Network for Continuous Sign Language Recognition", arXiv preprint arXiv:1908.01341v1, 2019.
[4] Ashish S. Nikam, Aarti G. Ambekar, "Sign Language Recognition Using Image Based Hand Gesture Recognition Techniques", Online International Conference on Green Engineering and Technologies, 2016. doi: 10.1109/GET.2016.7916786.
[5] G. Rajesh, X. Mercilin Raajini, K. Martin Sagayam, Hien Dang, "A statistical approach for high order epistasis interaction detection for prediction of diabetic macular edema", Informatics in Medicine Unlocked, Volume 20, 2020, 100362, ISSN 2352-9148.
[6] Steve Daniels, Nanik Suciati, Chastine Fatichah, "Indonesian Sign Language Recognition using YOLO Method", IOP Conference Series: Materials Science and Engineering, 1077, 012029, 2021. doi: 10.1088/1757-899X/1077/1/012029.
[7] Abdullah Mujahid, Mazhar Awan, Awais Yasin, Mazin Mohammed, Robertas Damasevicius, Rytis Maskeliunas, Karrar Hameed, "Real-Time Hand Gesture Recognition Based on Deep Learning YOLOv3 Model", Applied Sciences, 11, 4164, 2021. doi: 10.3390/app11094164.
[8] About.almentor.net, 2020, "The Deaf And Mute – Almentor.Net", [online].