
SEMINAR

ON

Voice Based Sign Language Detection For Dumb People Communication Using Machine Learning

GUIDE: MS SARANYA RAJ S, ASSISTANT PROFESSOR, DEPT. OF CSE, VAST TC
PRESENTED BY: APSARA R, VAK21CS015, S7 CSE, VAST TC

DATE: 03/09/2024
CONTENTS
 INTRODUCTION

 LITERATURE SURVEY

 EXISTING SYSTEM

 METHODOLOGY

 SYSTEM ARCHITECTURE

 CONCLUSION

 FUTURE ENHANCEMENT

 REFERENCES
INTRODUCTION

 Approximately 2.78% of the population in our nation cannot speak.

 Speech has been the primary mode of interaction among people since ancient times.

 Despite advances in science and technology that have enhanced human life, there still
remain individuals facing challenges in communication.

 For those with speech impairments, sign language based on hand movements provides
a means to communicate.

 Sign language enables interaction between the hearing and visually impaired and those
who can see and hear.
 The study proposes a novel Feed Forward Neural Network (FFNN) prototype for automatic
sign language recognition.

 The system aims to improve communication between individuals with hearing, speech, or
visual impairments and those without such challenges.

 The approach involves hand signal feature point extraction using FFNN, combined with Hand
Gesture Recognition and Voice Processing through Hidden Markov Models (HMM).

 The goal is to assist individuals without such impairments in communicating more effectively with those who are hearing, speech, or visually challenged.
LITERATURE SURVEY

1. American sign language based finger spelling recognition using K-Nearest Neighbors classifier
   Authors: Aryanie, D., & Heryadi, Y. | Year: 2019
   Advantage: Performs best when the full-dimensional features are considered.
   Disadvantage: Not efficient when the dataset is large.

2. Using deep convolutional networks for gesture recognition in American Sign Language
   Authors: Bheda, V., & Radpour, D. | Year: 2019
   Advantage: Considers the alphabet as well as digits.
   Disadvantage: Prone to differences in light, skin color, and other differences in the environment.

3. Sign language recognition system using convolutional neural network and computer vision
   Authors: Murali, R. S. L., Ramayya, L. D., & Santosh, V. A. | Year: 2020
   Advantage: High accuracy of 98%.
   Disadvantage: Considers only 10 letters of the alphabet.
EXISTING SYSTEM

 The system involves multiple processes and methods for generating its output.

 The system achieves a comparatively low accuracy rate of only 80%.

 The exact classification of the hand gesture signal is not done.

 The output of this process is a hand signal converted into a number or text.

 The existing system has drawbacks such as misidentification of video units and insertion
of meaningless units between semantic units.
METHODOLOGY
 Utilizes a supervised learning approach to improve hand-based gesture
recognition for better communication.

 Implements a feed forward neural network to classify multiple gesture symbols and to match the trained features against the recognized symbols.

 Employs Hidden Markov Model (HMM) algorithms to connect the speech and hand-gesture models, converting the recognized text into voice for effective communication.

 Integrates a median filter to minimize unwanted noise and remove background clutter from images (a minimal filtering sketch follows this list).

 Develops an easy-to-use chat application for the deaf and blind, converting
speech signals into text and enhancing communication in noisy environments.
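
As a rough illustration of the filtering step above: a minimal sketch, assuming OpenCV, an assumed 5x5 kernel, and a placeholder file name (the seminar does not specify these values).

```python
import cv2

# Load one captured frame (the file name is a placeholder).
frame = cv2.imread("frame_001.png")

# Median filtering suppresses salt-and-pepper style noise and small
# background clutter while preserving edges; the kernel size 5 is an
# assumed, commonly used value.
denoised = cv2.medianBlur(frame, 5)

cv2.imwrite("frame_001_denoised.png", denoised)
```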
SYSTEM ARCHITECTURE
Video Acquisition

 Utilizes a webcam to capture video under the available light source, converting the visual scene into video data for analysis.

 Leverages a webcam to observe and record finger and hand movements for
gesture recognition.

 Captures and processes video data for further analysis or integration with other systems (a minimal capture sketch follows below).
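
A minimal sketch of the acquisition step, assuming OpenCV and the default webcam (device index 0); the preview window and exit key are illustrative choices.

```python
import cv2

# Open the default webcam (device index 0 is an assumption).
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()                    # one frame of the hand/finger movement
    if not ok:
        break
    cv2.imshow("Video Acquisition", frame)    # live preview of the captured video
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press 'q' to stop capturing
        break

cap.release()
cv2.destroyAllWindows()
```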
Frame Conversion

 Frame conversion is the process in which the video is split into individual frames.

 For example, a 60-second video sampled at one frame per second yields 60 frames.

 Each frame is treated as an image in which the hand symbols appear (a frame-extraction sketch follows below).
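
A frame-extraction sketch, assuming OpenCV and a placeholder file name; it samples one frame per second, so a 60-second clip yields roughly 60 frames, matching the example above.

```python
import cv2

cap = cv2.VideoCapture("gesture_clip.mp4")    # placeholder video file
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30    # fall back to 30 if the FPS is unknown

index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % fps == 0:                      # keep one frame per second of video
        cv2.imwrite(f"frame_{saved:03d}.png", frame)
        saved += 1
    index += 1

cap.release()
print(f"Extracted {saved} frames")
```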
Gaussian Feature Extractor

 Implements a Gaussian-based feature extractor to precisely identify pixel points in an image.

 Feature extraction is the process of extracting feature points from the images.

 Feature points include outline coordinate points, morphological points, texture, color, etc.

 These feature points are used to identify hand gestures from the images (a minimal extraction sketch follows below).
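
One plausible reading of this step, sketched below with OpenCV: Gaussian smoothing followed by contour and Hu-moment shape features. The seminar does not spell out the exact features, so the thresholding method, the largest-contour assumption, and the choice of Hu moments are all assumptions for illustration.

```python
import cv2
import numpy as np

def extract_features(frame):
    """Return a small shape-feature vector for one frame (illustrative only)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)           # Gaussian noise suppression
    _, mask = cv2.threshold(smoothed, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(7)                                 # no hand found in this frame
    hand = max(contours, key=cv2.contourArea)              # assume the largest blob is the hand
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()        # 7 rotation-invariant shape features
    return hu
```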
Hand Gesture Detection

 For hand gesture recognition, a Feed Forward Neural Network (FFNN) is used.

 Train the FFNN on the extracted Gaussian features.

 Classify gestures based on learned patterns and feature points.

 Ensure accurate recognition by validating with diverse gesture samples.

 Implement real-time processing for immediate feedback.

 The output from the FFNN is the recognized hand gesture, which is then converted to text output (a minimal recognition-loop sketch follows below).
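
A sketch of the real-time recognition loop described above. It assumes a trained classifier `ffnn` (see the model sketch in the next section), the hypothetical `extract_features` helper from the feature-extraction sketch, and an illustrative label list; none of these names come from the seminar itself.

```python
import cv2

LABELS = ["hello", "yes", "no", "thanks"]        # illustrative gesture classes

def recognize_stream(ffnn):
    """Classify webcam frames and print the recognized gesture as text."""
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.medianBlur(frame, 5)          # noise reduction, as in the methodology
        features = extract_features(frame)        # hypothetical helper from the earlier sketch
        label = int(ffnn.predict(features.reshape(1, -1))[0])
        print("Recognized gesture:", LABELS[label])
        cv2.imshow("Hand Gesture Detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):     # press 'q' to stop
            break
    cap.release()
    cv2.destroyAllWindows()
```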
Feed Forward Neural Network

 Feedforward Neural Network (FFNN) classifies hand gestures based on input features.

 Architecture tailored for multi-class classification.

 Training using labelled hand gesture dataset.

 Effective for recognizing complex hand postures (a model-training sketch follows below).
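
A minimal model-training sketch, assuming scikit-learn's MLPClassifier as the feed-forward network; the random placeholder data stands in for the labelled gesture features, and the hidden-layer sizes are illustrative, not values from the seminar.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data standing in for extracted features:
# 400 samples of 7 shape features each, 4 gesture classes (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 7))
y = rng.integers(0, 4, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# A feed-forward network with two hidden layers for multi-class gesture classification.
ffnn = MLPClassifier(hidden_layer_sizes=(128, 64), activation="relu",
                     max_iter=500, random_state=42)
ffnn.fit(X_train, y_train)

print("Validation accuracy:", ffnn.score(X_test, y_test))
```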


Voice Conversion

 Text output is converted to voice output using Hidden Markov Model (HMM).

 A simple chat application is built by converting the text into the matching speech signals.

 This is especially useful for blind people (a minimal text-to-speech sketch follows below).
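
The slides attribute text-to-voice conversion to an HMM-based synthesizer; as a stand-in for that step, the sketch below uses the off-the-shelf pyttsx3 engine so the chat flow can be tried end to end. This is a swapped-in engine for illustration only, not the HMM synthesizer itself (an HMM modelling sketch follows in the next section).

```python
import pyttsx3

def speak(text: str) -> None:
    """Read recognized gesture text aloud using an off-the-shelf TTS engine."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

# e.g. after gesture recognition produces the word "hello":
speak("hello")
```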


Hidden Markov Model
 Provides a statistical framework for modeling speech signals, which is a crucial part of
the Text-To-Speech (TTS) process.

 HMM represents speech as a sequence of hidden states.

 Each phoneme is modeled as an HMM, generating acoustic features.

 HMMs generate parameters for synthesizing speech.

 Incorporates pitch, duration, and energy for natural-sounding speech (a minimal HMM fitting sketch follows below).
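
A minimal sketch of the HMM side, assuming the hmmlearn library; the random frames stand in for the MFCC-style acoustic features of one phoneme, and the number of states is illustrative.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Placeholder acoustic features: 200 frames of 13 MFCC-like coefficients,
# standing in for the observed frames of a single phoneme (illustrative only).
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 13))

# Model the phoneme's acoustic frames with a 5-state HMM with Gaussian emissions.
phoneme_hmm = GaussianHMM(n_components=5, covariance_type="diag", n_iter=100)
phoneme_hmm.fit(frames)

# The log-likelihood scores how well new frames match this phoneme model;
# in synthesis, such per-phoneme models generate the acoustic parameters.
print("Log-likelihood:", phoneme_hmm.score(frames))
```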


CONCLUSION

 The proposed system demonstrates effective real-time hand gesture recognition using background subtraction and feature matching techniques.

 Achieves precise gesture classification through Gaussian feature extraction and trained feature matching.

 Enhances human-computer interaction (HCI) by enabling intuitive and efficient communication through hand gestures.

 The system's real-time capabilities make it suitable for dynamic environments and practical applications.
FUTURE ENHANCEMENT

 Refine algorithms and feature extraction methods to achieve higher accuracy in gesture recognition.

 Incorporate advanced machine learning models and techniques, such as deep learning, to further enhance performance.

 Improve noise reduction techniques to handle various environmental conditions and backgrounds more effectively.
REFERENCES

1) Aryanie, D., & Heryadi, Y. (2019, May). American sign language-based finger-spelling recognition using k-Nearest Neighbors classifier. In 2015 3rd International Conference on Information and Communication Technology (ICoICT) (pp. 533-536). IEEE.

2) Bheda, V., & Radpour, D. (2019). Using deep convolutional networks for gesture recognition in American Sign Language. arXiv preprint arXiv:1710.06836.

3) Murali, R. S. L., Ramayya, L. D., & Santosh, V. A. (2020). Sign language recognition system using convolutional neural network and computer vision. International Journal of Engineering Innovations in Advanced Technology, ISSN: 2582-1431.
THANK YOU
