Sign-Lang (1) APSARA3
ON DATE: 03/09/2024
CONTENTS
INTRODUCTION
LITERATURE SURVEY
EXISTING SYSTEM
METHODOLOGY
SYSTEM ARCHITECTURE
CONCLUSION
FUTURE ENHANCEMENT
REFERENCES
INTRODUCTION
Speech has been the primary mode of interaction among people since ancient times.
Despite advances in science and technology that have enhanced human life, there still
remain individuals facing challenges in communication.
For those with speech impairments, sign language based on hand movements provides
a means to communicate.
Sign language enables interaction between people with hearing, speech, or visual
impairments and those without such impairments.
The study proposes a novel Feed Forward Neural Network (FFNN) prototype for automatic
sign language recognition.
The system aims to improve communication between individuals with hearing, speech, or
visual impairments and those without such challenges.
The approach involves hand signal feature point extraction using FFNN, combined with Hand
Gesture Recognition and Voice Processing through Hidden Markov Models (HMM).
The goal is to help individuals without impairments communicate more effectively
with those who are hearing, speech, or visually challenged.
LITERATURE SURVEY
The system involves multiple processes and methods for generating output.
The output of this process is a hand signal converted into a number or text.
EXISTING SYSTEM
The existing system has drawbacks such as misidentification of video units and the
insertion of meaningless units between semantic units.
METHODOLOGY
Utilizes a supervised learning approach to improve hand-based gesture
recognition for better communication.
Employs Hidden Markov Model (HMM) algorithms to convert between speech and hand-
based gesture models for effective communication.
Develops an easy-to-use chat application for the deaf and blind, converting
speech signals into text and enhancing communication in noisy environments.
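The supervised training step described above can be sketched as follows. This is a minimal illustration, not the system's actual model: the "gesture feature" vectors, labels, layer sizes, and learning rate are all made-up placeholders, and the network is a small feed-forward classifier trained with plain gradient descent.

```python
import numpy as np

# Illustrative sketch only: toy two-class "gesture feature" data and a
# tiny feed-forward network trained with gradient descent. All values
# here are assumptions, not the original system's parameters.
rng = np.random.default_rng(0)

# Two toy gesture classes, 20 samples each, 3 features per sample.
X = np.vstack([rng.normal(0.0, 0.3, (20, 3)),
               rng.normal(1.0, 0.3, (20, 3))])
y = np.array([0] * 20 + [1] * 20)

# One hidden layer, as in a minimal feed-forward network.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)

def forward(X, W1, b1, W2, b2):
    h = np.maximum(0, X @ W1 + b1)              # ReLU hidden layer
    z = h @ W2 + b2
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)  # softmax probabilities

for _ in range(300):                            # plain gradient descent
    h, p = forward(X, W1, b1, W2, b2)
    grad_z = p.copy()
    grad_z[np.arange(len(y)), y] -= 1           # softmax cross-entropy grad
    grad_z /= len(y)
    grad_h = (grad_z @ W2.T) * (h > 0)          # backprop through ReLU
    W2 -= 0.5 * h.T @ grad_z; b2 -= 0.5 * grad_z.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h; b1 -= 0.5 * grad_h.sum(axis=0)

_, p = forward(X, W1, b1, W2, b2)
accuracy = (p.argmax(axis=1) == y).mean()
```

On this well-separated toy data the classifier reaches near-perfect training accuracy; a real system would train on extracted hand-gesture features instead.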
SYSTEM ARCHITECTURE
Video Acquisition
Leverages a webcam to observe and record finger and hand movements for
gesture recognition.
Captures and processes video data for further analysis or integration with other
systems.
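The acquisition loop might look like the sketch below. A real implementation would read frames from a webcam (for example via OpenCV's `cv2.VideoCapture`); here the capture source is simulated with synthetic frames so the sketch runs without hardware, and the frame size is an arbitrary assumption.

```python
import numpy as np

def fake_capture(n_frames, height=120, width=160):
    """Stand-in for a webcam: yield synthetic grayscale frames.

    In a real system this generator would be replaced by reads from a
    capture device; the shape and pixel values here are placeholders.
    """
    rng = np.random.default_rng(42)
    for _ in range(n_frames):
        yield rng.integers(0, 256, (height, width), dtype=np.uint8)

# Collect a short burst of frames for downstream processing.
frames = [frame for frame in fake_capture(10)]
```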
Frame Conversion
Frame conversion is the process by which the video is split into individual frames.
For example, a 60-second video sampled at one frame per second yields 60 frames.
Each frame is an image in which the hand signs are visible.
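The frame-count arithmetic from the example above is simply duration times sampling rate; the slide's 60-second example assumes one frame per second.

```python
def frame_count(duration_s, fps):
    """Number of frames produced when a clip is sampled at a given rate."""
    return int(duration_s * fps)

# The slide's example: a 60-second video at 1 frame/second -> 60 frames.
n = frame_count(60, 1)
```

The same function shows that a 2-second clip at a typical 30 fps also yields 60 frames.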
Gaussian Feature Extractor
Feature extraction is the process of extracting feature points from the images.
Feature points include outline coordinate points, morphological points, texture,
color, etc.
These feature points are used to identify hand gestures from images.
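One plausible reading of the "Gaussian feature extractor" stage is sketched below: smooth the image with a separable Gaussian kernel, then summarize it with a few scalar features (mean intensity, variance, and the intensity centroid). The exact features the original system uses are not specified in the slides, so these choices are assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, size=5, sigma=1.0):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel(size, sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def features(img):
    """Summarize a smoothed image as [mean, variance, row centroid, col centroid]."""
    s = smooth(img.astype(float))
    ys, xs = np.mgrid[:s.shape[0], :s.shape[1]]
    total = s.sum()
    return np.array([s.mean(), s.var(),
                     (ys * s).sum() / total,   # row centroid
                     (xs * s).sum() / total])  # column centroid

# Demo: a single bright pixel at (5, 5) gives a centroid at (5, 5).
img = np.zeros((10, 10))
img[5, 5] = 1.0
feat = features(img)
```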
Hand Gesture Detection
For Hand Gesture Recognition, a Feed Forward Neural Network (FFNN) is used.
The output of the FFNN is the recognized hand gesture, which is then converted to text output.
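The recognition step can be sketched as a single forward pass whose winning class index is mapped to a text label. The weights and the label set below are illustrative placeholders, not the trained parameters or vocabulary of the original system.

```python
import numpy as np

# Hypothetical gesture vocabulary; the real label set is not given in the slides.
LABELS = ["hello", "yes", "no"]

def recognize(feature_vec, W1, b1, W2, b2):
    """Forward pass of a small FFNN; return the text label of the top class."""
    h = np.maximum(0, feature_vec @ W1 + b1)   # hidden layer, ReLU
    scores = h @ W2 + b2                       # one score per gesture class
    return LABELS[int(np.argmax(scores))]

# Placeholder (untrained) weights, for shape illustration only.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 6)); b1 = np.zeros(6)
W2 = rng.normal(size=(6, 3)); b2 = np.zeros(3)

text = recognize(np.ones(4), W1, b1, W2, b2)
```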
Feed Forward Neural Network
Text output is converted to voice output using Hidden Markov Model (HMM).
A simple chat application is built by converting the speech signals into the
matching text.
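The HMM component referenced above can be illustrated with Viterbi decoding, which recovers the most likely hidden-state sequence for a sequence of observations. The states, observation symbols, and probabilities below are toy values; the slides do not specify the actual model used for voice processing.

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Return the most likely hidden-state path for observation indices `obs`."""
    logp = np.log(start) + np.log(emit[:, obs[0]])   # initial state scores
    back = []
    for o in obs[1:]:
        scores = logp[:, None] + np.log(trans)       # scores[from, to]
        back.append(scores.argmax(axis=0))           # best predecessor per state
        logp = scores.max(axis=0) + np.log(emit[:, o])
    path = [int(logp.argmax())]                      # backtrack from best final state
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]

# Toy 2-state model with made-up probabilities.
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
emit = np.array([[0.9, 0.1],
                 [0.2, 0.8]])

path = viterbi([0, 0, 1], start, trans, emit)  # -> [0, 0, 1]
```

In the pipeline described here, such a decoder would map acoustic observation sequences to linguistic units before matching them to text.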
REFERENCES
1) Aryanie, D., & Heryadi, Y. (2015, May). American Sign Language-based finger-spelling
recognition using k-Nearest Neighbors classifier. In 2015 3rd International Conference on
Information and Communication Technology (ICoICT) (pp. 533-536). IEEE.
2) Bheda, V., & Radpour, D. (2017). Using deep convolutional networks for gesture recognition in
American Sign Language. arXiv preprint arXiv:1710.06836.
3) Murali, R. S. L., Ramayya, L. D., & Santosh, V. A. (2020). Sign language recognition system
using convolutional neural network and computer vision. International Journal of Engineering
Innovations in Advanced Technology, ISSN 2582-1431.
THANK YOU