IJRAR23B3375
III. METHODOLOGY
3.4 Classification
Classification is performed by passing the preprocessed image through the pre-trained model, obtaining the class probabilities, and selecting the class with the highest probability as the prediction. The code then overlays the predicted class and confidence score on the image before streaming it as a response to the web application.
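The selection step above can be sketched as follows. This is a minimal illustration, not the paper's actual code: `probabilities` stands in for the output of the pre-trained Keras model's `predict` call on one preprocessed image, and the label names are hypothetical.

```python
def classify(probabilities, labels):
    """Return the label with the highest probability and its confidence score."""
    # argmax over the class probabilities produced by the model
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return labels[best], probabilities[best]

def annotation_text(label, confidence):
    """Format the overlay text drawn on the image before streaming."""
    return f"{label} ({confidence:.0%})"
```

For example, `classify([0.05, 0.85, 0.10], ["hello", "thanks", "yes"])` returns `("thanks", 0.85)`, which `annotation_text` renders as `"thanks (85%)"` for drawing onto the frame.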
3.5 Recognition
The code captures frames, preprocesses them, feeds them into the pre-trained Keras model for prediction, and annotates the frames with the predicted class and confidence score. The annotated frames are then streamed to the web application as a real-time video feed using Flask.
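The streaming step can be sketched as a generator that wraps each annotated frame in the multipart format browsers interpret as an MJPEG video feed. This is an assumption-laden sketch, not the paper's code: it presumes frames are already JPEG-encoded bytes (e.g. via OpenCV's `cv2.imencode`), and the Flask wiring shown in the comment is one conventional way to serve such a generator.

```python
# In a Flask route, the generator would typically be served as:
#   return Response(mjpeg_stream(frames),
#                   mimetype="multipart/x-mixed-replace; boundary=frame")

def mjpeg_stream(jpeg_frames):
    """Yield each JPEG-encoded frame wrapped in an MJPEG multipart chunk."""
    for jpeg in jpeg_frames:
        # Each chunk replaces the previous frame in the browser's video feed.
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg + b"\r\n")
```

Because the `multipart/x-mixed-replace` content type tells the browser to replace each part with the next, the client sees a continuously updating image rather than a sequence of downloads.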
IV. CONCLUSION
The sign language recognition project presents a valuable solution to bridge the communication gap between sign language users
and non-sign language users. Through the utilization of deep learning models and computer vision techniques, the project
demonstrates the ability to accurately recognize and interpret sign language gestures in real-time. The system's performance and
accuracy are enhanced by preprocessing techniques, model training, and optimization.
The project's future scope encompasses a wide range of possibilities for further development and expansion. These include
recognizing a broader range of gestures beyond sign language, supporting multiple sign languages, incorporating real-time
translation capabilities, improving the user interface, optimizing performance, developing a mobile application, expanding the
dataset, integrating with accessibility initiatives, conducting real-world testing, and fostering collaboration through open-source
contributions.
By addressing these future directions, the sign language recognition project can advance the field of accessibility and
communication technology, promoting inclusivity and empowering sign language users to engage more effectively with the world
around them. The project serves as a foundation for further research and development in sign language recognition, paving the
way for innovative applications and solutions that have the potential to positively impact the lives of individuals with hearing or
speech impairments.
Integrating the sign language recognition system with smart devices opens new possibilities for seamless and hands-free
interaction. By incorporating the system into devices such as smartwatches, smart glasses, or voice assistants, users can access its
functionality in a convenient and intuitive manner.
Continuously collecting user feedback and data to improve the gesture recognition model can enhance the system's accuracy and
adaptability. This can involve implementing user feedback mechanisms, crowdsourcing data collection, and leveraging active
learning techniques.