
Talk ‘n’ Move – THE MOTION DETECTION

(Gesture-Controlled AI Voice Assistant for Enhanced Human-Computer Interaction)

Project report submitted in partial fulfillment of the requirements for the award of the degree of
Bachelor of Technology
in
Computer Science and Engineering

Submitted By

Shrestha Paul 12021002001260

Debkanta Biswas 12021002001080

Dhurbojyoti Bhattacharjee 12021002001121

Ronik Mondal 12021002001171

Parthiv Sikdar 12021002001184

Kaustav Mukherjee 12021002001204

Soumili Ghosh 12021002001267

Shibaji Chakraborty 12021002001234

Shreyasi Hazari 12021002001107

Jeet Dutta 12021002001337

Under the guidance of

Prof. Nilanjan Chatterjee


&
Prof. Anay Ghosh

Department of Computer Science and Engineering

UNIVERSITY OF ENGINEERING & MANAGEMENT, KOLKATA

University Area, Plot No. III – B/5, New Town, Action Area – III, Kolkata – 700160.
ACKNOWLEDGEMENT

We would like to take this opportunity to thank everyone whose cooperation and encouragement throughout the course of this project has been invaluable to us.

We are sincerely grateful to our guides, Prof. Nilanjan Chatterjee and Prof. Anay Ghosh of the Department of Computer Science and Engineering, UEM, Kolkata, for their wisdom, guidance, and inspiration, which helped us see this project through and take it to where it stands now.

Last but not least, we would like to extend our warm regards to our families and peers, who have kept supporting us and always had faith in our work.

Shrestha Paul

Debkanta Biswas

Dhurbojyoti Bhattacharjee

Ronik Mondal

Parthiv Sikdar

Kaustav Mukherjee

Soumili Ghosh

Shibaji Chakraborty

Shreyasi Hazari

Jeet Dutta

TABLE OF CONTENTS

ABSTRACT
CHAPTER – 1: INTRODUCTION
CHAPTER – 2: LITERATURE SURVEY
CHAPTER – 3: PROBLEM STATEMENT
CHAPTER – 4: PROPOSED SOLUTION
CHAPTER – 5: EXPERIMENTAL SETUP AND RESULT ANALYSIS
CHAPTER – 6: CONCLUSION & FUTURE SCOPE
BIBLIOGRAPHY
ABSTRACT
This project involves the creation of a computer-vision-based hand gesture recognition system for controlling the mouse pointer with real-time hand movements. Using Python and computer vision libraries such as OpenCV, the system monitors the user's hand gestures and converts them into equivalent mouse activities such as movement, left-click, right-click, and scrolling. The project does away with the need for conventional input devices, providing a hands-free and interactive experience.

Motion detection using AI voice assistants is an innovative approach that
enhances human-computer interaction by enabling hands-free control of
mouse movements. This integration of Artificial Intelligence (AI) and
voice recognition technology allows users to navigate digital interfaces
using voice commands, eliminating the need for traditional input devices
like keyboards and mice. The system leverages advanced technologies
such as Natural Language Processing (NLP), Machine Learning (ML), and
Computer Vision to track motion and execute corresponding actions on a
screen.

The concept revolves around converting voice instructions into precise
cursor movements, enabling users to perform actions such as clicking,
scrolling, and dragging through verbal cues. The AI-driven model
processes voice commands in real-time, utilizing deep learning
techniques such as Recurrent Neural Networks (RNN) and Transformer-
based architectures to enhance accuracy and responsiveness.
Additionally, motion detection algorithms, often assisted by image
processing techniques, contribute to a seamless user experience by
analysing gestures and spatial positioning.
One of the major advantages of this technology is its potential to assist
individuals with physical disabilities, providing them with an accessible
and efficient way to interact with digital platforms. Furthermore,
industries such as gaming, virtual reality (VR), and remote work can
benefit from this hands-free control system, enhancing productivity and
user engagement.

Despite its advantages, challenges such as latency in command
execution, background noise interference, and voice recognition
accuracy still need refinement. Researchers are working on integrating
multi-modal input processing, combining voice recognition with gesture
control for improved precision. Moreover, ensuring security and privacy
in AI-driven voice motion detection remains a priority, as systems
process sensitive voice data.
In the future, AI voice-controlled motion detection systems are expected
to become more intuitive and context-aware, offering enhanced
customization based on user preferences. As advancements in AI and
human-computer interaction continue, this technology will revolutionize
accessibility, productivity, and user experience across various domains.

INTRODUCTION

Over the last few years, motion detection and gesture recognition technologies have gained momentum across various markets. As computer vision capabilities evolve and touch-free interaction grows in importance, both technologies are changing how we communicate with digital devices. From advanced gaming applications to seamless accessibility technology, gesture recognition is increasingly sought after to boost user engagement and convenience.

With the emergence of touchless technology and computer vision, new
avenues in human-computer interaction have been revealed.
Conventional input devices like mice, keyboards, and controllers are
increasingly becoming impractical in situations that demand hands-off or
intuitive control. Gesture recognition systems present an unobtrusive
and interactive option, especially in domains such as accessibility,
gaming, healthcare, and virtual reality.

Gesture recognition has received increased interest in the past few years
as it promises to offer a more immersive and convenient experience.
Through the elimination of physical input devices, users can naturally
interact with their computers via easy hand movements. This technology
is especially useful in applications such as healthcare, where contactless
interaction is paramount, and in gaming, where hand gestures can
increase the level of realism of the gaming experience.

In the gaming world, games such as car racing simulators or endless
runners, e.g., Subway Surfers, are more interactive when operated by
natural gestures. These can be used to steer, speed up, or jump, thus
enhancing the play and user experience. This adoption also minimizes
the need for external controllers, advancing towards a more dynamic
gaming experience.

In medicine, gesture recognition systems are now implemented for
touchless control in sterile conditions such as operating rooms. Surgeons can control medical images or retrieve patient records without ever having physical contact with the system, minimizing contamination risks. Rehabilitation regimens may also incorporate gesture-based control for physical therapy, so that patients can perform interactive exercises.
Automotive technology also benefits from gesture recognition. Advanced driver-assistance systems (ADAS) incorporate gesture controls to make the driving experience more comfortable and safer. Drivers can manage tasks such as adjusting the volume, accepting calls, or operating navigation systems with hand movements, reducing distraction.

This project employs a standard webcam together with Python and OpenCV to process video in real time. In contrast to costly hardware components, the system capitalizes on the computational power of contemporary devices to precisely track hand movement and interpret gestures. Several algorithms are used to identify hands, detect fingers, and study motion patterns. Recognized gestures are translated into corresponding mouse actions, offering an intuitive and responsive user experience.
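As a minimal illustration of the real-time capture loop this setup implies (the window title and mirror flip below are illustrative choices, not specifics of this report), a short Python/OpenCV sketch:

    import cv2

    # Open the default webcam (device index 0) and show frames until 'q' is pressed.
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.flip(frame, 1)  # mirror the image so on-screen motion matches the hand
        cv2.imshow("camera feed", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

Hand detection and gesture-to-mouse mapping, sketched in later chapters, would run inside this loop on each captured frame.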

Apart from its functional uses, this project also solves issues of
accessibility. People with mobility impairments might have a hard time
using conventional input devices. With the implementation of gesture
control, the system offers an accessible computing experience that can
improve productivity and autonomy among users with physical
disabilities.

Overall, this project marks an important step in human-computer interaction. By combining gesture recognition with Python and OpenCV, it offers a low-cost, user-friendly alternative to traditional input devices. With applications in accessibility, gaming, medicine, and the automotive field, the proposed system has the potential to revolutionize how users interact with digital spaces.

LITERATURE SURVEY

Motion detection using AI-powered voice assistants enables hands-free navigation of digital interfaces. The system translates voice
commands into cursor movements, enabling actions such as clicking,
scrolling, and dragging. By incorporating motion tracking algorithms
and Computer Vision, the system enhances user interaction by
enabling gesture-based control. Such functionality is particularly
beneficial for individuals with mobility impairments, offering an
accessible and efficient means of controlling devices.

Functionalities of AI Voice Assistant in Motion Detection:

1. Voice Calling: AI-powered voice assistants can facilitate hands-free communication by initiating and managing phone calls.
Through speech recognition, the system recognizes user
instructions and dials the intended contact.
2. Email Search & Management: NLP-driven email search enables
users to locate and manage emails using voice commands.
Advanced AI models analyse keywords and contextual information
to retrieve relevant emails efficiently.
3. Mouse Gesture Control: Motion tracking, combined with voice
commands, allows users to perform mouse actions without a
physical device. The AI model maps spoken instructions to
predefined gestures, enabling an intuitive experience.
4. General AI Voice Assistant Functionalities: In addition to
motion detection, AI voice assistants assist with tasks such as
setting reminders, fetching real-time information, and controlling
smart devices.

How each functionality is beneficial:

1. Voice Calling:
 Hands-free communication is particularly beneficial for individuals
with physical disabilities, allowing them to make and manage calls
effortlessly.
 Reduces distractions for professionals, such as drivers or workers
handling machinery, by enabling voice-activated dialling.
 Enhances convenience by eliminating the need to manually search
contacts and dial numbers.

2. Email Search & Management:


 Saves time by allowing users to quickly retrieve important emails
using voice commands instead of manual searching.
 Beneficial for visually impaired individuals who may find it difficult
to navigate traditional email interfaces.
 Enhances productivity by enabling hands-free email composition,
reading, and sorting, reducing dependency on keyboards and
screens.

3. Mouse Gesture Control:
 Provides an alternative input method for users with mobility
impairments who struggle with traditional mouse or touchpad
controls.
 Enhances user experience by enabling intuitive cursor movements
through voice commands, reducing the need for physical
interaction.
 Useful in scenarios where touch-based input is impractical, such as
during presentations or while using large-screen displays.

4. General AI Voice Assistant Functionalities:
 Increases efficiency by setting reminders, fetching real-time
information, and automating repetitive tasks.
 Enhances accessibility for elderly users or individuals with
disabilities, making technology more inclusive.
 Provides a seamless smart home experience by integrating with IoT
devices to control lights, appliances, and security systems through
voice commands.

Despite its benefits, motion detection using AI voice assistants faces several challenges:

 Latency in Command Execution: Real-time responsiveness is
crucial, requiring optimized AI models and efficient processing
techniques.
 Noise Interference: Background noise affects speech recognition
accuracy, necessitating the development of robust noise-cancelling
algorithms.
 Privacy and Security Concerns: AI-driven voice assistants
process sensitive voice data, highlighting the need for secure
encryption and user authentication measures.
 Multimodal Interaction: Integrating voice commands with
additional input methods such as hand gestures and eye tracking
can enhance usability.

Future advancements will focus on enhancing contextual awareness,
improving multilingual support, and integrating AI with Augmented
Reality (AR) and Virtual Reality (VR). The continued evolution of AI
voice assistants is expected to provide even more personalized and
intuitive user experiences across diverse applications, including
accessibility, gaming, healthcare, and professional workflows.

PROBLEM STATEMENT
Motion recognition combined with AI voice control has proven to be a highly promising development in human-computer interaction. This technology uses artificial intelligence, computer vision, and voice recognition to provide natural and easy-to-use interfaces. By integrating gesture recognition and voice commands, AI voice assistants give users greater control over devices and applications. This chapter reviews motion detection systems and their functions, covering voice calling, email management, search operations, and mouse gesture control applications, especially in helping disabled people.

1. Motion Detection and Gesture Recognition

Motion detection technology mainly employs cameras, sensors, and artificial intelligence (AI) algorithms to identify and recognize human movements. Various research studies have demonstrated the efficiency of computer vision methods, such as image processing, skeleton tracking, and machine learning, in precisely identifying hand and body movements. Gesture recognition algorithms tend to use convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to carry out real-time recognition and classification of gestures.

Studies show that adding gesture-based control to AI voice assistants
improves the user experience through minimized dependence on
conventional input methods. This is especially useful in situations where
the user has reduced mobility or is inclined towards free-hand operation.
Through the use of depth cameras and time-of-flight sensors, systems
are able to monitor gestures in three dimensions, thus ensuring higher
accuracy and responsiveness.

2. Voice Calling and Email Management Using AI Voice Assistants

AI voice assistants such as Siri, Google Assistant, and Alexa have already
proven themselves to be effective tools for voice calling and email
management. Recent developments have added motion detection to
further simplify these features. Research indicates that users can make,
receive, or reject calls through simple hand movements. For instance,
waving a hand can reject an incoming call, while a thumbs-up can accept
it.

In email management, AI solutions offer voice-to-text dictation and
gesture-based email organization. Studies also focus on the application
of natural language processing (NLP) algorithms to interpret voice
commands for transcription and email creation. This voice-controlled
method has been found successful for professionals who handle large
communication volumes.

3. Gesture-Controlled Mouse and Navigation

Mouse gesture control systems have attracted much attention for their
use in personal computing. By monitoring hand movements through a
webcam or dedicated sensors, these systems map gestures onto cursor
movement. Different algorithms, including Kalman filtering and optical
flow analysis, have been used by researchers to provide smooth and
accurate cursor control.
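Kalman filtering and optical flow are the techniques reported in the literature; as a simpler stand-in that conveys the same idea, the following sketch applies exponential smoothing to raw hand positions before they drive the cursor (the smoothing factor is an illustrative assumption, not a value from any cited study):

    # Exponential moving average over raw hand positions to damp cursor jitter.
    # ALPHA is an illustrative choice; a Kalman filter would replace this update step.
    ALPHA = 0.3

    def smooth(prev_xy, raw_xy, alpha=ALPHA):
        """Blend the previous smoothed position with the newest raw detection."""
        if prev_xy is None:
            return raw_xy
        return (alpha * raw_xy[0] + (1 - alpha) * prev_xy[0],
                alpha * raw_xy[1] + (1 - alpha) * prev_xy[1])

    # Usage: feed successive raw detections through smooth() to obtain damped positions.
    pos = None
    for raw in [(100, 100), (130, 98), (95, 110)]:  # stand-in detections
        pos = smooth(pos, raw)
        print(pos)

A lower smoothing factor yields steadier but laggier cursor motion, which is exactly the latency trade-off the research highlights.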

Moreover, gesture-based scrolling and clicking have been introduced to
minimize the use of conventional peripherals. Research highlights the
need to minimize latency in such systems to ensure a smooth user
experience. AI-driven calibration processes also learn from the behavior
of individual users, improving accuracy with time.

4. Search and Navigation with AI Voice Assistants

AI voice assistants provide effective search capabilities through the
processing of voice commands and displaying relevant results. Recent
advancements have incorporated gesture recognition to enhance the
experience. Searches can be performed using voice commands while
hand gestures are used to browse search results or choose options.
In addition, researchers have investigated multimodal interfaces in which both gestures and voice take part in search activities. This hybrid strategy is useful in settings such as virtual reality (VR) and augmented reality (AR), where users can opt for gesture-based input instead of conventional inputs.

5. Applications for Individuals with Disabilities

One of the greatest benefits of motion detection AI voice assistants is
their accessibility factor. For motor-disabled individuals, conventional
input processes can be impractical or infeasible to utilize. Gesture
recognition systems introduce an inclusive means of interaction through
personalized gestures to accomplish necessary activities.

Studies have shown that combining voice commands with motion
detection empowers users with limited mobility to independently
manage phone calls, emails, and computer tasks. Additionally, AI
algorithms continuously adapt to unique user movements,
accommodating varying levels of mobility. For visually impaired users, AI
voice assistants offer audio feedback, ensuring a comprehensive and
accessible user experience.

6. Challenges and Future Directions

Notwithstanding the progress, some challenges remain in the use of motion detection AI voice assistants. Varying lighting levels, background noise, and hardware limitations can affect system performance. Experts recommend developing robust algorithms that perform well across different environments.

Future advancements are likely to improve the accuracy of gesture
recognition using deep learning models and augmented reality
technologies. Also, the inclusion of adaptive learning systems will further
customize user experiences, especially for people with disabilities.
Developers are also investigating the inclusion of biometric recognition
for increased security in voice assistant interactions.

7. Conclusion

AI voice assistant-based motion detection is a revolutionary
advancement in human-computer interaction. Through support for voice
calling, email, search operations, and mouse gesture control via natural
gestures and voice commands, such systems provide better accessibility
and ease of use. Especially for disabled people, the technology promotes
independence and integration.
With ongoing development in AI, computer vision, and machine learning,
motion detection AI voice assistants are likely to become even more
responsive and intelligent. As challenges are overcome by researchers
and capabilities are broadened, the future holds great promise for
further enhancing the manner in which users communicate with digital
devices and services.

PROPOSED SOLUTION

To address the challenges and harness the benefits of motion detection
integrated with AI voice assistants, the proposed solution involves a
multi-layered approach that ensures accuracy, responsiveness, and
accessibility. The system will combine computer vision, artificial
intelligence, and natural language processing to create a seamless and
user-friendly experience. The following are the key components and
functionalities of the proposed solution:

1. Gesture Recognition Module:
 Execute gesture recognition in real time, using a webcam and OpenCV to capture hand gestures (a minimal sketch follows below).
 Utilize machine learning algorithms to recognize gestures such as swipe, wave, thumbs-up, and point.
 Tune the algorithms with adaptive learning to accommodate differences in user gestures.
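A minimal sketch of this module, assuming the MediaPipe Hands solution API together with OpenCV (landmark index 8 is the index fingertip; the confidence threshold and single-hand limit are illustrative choices):

    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(max_num_hands=1,
                                     min_detection_confidence=0.7)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            tip = results.multi_hand_landmarks[0].landmark[8]  # index fingertip
            print(f"fingertip at ({tip.x:.2f}, {tip.y:.2f})")  # normalized [0, 1] coords
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()

Gesture classification (swipe, wave, thumbs-up, point) would then be layered on top of these landmark streams.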

2. Voice Command Integration:
 Utilize AI voice assistants such as Google Assistant, Siri, or Alexa to receive and process voice commands (a minimal sketch follows below).
 Facilitate uninterrupted voice interaction for operations such as voice calls, email composition, web searches, and smart device management.
 Combine gestures and voice to enable an improved multimodal experience.
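A minimal sketch of capturing one voice command with the SpeechRecognition library (the Google Web Speech backend is used here as one common, freely available recognizer):

    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # mitigate background noise
        print("Listening for a command...")
        audio = recognizer.listen(source)
    try:
        command = recognizer.recognize_google(audio)  # speech-to-text
        print("Heard:", command)
    except sr.UnknownValueError:
        print("Could not understand the audio.")

The recognized text would then be routed to the appropriate handler (calling, email, search, or mouse control).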

3. Mouse and Pointer Control:
 Establish a gesture-driven mouse control system based on hand tracking.
 Implement mouse functionalities such as pointer movement, left and right clicks, and scrolling (a mapping sketch follows below).
 Enable customizable sensitivity settings in line with users' preferences.
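A minimal sketch of mapping a normalized fingertip position (as produced by the gesture module above) to screen coordinates with PyAutoGUI; the sensitivity parameter is an illustrative assumption:

    import pyautogui

    screen_w, screen_h = pyautogui.size()

    def move_cursor(norm_x, norm_y, sensitivity=1.0):
        """Map normalized [0, 1] hand coordinates to screen pixels and move the cursor."""
        x = min(max(norm_x * sensitivity, 0.0), 1.0) * screen_w
        y = min(max(norm_y * sensitivity, 0.0), 1.0) * screen_h
        pyautogui.moveTo(x, y)

    move_cursor(0.5, 0.5)  # center of the screen
    pyautogui.click()      # left click; rightClick() and scroll() cover the other actions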

4. Voice Calling and Email Management:
 Add gesture commands for answering, declining, or making voice and video calls (a calling sketch follows below).
 Support voice composition of emails with real-time suggestions based on NLP.
 Add a gesture-based system for rapid-action replies, forwards, or deletion of emails.
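A minimal sketch of placing an outbound call through Twilio's Python client (the credentials, phone numbers, and TwiML URL below are placeholders following Twilio's quickstart pattern, not values from this project):

    from twilio.rest import Client

    # Placeholder credentials; real values come from the Twilio console.
    client = Client("ACCOUNT_SID", "AUTH_TOKEN")
    call = client.calls.create(
        to="+15551234567",                            # placeholder recipient
        from_="+15557654321",                         # placeholder Twilio number
        url="http://demo.twilio.com/docs/voice.xml",  # TwiML instructions for the call
    )
    print("Call started:", call.sid)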

5. Search and Navigation Control:
 Allow gesture-controlled navigation of search results.
 Create a voice-controlled virtual assistant to carry out web searches.
 Add feedback mechanisms to improve accuracy and usability.

6. Accessibility Features:
 Make the system accessible for people with disabilities by providing gesture customization.
 Integrate AI-driven gesture prediction for persons with mobility impairments.
 Offer voice-activated cues for visually impaired users.

7. Security and Privacy:
 Use voice and facial recognition for user authentication.
 Encrypt data transmission for sensitive content.
 Offer customizable privacy options to control gesture and voice data.

8. Adaptive Learning and Continuous Improvement:
 Integrate AI algorithms that learn from user habits to improve gesture recognition accuracy.
 Provide periodic software updates for enhanced performance and expanded gesture libraries.
 Offer real-time analytics for monitoring usage trends and recommending improvements.

The suggested solution is intended to establish an effortless, intuitive, and inclusive interaction model for users. By integrating the strengths of motion detection and AI voice commands, it improves productivity, accessibility, and user experience across diverse applications such as personal computing, smart homes, healthcare, and gaming.

EXPERIMENTAL SETUP AND RESULT ANALYSIS
Hardware Requirements:
 Computer: Ensure the hardware has sufficient computational
resources to run the assistant smoothly.
 Microphone: Choose a quality microphone
for accurate speech input recognition.
 Camera: If implementing camera functionality, select a
suitable camera compatible with the hardware and software
setup.

Software Requirements:
 Python and Necessary Libraries: Install Python and required
libraries using package managers like pip.
 Development Environment: Set up a development
environment such as Anaconda or a virtual environment for
managing dependencies.
 VoIP Service: If incorporating calling functionality, sign up for a
VoIP service like Twilio and configure it for integration with the
assistant.

 Libraries Required:
 speech_recognition – Enables speech-to-text conversion for processing voice commands.
 pyttsx3 – Provides text-to-speech conversion for AI-generated voice responses.
 cv2 (OpenCV) – Facilitates real-time image processing for gesture recognition and motion tracking.
 mediapipe – Detects and tracks hand gestures using deep learning-based models.
 pyautogui – Automates mouse and keyboard actions based on AI-driven gestures.
 wikipedia – Retrieves summarized information from Wikipedia based on user queries.
 requests – Handles API calls to fetch external data, such as weather or news updates.
 smtplib – Enables sending automated emails via the SMTP protocol.
 twilio – Facilitates voice calling through Twilio's cloud communication API.
 tkinter – Provides a graphical user interface (GUI) for better user interaction.

Test Environment Setup:

Installation of Dependencies:
Install all required Python libraries using pip install -r requirements.txt.
Ensure API keys for weather, news, and Twilio are properly configured in the script.
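A plausible requirements.txt for the libraries listed above (PyPI package names, which differ from some import names; version pins omitted):

    SpeechRecognition
    pyttsx3
    opencv-python
    mediapipe
    PyAutoGUI
    wikipedia
    requests
    twilio

tkinter and smtplib ship with the standard Python distribution, so they do not appear in the file.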

Configuration of Gmail SMTP:

Ensure two-factor authentication (2FA) is enabled.


Generate and use an App Password instead of a regular password for
SMTP authentication.
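A minimal sketch of the SMTP flow this configuration supports (the addresses and App Password below are placeholders):

    import smtplib
    from email.mime.text import MIMEText

    msg = MIMEText("Test message from the assistant.")
    msg["Subject"] = "Assistant test"
    msg["From"] = "sender@gmail.com"       # placeholder
    msg["To"] = "recipient@example.com"    # placeholder

    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls()                                 # upgrade the connection to TLS
        server.login("sender@gmail.com", "app-password")  # App Password, not the account password
        server.send_message(msg)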

Testing Environment:
 A quiet room for voice recognition testing.
 Adequate lighting for gesture recognition via webcam.
 Open application scenarios to test app launching and closing functionalities.

Gesture Control Testing:
 Check whether the assistant accurately tracks hand gestures for mouse control.
 Evaluate click, right-click, and cursor movement accuracy.
 Measure latency in gesture recognition and execution (a timing sketch follows below).
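One way to measure per-frame latency, assuming a process_frame() function that wraps the detection-to-action pipeline (the function name is hypothetical):

    import time

    def process_frame(frame):
        """Hypothetical stand-in for the detect-gesture-and-move-cursor pipeline."""
        pass

    latencies = []
    for frame in range(100):  # stand-in for 100 captured frames
        start = time.perf_counter()
        process_frame(frame)
        latencies.append(time.perf_counter() - start)

    print(f"mean latency: {1000 * sum(latencies) / len(latencies):.2f} ms")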

CONCLUSION

Motion detection using AI voice assistants represents a significant leap
in human-computer interaction, offering hands-free accessibility and
convenience across various domains. This technology has the potential
to revolutionize industries such as healthcare, gaming, smart home
automation, and accessibility solutions for individuals with disabilities.
While challenges such as noise interference, latency, and privacy
concerns still exist, ongoing advancements in AI and machine learning
continue to improve the efficiency and security of voice-assisted motion
control systems.

With the rapid evolution of AI, future AI-powered voice assistants will
become more intuitive, context-aware, and capable of handling
complex tasks with minimal user effort. The seamless integration of
motion detection with AI voice technology will open new avenues for
innovation, ultimately reshaping the way humans interact with digital
systems. As researchers and developers refine these systems, AI-driven
voice and motion control will play a crucial role in making technology
more accessible, efficient, and user-friendly in the coming years.
The motion detection system for converting hand gestures into mouse
activities successfully demonstrates its potential as a user-friendly and
accessible interface. Our project demonstrates the feasibility of using
computer vision and machine learning techniques to develop a robust
and intuitive hand gesture recognition system. By offering an
alternative to traditional input devices, the solution promotes
inclusivity, enhances user experience, and paves the way for more
interactive computing methodologies.

It also demonstrates a functional, cost-effective gesture-recognition
system that bridges the gap between physical and touchless
interaction. By translating hand gestures into mouse commands, it offers
a versatile solution for accessibility, gaming, and healthcare, while
eliminating dependency on specialized hardware.

In conclusion, our proposed system successfully bridges the gap
between traditional input devices and modern, intuitive interaction
methods by implementing real-time gesture recognition for mouse
control. The project demonstrates significant potential in areas such as
gaming, accessibility, and medical applications, offering an innovative,
hands-free computing experience.

FUTURE SCOPE

The future of motion detection using AI voice assistants holds
great potential for enhancing accessibility, efficiency, and user
experience. With advancements in AI and human-computer
interaction, future research will focus on:

 Improved Context Awareness: AI voice assistants will
become more intelligent in understanding user intent, tone,
and situational context, leading to more precise and
relevant responses.
 Enhanced Gesture and Motion Recognition: The integration of
advanced motion sensors and deep learning models will refine
gesture-based controls, allowing for smoother and more accurate
cursor movements and interactions.
 AI-Powered Personalization: Future AI assistants will use
machine learning to adapt to individual user preferences,
optimizing performance based on behavioural patterns and voice
command history.
 Security and Privacy Enhancements: As AI assistants become
more integrated into daily life, robust security measures such as
encrypted voice processing and AI-driven authentication will be
prioritized to protect user data and privacy.
 Integration with Augmented and Virtual Reality (AR/VR): AI
voice assistants will play a crucial role in controlling AR/VR
interfaces, enabling more interactive and immersive experiences in
gaming, training simulations, and virtual collaboration.

BIBLIOGRAPHY

Speech Recognition & Text-to-Speech
Chaudhary, A., & Kothari, S. (2018). Speech Recognition Techniques: A Review. International Journal of Engineering Research & Technology (IJERT).
Google Cloud Speech API – Google Documentation.
pyttsx3 Documentation – https://pyttsx3.readthedocs.io/

Computer Vision & Hand Gesture Recognition
Zhang, Z., & Wu, W. (2021). Real-Time Hand Gesture Recognition Using Mediapipe Hands and Deep Learning. Journal of AI Research.
MediaPipe Hands API – Google Developer Documentation.
Bradski, G. (2000). The OpenCV Library. Dr. Dobb’s Journal of Software Tools.

GUI Development (Tkinter)
Grayson, J. (2000). Python and Tkinter Programming. Manning Publications.
Tkinter Documentation – https://docs.python.org/3/library/tkinter.html

Automation & API Integration
Twilio API for Calls & SMS – https://www.twilio.com/docs/
NewsAPI for Fetching News – https://newsapi.org/
OpenWeatherMap API – https://openweathermap.org/api

Mouse Control & PyAutoGUI
Sweigart, A. (2015). Automate the Boring Stuff with Python. No Starch Press.
PyAutoGUI Documentation – https://pyautogui.readthedocs.io/

Wikipedia Search API
Wikipedia API Documentation – https://pypi.org/project/wikipedia-api/