
A

PROJECT REPORT
On

Yoga Pose Detection using AI


Submitted In Partial Fulfillment of the Requirements
For the Degree of
Bachelor of Technology
In
Computer Science & Engineering
Submitted By

Parthivi Malik (2102220100129)


Md Uvesh (2102220100113)
Nupur Kumari (2102220100128)
Md Ashraf Khan (2202220109010)

Under the Supervision of


Mr. Ashish Shrivastava
(Asst. Professor)

ITS Engineering College, Greater Noida


Affiliated to

DR. A.P.J. ABDUL KALAM TECHNICAL UNIVERSITY

LUCKNOW, UTTAR PRADESH


(SESSION: May 2025)
DECLARATION

We, Parthivi Malik, Md. Ashraf Khan, Nupur Kumari, and Md. Uvesh,
hereby declare that this submission is our own work and that, to the best
of our knowledge and belief, it contains no material previously published
or written by another person, nor material which to a substantial extent
has been accepted for the award of any other degree of the university or
any other institute of higher learning, except where due acknowledgment
has been made in the text.

Signature : Signature :
Name: Parthivi Malik Name: Md. Ashraf Khan
Roll no.: 2102220100129 Roll no.: 2202220109010
Date: Date:

Signature : Signature :
Name: Nupur Kumari Name: Md. Uvesh
Roll no.: 2102220100128 Roll no.: 2102220100129
Date: Date:

Mr. Ashish Shrivastava Dr. Hariom Tyagi


(Project Guide) (Project Coordinator)

Dr. Jaya Sinha Dr. Vishnu Shama


(HOD-CSE) (Dean-CSE)
CERTIFICATE

This is to certify that the project report entitled “Yoga Pose Detection
using AI”, which is submitted by Parthivi Malik, Md. Ashraf Khan, Nupur
Kumari, and Md. Uvesh in partial fulfillment of the requirement for the
award of the degree of B. Tech. in the Department of Computer Science &
Engineering, Dr. A.P.J. Abdul Kalam Technical University, Lucknow, is a
record of the candidates’ own work carried out by them under my
supervision. The matter embodied in this report is original and has not
been submitted for the award of any other degree.

Date: Supervisor
ACKNOWLEDGEMENT

It gives us a great sense of pleasure to present the report of the B. Tech
project undertaken during our B. Tech (VIII-Semester) final year. We owe
a special debt of gratitude to Mr. Ashish Shrivastava for his constant
support and guidance throughout the course of our work. His sincerity,
thoroughness, and perseverance have been a constant source of inspiration
for us. It is only through his cognizant efforts that our endeavors have
seen the light of day.
We also take the opportunity to acknowledge the contribution of Dr.
Hariom Tyagi for his full support and assistance during the development
of this project.
We would also like to acknowledge the contribution of all faculty
members of the department for their kind assistance and cooperation
during the development of our project. Last but not least, we
acknowledge our group members for their contribution to the completion
of the project.

Signature : Signature :
Name: Parthivi Malik Name: Md. Ashraf Khan
Roll no.: 2102220100129 Roll no.: 2202220109010
Date: Date:

Signature : Signature :
Name: Nupur Kumari Name: Md. Uvesh
Roll no.: 2102220100128 Roll no.: 2102220100129
Date:
ABSTRACT

Yoga Pose Detection is a project designed to assist individuals in


performing yoga exercises correctly by detecting various poses through
image processing and machine learning techniques. This system utilizes
computer vision to analyze body postures captured via cameras or mobile
devices. By applying deep learning algorithms, the system can classify
different yoga poses with a high degree of accuracy. The key objective is
to provide users with real-time feedback on their yoga poses, allowing
them to make adjustments as necessary to ensure proper alignment and
prevent injury. This project aims to bridge the gap between yoga
practitioners and technology, creating an interactive platform that can be
accessed remotely, making yoga practices more accessible to people
worldwide. The main findings of the study indicate that the system can
effectively identify poses such as Downward Dog, Tree Pose, and Warrior
Pose. Furthermore, it provides feedback on common mistakes like
improper body alignment, which could lead to strain or injury. The
conclusion of the study shows that such a system can be a valuable tool
for both beginners and advanced practitioners, offering personalized
guidance based on individual needs. This innovation opens up new
possibilities for integrating fitness practices with digital solutions,
ultimately improving the quality and accessibility of yoga instruction.
Future work will focus on enhancing pose detection accuracy and
expanding the library of poses supported by the system.
TABLE OF CONTENTS

DECLARATION
CERTIFICATE
ACKNOWLEDGEMENTS
ABSTRACT
LIST OF TABLES
LIST OF FIGURES
LIST OF SYMBOLS
LIST OF ABBREVIATIONS
CHAPTER 1: INTRODUCTION
1.1 Introduction of the Problem
1.2 Summarize Previous Research
1.3 Researching the Problem
CHAPTER 2: LITERATURE SURVEY
CHAPTER 3: SYSTEM DESIGN
3.1 Introduction to System Design
3.2 System Architecture
3.3 Component Table
3.4 Functional Requirements
3.5 System Workflow
CHAPTER 4: METHODOLOGY AND TECHNOLOGY
4.1 Methodology
4.2 Technologies Used
4.3 Workflow
CHAPTER 5: IMPLEMENTATION AND RESULT ANALYSIS
5.1 Implementation Process
5.2 Result Analysis
5.3 Deployment and Final Testing
CHAPTER 6: CONCLUSION AND FUTURE WORK
6.1 Conclusion
6.2 Future Work
CHAPTER 7: PROGRESS SCHEDULE
REFERENCES
LIST OF TABLES

Summary of research work in pose estimation
System components and their technologies
Functional requirements of the system
Testing conditions under various parameters
Comparative results with baseline models
Progress schedule
LIST OF FIGURES

System architecture workflow
Workflow: training and inferencing phases
Training interface using KNN classification
Front page of website (live server preview)
Factors to improve accuracy
LIST OF SYMBOLS

X, Y Coordinates of body key points in 2D space
θ Angle between two body parts (for pose evaluation)
Kp Key points detected from pose estimation
D Distance between two key points
ACC Accuracy of pose classification
L Loss function during model training
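The symbols θ and D above can be illustrated with a short computation over detected keypoints. The sketch below is a minimal Python illustration; the keypoint coordinates are made up for demonstration and are not values from the system.

```python
import math

# Hypothetical 2D keypoints (x, y) as a pose estimator might return them;
# the coordinate values are illustrative placeholders only.
shoulder, elbow, wrist = (0.50, 0.30), (0.55, 0.45), (0.52, 0.60)

def distance(p, q):
    """D: Euclidean distance between two keypoints."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def joint_angle(a, b, c):
    """theta: angle at keypoint b formed by segments b->a and b->c, in degrees."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

elbow_angle = joint_angle(shoulder, elbow, wrist)
arm_length = distance(shoulder, elbow)
```

Angles computed this way are what a pose evaluator would compare against target values for a given asana.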
LIST OF ABBREVIATIONS

KNN K-Nearest Neighbours


AI Artificial Intelligence
DL Deep Learning
NLP Natural Language Processing
SQL Structured Query Language
CNN Convolutional Neural Network
SVM Support Vector Machine
UI User Interface
API Application Programming Interface
AWS Amazon Web Services
HTML Hyper Text Markup Language
CSS Cascading Style Sheets
JS JavaScript
AR Augmented Reality
CPM Convolutional Pose Machine
PAF Part Affinity Fields
3D 3 Dimensional
2D 2 Dimensional
CHAPTER - 1
Introduction
The purpose of this project report is to present the work undertaken for
the Yoga Pose Detection system. In today's world, maintaining physical
fitness and mental well-being has become essential, and yoga is a proven
means to achieve these goals. With advancements in technology,
automating the detection of yoga poses has gained considerable attention.
It not only assists practitioners in perfecting their postures but also
enables real-time correction and feedback without human supervision. In
the contemporary era, where the pursuit of holistic health encompassing
physical fitness and mental well-being has become paramount, yoga
stands as a time-honored discipline recognized for its profound benefits.
As society increasingly embraces digital innovations, there emerges a
compelling opportunity to integrate technological advancements with
traditional wellness practices. Among these innovations, automated Yoga
Pose Detection has garnered significant attention, aiming to bridge the
gap between ancient knowledge and modern-day convenience.

The objective of this project is to design and develop a Yoga Pose


Detection system that leverages the cutting-edge domains of computer
vision, machine learning, and deep learning to accurately identify and
assess various yoga postures performed by individuals. By automating
pose evaluation, the system not only facilitates self-guided improvement
but also provides real-time feedback, minimizing reliance on human
supervision. This advancement is particularly vital for practitioners who
lack access to professional instructors, thereby reducing the likelihood of
incorrect form, preventing injuries, and maximizing the efficacy of their
practice.

Yoga Pose Detection is a computer vision-based application that


leverages machine learning and deep learning technologies to recognize
different yoga poses performed by an individual. It helps users receive
guidance on their pose accuracy, thereby minimizing the risk of injury
and enhancing the benefits of their practice. This report provides a
detailed study of the system developed for accurate yoga pose detection
using an open-source pose estimation model, followed by classification
into various predefined yoga poses.
The chapter introduces the problem, reviews the previous work carried
out in the field, describes the method of researching the problem,
summarizes the key results obtained, and finally, explains the
organization of the entire report.
I. Introduction of the Problem
In the modern era, yoga has transcended being merely a traditional
practice and has become a global phenomenon for promoting physical
and mental wellness. Despite its widespread popularity, correct posture
and pose alignment remain crucial challenges for practitioners. Incorrect
postures can lead to ineffective results or even physical injuries.
Manually correcting yoga poses typically requires the physical presence
of a trained instructor. However, with the integration of artificial
intelligence and computer vision technologies, there arises an opportunity
to automate the process of pose detection and validation. This project
aims to address this gap by developing a Yoga Pose Detection system that
uses a webcam feed to analyze, detect, and provide feedback on the yoga
poses performed by users.
In the contemporary digital era, yoga has evolved beyond its ancient
roots, emerging as a globally embraced practice for enhancing physical
vitality, psychological resilience, and holistic well-being. Despite its
widespread adoption, the critical dependency of yoga on precise postural
alignment and pose execution continues to present significant challenges,
particularly for novice practitioners. Improper form not only diminishes
the therapeutic efficacy of yoga but also exposes individuals to a
heightened risk of musculoskeletal injuries and chronic strain-related
complications.
Traditionally, the correction of yoga postures necessitates the presence of
a skilled instructor capable of providing real-time feedback based on
visual and experiential assessment. However, the logistical, financial, and
geographic limitations associated with access to expert supervision create
a substantial barrier to consistent, high-quality practice. This constraint
underscores the urgent need for scalable, technology-driven solutions
capable of delivering accurate and immediate pose validation without the
necessity for human intervention.
With the advent of Artificial Intelligence (AI), particularly Computer
Vision (CV) and Deep Learning (DL), there exists a transformative
potential to revolutionize the domain of yoga instruction. Modern
computer vision systems, empowered by convolutional neural networks
(CNNs), pose estimation frameworks, and landmark detection algorithms,
offer the ability to algorithmically model human body kinematics with
remarkable precision. In this context, the development of an automated
Yoga Pose Detection system represents a pivotal step towards
democratizing access to high-fidelity yoga guidance.
The technical challenges addressed in this work include:
 Robustness to Variability: Accounting for individual differences in
body shape, flexibility, clothing, and camera angles.
 Real-time Performance: Ensuring low-latency inference to
facilitate immediate user feedback.
 Noise Resilience: Handling occlusions, incomplete landmark
detection, and background clutter without significant degradation
of system performance.
 Pose Generalization: Enabling the model to accurately detect a
wide variety of yoga poses beyond the training distribution.
Moreover, the system adheres to non-invasive principles, requiring no
specialized hardware beyond conventional camera devices, thus ensuring
cost-effectiveness and accessibility.

II. Summarize Previous Research


 Overview of research
The field of human pose estimation has been an active area of research
within computer vision and machine learning for several decades. Early
efforts focused on model-based techniques that relied heavily on
handcrafted features and rigid body models to interpret human body
postures from images or video streams. However, these approaches were
constrained by the complexity of human anatomy and the high degree of
variability in appearance, clothing, lighting, and environmental
conditions.

With the advent of deep learning, particularly the development of


Convolutional Neural Networks (CNNs), the accuracy and robustness of
pose estimation models improved dramatically. One of the earliest
breakthroughs was the introduction of DeepPose by Toshev and Szegedy
(2014), which reimagined the pose estimation task as a direct regression
problem. This novel approach leveraged deep neural networks to predict
body joint coordinates, bypassing the need for intermediate
representations. DeepPose significantly outperformed traditional
pictorial structure models and set a foundation for future research.
Following DeepPose, researchers shifted towards part-based models,
notably the Convolutional Pose Machines (CPMs) and Stacked Hourglass
Networks, which introduced the concept of predicting heat maps for each
key body joint instead of regressing coordinates directly. These
architectures allowed models to capture spatial dependencies between
joints more effectively, resulting in higher accuracy, especially for
complex poses with occlusions or unusual limb configurations.
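The heatmap idea described above can be illustrated with a minimal sketch: each joint gets its own confidence map, and the predicted location is simply the argmax of that map. The arrays below are synthetic toys with planted peaks, not outputs of an actual CPM or Stacked Hourglass model.

```python
import numpy as np

# Toy heatmaps: one H x W confidence map per joint, in the style produced
# by heatmap-based models such as CPMs or Stacked Hourglass networks.
# All values here are synthetic, for illustration only.
H, W, num_joints = 64, 64, 3
rng = np.random.default_rng(0)
heatmaps = rng.random((num_joints, H, W)) * 0.1  # low background noise
heatmaps[0, 20, 30] = 1.0   # plant a clear peak for joint 0
heatmaps[1, 40, 10] = 1.0   # joint 1
heatmaps[2, 5, 55] = 1.0    # joint 2

def decode_heatmaps(maps):
    """Return (row, col, confidence) per joint via per-map argmax."""
    joints = []
    for m in maps:
        idx = np.unravel_index(np.argmax(m), m.shape)
        joints.append((idx[0], idx[1], float(m[idx])))
    return joints

keypoints = decode_heatmaps(heatmaps)
```

Compared with direct coordinate regression, the heatmap representation lets the network express spatial uncertainty: the confidence value at the peak doubles as a detection score.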
Another significant milestone was achieved through OpenPose (Cao et
al., 2017), which democratized real-time multi-person pose estimation by
introducing Part Affinity Fields (PAFs). Unlike previous models that
struggled with multiple persons in a frame, OpenPose created a
bottom-up pipeline that detected body parts first and then associated
them with individuals, leading to a robust multi-person detection system
with real-time capabilities.
More recently, Google’s MediaPipe Pose introduced lightweight models
optimized for mobile and edge devices without sacrificing performance.
This opened avenues for pose detection applications beyond research
labs, entering the domains of healthcare, sports analytics, gaming, and
wellness industries, including yoga pose monitoring.
In parallel, extensive research has focused on 3D pose estimation,
attempting to recover depth information from monocular images.
Methods such as V Net and Simple Baseline extended the frontier by
lifting 2D keypoints into 3D space, albeit often requiring depth sensors or
multi-view cameras for improved accuracy.
 Yoga-specific pose detection studies
While the majority of pose estimation research focused on general human
activity recognition or sports movements, targeted research towards yoga
pose recognition and correction began gaining traction only in the last
decade.

One of the pioneering works in yoga-specific detection was conducted by


Sukhija et al. (2018), who proposed a pose classification system using
handcrafted features combined with Support Vector Machines (SVMs).
Although their method demonstrated the feasibility of automatic yoga
pose detection, it suffered from scalability issues and was sensitive to
background noise and clothing variations.
Further advancements were made with the application of deep learning-
based feature extraction for yoga pose recognition. Jain and Kumar
(2020) utilized CNN-based models to classify yoga poses from a curated
dataset, showing significant improvement over classical machine learning
approaches. Their work underscored the importance of deep features in
capturing the subtle variations among similar-looking poses.
Moreover, the Yoga-82 dataset, introduced recently, provided a
standardized benchmark comprising 82 distinct yoga poses performed by
different practitioners. Researchers like Das et al. (2021) leveraged
transfer learning with architectures like InceptionV3 and MobileNetV2 to
classify these poses with high accuracy. Their experiments highlighted
the challenges posed by intra-class variability (different styles of the same
pose) and the need for robust generalization.
 Gaps in Existing Research
- Pose Complexity: Yoga involves a diverse range of poses, some of
which involve complex limb interactions, multiple body orientations
(inverted, twisted), and fine-grained alignment differences. Current
models often misclassify poses that look visually similar but differ subtly
in biomechanics.
- Dataset Limitations: Most datasets are collected in controlled
environments with good lighting and clear backgrounds. Real-world
settings, where users practice yoga at home with cluttered backgrounds or
poor lighting, pose significant challenges that existing systems often fail
to address.
- Generalization across Practitioners: Variations in body size, flexibility,
age, and skill level are often underrepresented in datasets. As a result,
models trained on idealized data may not perform well for general users,
particularly for plus-size individuals, children, or elderly practitioners.
- Real-Time Constraints: Achieving real-time pose estimation with high
precision on low-power devices remains a challenge. Trade-offs between
model complexity,
accuracy, and computational efficiency must be addressed carefully for
practical deployment.
- Correction Mechanisms: Most yoga pose detection systems provide
binary feedback (correct/incorrect) rather than offering actionable
corrections (e.g., "lift your left arm higher" or "straighten your back").
Research on interpretability and user-friendly feedback systems is still
emerging.

Table no. 1: Summary of research work in pose estimation

S.No. | Research Work | Year | Key Contribution | Limitations
1 | Toshev and Szegedy - DeepPose | 2014 | First application of deep learning for direct pose regression; pioneered end-to-end CNN-based pose estimation. | Poor performance in multi-person scenarios; lacked spatial constraints between joints.
2 | Wei et al. - Convolutional Pose Machines | 2016 | Introduced multi-stage CNNs for predicting keypoint heatmaps, improving spatial consistency. | Computationally expensive; slower inference.
3 | Newell et al. - Stacked Hourglass Networks | 2016 | Symmetric encoder-decoder structure captures fine detail across scales; high pose estimation accuracy. | Large model size; not ideal for real-time applications.
4 | Cao et al. - OpenPose | 2017 | Part Affinity Fields (PAF) approach for real-time multi-person pose estimation. | Performance drops significantly in heavy occlusion or cluttered scenes.
5 | MediaPipe Pose | 2020 | Lightweight pose estimation pipeline optimized for mobile and edge devices; real-time inference. | Struggles with rare or extreme poses; tuned primarily for fitness activities.
6 | Sukhija et al. - Yoga Pose Detection using SVM | 2018 | Early machine learning-based approach for yoga pose recognition using handcrafted features. | Limited scalability; poor robustness to noise and clothing variations.
7 | Jain and Kumar - CNN-based Yoga Pose Recognition | 2020 | Applied CNN architectures for better feature extraction and classification accuracy. | Required large datasets to prevent overfitting; computationally heavy.
8 | Das et al. - Yoga-82 Dataset Work | 2021 | Created comprehensive yoga pose dataset; used transfer learning (InceptionV3, MobileNetV2). | Dataset biases (ideal settings, limited practitioner diversity) affect real-world performance.
9 | Ghosh et al. - Feedback System for Yoga Correction | 2022 | Developed intelligent correction system based on keypoint deviations. | Feedback limited to keypoint metrics; lacked holistic pose context understanding.
III. Researching the Problem
Yoga, with its emphasis on balance, flexibility, and precision, presents
unique challenges for human pose estimation systems. While general
pose detection models have achieved significant progress, they are not
specifically designed to handle the complex postures, subtle variations,
and transitional movements inherent in yoga practices. Recognizing these
limitations, it became necessary to deeply investigate the shortcomings of
existing systems and the specific requirements of yoga pose detection.
An initial literature review was conducted to understand the strengths and
gaps of popular pose estimation models like OpenPose, MediaPipe Pose,
and PoseNet. Although these models excel at standard activities such as
running or jumping, they often struggle with yoga poses that involve non-
standard limb orientations, inversions, and intricate body alignments.
Moreover, few systems offer actionable feedback for correcting pose
errors, which is critical for yoga practitioners to avoid injuries and
improve form.
The research further identified that the datasets commonly used for
training pose estimation models, such as COCO or MPII, lack sufficient
examples of yoga poses. This data scarcity results in models that
generalize poorly when applied to yoga-specific scenarios. A focused
dataset, containing diverse body types and real-world yoga poses, was
found to be essential for improving accuracy and robustness.
During problem exploration, interviews and informal discussions with
yoga practitioners and instructors were also considered. Their insights
revealed that beyond detecting whether a pose is achieved, a good system
should identify degrees of misalignment, such as slight knee bending,
tilted hips, or uneven weight distribution, which are often overlooked by
generic models.
Thus, the research emphasized the necessity to adapt or develop pose
detection models that are sensitive to the specific demands of yoga,
including fine-grained classification, real-time feedback, and flexibility to
accommodate variations across different users. This understanding
directly shaped the objectives and methodology of the project.
CHAPTER – 2
Literature Survey
Human pose estimation has witnessed significant advancements over the
past decade, particularly with the integration of deep learning techniques.
Several foundational studies have paved the way for robust pose detection
systems that can identify keypoints on the human body with increasing
precision and speed.
Papandreou et al. [1] introduced PersonLab, a bottom-up, part-based
geometric embedding model for person pose estimation and instance
segmentation. Their work focused on handling multiple persons in
crowded scenes by associating detected parts through geometric
embeddings. Similarly, Toshev and Szegedy [2] proposed DeepPose, one
of the earliest applications of deep neural networks for direct regression
of human body keypoints, demonstrating the potential of deep learning in
this field.
Simon et al. [3] addressed the challenge of hand keypoint detection
through a multiview bootstrapping approach, enhancing accuracy where
single-view training data was scarce. Xiao et al. [4] simplified the
training of pose estimation models by proposing Simple Baselines that
still achieved competitive results, proving that even minimal architectural
changes could yield significant performance boosts.
The development of Deep Residual Networks by He et al. [5] also played
a crucial role, as residual connections helped train deeper and more
effective models, indirectly benefiting pose estimation tasks. Cao et al.
[6] introduced Part Affinity Fields in their real-time multi-person 2D pose
estimation work, which became one of the most influential bottom-up
approaches due to its real-time capabilities and robustness.
Iqbal and Gall [7] proposed the modeling of local joint-to-joint
dependencies, offering improvements over global methods, particularly in
cases of occlusion or crowding. Chang et al. [8] further explored AI
applications in fitness, emphasizing the growing interest in real-time
posture correction technologies.
In the domain of 3D pose estimation, Pavllo et al. [9] proposed the use of
temporal convolutions and semi-supervised training to estimate 3D poses
from video sequences, which is relevant for yoga pose detection, where
movement transitions are as important as static poses. Wei et al. [10]
introduced Convolutional Pose Machines, which

emphasized learning feature representations across multiple stages to


refine keypoint predictions iteratively.
Sun et al. [11] focused on high-resolution representation learning with
their HRNet model, demonstrating that maintaining high-resolution
features throughout the network leads to more accurate pose estimations.
Chen et al. [12] built upon this idea with the Cascaded Pyramid Network
(CPN), particularly improving multi-person pose detection in cluttered
images.
Guler et al. [13] introduced DensePose, which mapped all human pixels
to a 3D surface-based model, pushing the boundary from sparse keypoint
detection to dense pose estimation. Newell et al. [14] contributed
significantly with the Stacked Hourglass Networks, enabling repeated
bottom-up and top-down processing, thus refining pose predictions.
Fang et al. [15] proposed RMPE (Regional Multi-Person Pose
Estimation), offering region-based optimization techniques to enhance
multi-person pose estimation accuracy. Moreover, Chen et al. [16]
advanced semantic segmentation through the DeepLab series, which
though originally meant for segmentation tasks, laid architectural
foundations useful for pose detection tasks involving complex
backgrounds.
Martinez et al. [17] offered a simple yet effective baseline for 3D pose
estimation, emphasizing that even straightforward architectures can
achieve competitive results with careful design. Shotton et al. [18] made
early strides in real-time human pose recognition from depth images,
which is notable for its contribution towards real-time systems — an
important feature for live yoga pose feedback applications.
Finally, Wu et al. [19] explored automatic yoga posture correction using
deep learning, marking one of the few studies directly focusing on yoga-
specific pose detection and correction. Their work highlighted the unique
challenges posed by yoga, including the need for high precision in angle
detection, recognition of small body shifts, and handling a wide diversity
of complex postures.
In conclusion, the collective contributions of these works form the
backbone of current human pose estimation research. However, a specific
focus on yoga poses remains relatively underexplored. The insights from
general pose estimation studies provide a strong foundation, but the
complexities of yoga demand specialized datasets.
CHAPTER - 3
System Design
I. Introduction to System Design

System design is the blueprint and structural foundation upon which any
sophisticated technological solution is built. For the project entitled
“Yoga Pose Detection using Artificial Intelligence,” the design phase
encompasses both the architectural layout and the interrelationship
between software components and hardware elements. This chapter aims
to present a granular and systematic explanation of the design strategy
that integrates computer vision, machine learning, and user centered
interface design to enable the automated detection and classification of
yoga poses.

The system's design revolves around using machine learning techniques


and computer vision technologies to track and analyze human body
postures. This system aims to provide real-time feedback on the
correctness of yoga poses, which can be used by beginners and experts
alike for practice and improvement.

The central premise of the system’s architecture is the seamless fusion of


real-time video analysis and advanced pose classification. By leveraging
state-of-the-art models such as PoseNet and TensorFlow-based
convolutional neural networks (CNNs), the system is engineered to
deliver precise, instantaneous feedback on yoga posture correctness.
Whether the user is a novice or an advanced practitioner, the system
caters to varying levels of yoga proficiency by offering intelligent
feedback that aids in improving form and reducing the risk of injury.

The System Design section outlines the architecture and components of


the Yoga Pose Detection system. It details how the system is structured to
provide a seamless experience for users by integrating a frontend user
interface with a backend processing system. The design incorporates
machine learning algorithms for accurate pose detection, while also
ensuring scalability, efficiency, and ease of use. In this section, we will
discuss the overall system architecture, the technologies used, the
workflow of data through the system, and key modules that contribute to
its functionality. This chapter delineates the core modules constituting
the system, explicates their individual functions, and details the flow of
data from input acquisition to actionable output.

II. System Architecture


The system architecture below shows how the different modules of the Yoga Pose Detection system interact with each other. It traces the flow of information from the input (camera) to the output (feedback), passing through several key stages.
System Architecture Overview:
 Camera: Captures live video of the user performing yoga poses.
 Pose Detection Model (Pose Net): Processes each frame from the
camera feed to detect the key points of the body (like hands, feet,
etc.).
 Pose Classification: The detected poses are classified into various
yoga poses.
 Feedback: Based on pose accuracy, feedback is provided (e.g.,
"Correct Pose" or "Adjust Your Posture").
 User Interface: Displays the feedback in a user-friendly manner.

Figure 1. The workflow is divided into training and inferencing phases. It involves pose detection, classification, and triggering sound based on recognized poses.
 Camera (Input Acquisition)

-The system begins with a live camera feed, typically sourced from a webcam or an integrated camera.

-This module captures real-time video frames of the user performing yoga poses.

-The camera ensures a continuous stream of visual data to be analyzed frame-by-frame, ideally at 30 frames per second or more for smooth detection.

-For optimal performance, a minimum resolution of 720p is recommended to clearly capture body features and reduce keypoint ambiguity.

 Pose Detection Model (MediaPipe Pose / PoseNet)

-Once the frames are captured, they are passed to the pose detection module, which utilizes a pre-trained model like MediaPipe Pose or PoseNet.

-This module detects keypoints on the body, such as elbows, knees, shoulders, wrists, and hips, generating a skeletal representation of the human posture.

-Each keypoint is defined by 3D coordinates (x, y, z) and a visibility score, which helps determine the confidence in detection.

-This skeletal data serves as the raw feature set for further analysis and classification.
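As a concrete illustration of this module, the sketch below shows how MediaPipe Pose landmarks might be collected into a feature array. The helper name `landmarks_to_array` and the generator loop are illustrative choices, not the report's exact implementation; the MediaPipe and OpenCV calls follow those libraries' published APIs.

```python
import numpy as np

def landmarks_to_array(landmarks):
    """Flatten pose landmarks (objects exposing x, y, z, visibility,
    as MediaPipe Pose returns) into an (N, 4) NumPy array."""
    return np.array([[lm.x, lm.y, lm.z, lm.visibility] for lm in landmarks])

def detect_from_webcam():
    """Illustrative real-time loop; requires mediapipe and opencv-python."""
    import cv2
    import mediapipe as mp

    pose = mp.solutions.pose.Pose(static_image_mode=False)
    cap = cv2.VideoCapture(0)                # default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # shape (33, 4): x, y, z, visibility for each landmark
            yield landmarks_to_array(results.pose_landmarks.landmark)
    cap.release()
```

The resulting array is the "raw feature set" referred to above: each row is one joint, ready for angle and distance computations in the classification stage.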

 Pose Classification

-The extracted keypoints are then analyzed to identify the specific yoga pose being performed.

-This module compares the detected posture against a database of reference poses using rule-based algorithms (e.g., angle thresholds, joint distances) or machine learning classifiers.

-Examples of poses include Tree Pose, Warrior II, Cobra Pose, and Downward Dog.

-Techniques such as cosine similarity, vector alignment, or even Support Vector Machines (SVMs) may be used here for accuracy measurement.
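The cosine-similarity comparison mentioned above can be sketched as follows, assuming each pose is flattened into a single vector of keypoint coordinates. The coordinate values are illustrative, not taken from the report's dataset.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two flattened keypoint vectors:
    1.0 means identically oriented poses, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Example: a live pose compared against a stored reference pose
reference = [0.50, 0.20, 0.48, 0.40, 0.52, 0.40]   # (x, y) pairs, illustrative
live      = [0.51, 0.21, 0.47, 0.41, 0.53, 0.39]

score = cosine_similarity(reference, live)
print(f"similarity: {score:.3f}")   # close to 1.0 for well-matched poses
```

A threshold on this score (for example, accepting poses above 0.95) is one simple way to turn the similarity into a match/no-match decision.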

 Feedback System

-Based on the results of the pose classification module, a feedback mechanism is activated.

-If the detected pose matches the expected reference with high accuracy, a message like "Correct Pose" is displayed.

-In case of misalignment or poor posture, feedback such as "Adjust your hands" or "Straighten your back" is generated.

-The system may also highlight incorrect joints or suggest adjustments to improve alignment.

 User Interface (Output Display)

-The final stage involves delivering the processed output to the user via a clean and intuitive graphical user interface (GUI).

-This interface displays:

o Live webcam feed with skeletal overlays
o Detected pose name
o Real-time feedback messages
o Visual indicators such as color changes or highlighted joints

-The user interface enhances the interactive experience, allowing practitioners to receive instant corrections and improve their form in a guided manner.

The system architecture of the Yoga Pose Detection solution is modular, scalable, and adaptable. Each component—from camera input to user feedback—has been designed to work independently yet harmoniously with other modules. This modularity ensures the system can be upgraded or customized easily in future iterations, whether by improving pose recognition accuracy, enhancing user interfaces, or enabling cloud-based yoga sessions. Overall, this architecture enables a comprehensive and responsive tool for enhancing user wellness through intelligent posture guidance.

III. Component Table


Table 2. The following table outlines the key components used in the system design, their purpose, and the tools/technologies implemented for each:

Component | Technology/Tools | Purpose
Pose estimation framework | OpenPose, MediaPipe | To detect and track key body landmarks for posture analysis.
AI model | Convolutional Neural Networks (CNNs) | To classify poses based on training data and provide real-time feedback.
Input data | Images, videos | To provide visual input for pose detection.
Pose detection pipeline | OpenCV, TensorFlow | To process input data, extract key points, and pass them through the AI model.
Real-time processing | OpenCV, TensorFlow, Python | To ensure immediate pose estimation and feedback in real time.
User interface design | HTML, CSS | To create an interactive and user-friendly platform for users to view feedback.
Testing and validation | Test cases, accuracy metrics | To evaluate the performance and accuracy of the pose detection system.

IV. Functional Requirements Table


Table 3. The following table outlines the functional requirements of the
Yoga Pose Detection system. This table highlights what the system
should accomplish and how it interacts with the user:

Feature | Description
Pose detection | Detects and identifies yoga poses in real time using AI-based pose estimation.
Real-time feedback | Provides immediate feedback to the user on posture alignment to ensure proper form.
Pose classification | Classifies detected poses by comparing them to a dataset of ideal poses.
User performance tracking | Tracks and analyzes user progress over time, adapting feedback based on performance.
Error detection | Identifies misalignment or incorrect posture and alerts the user to correct it.
User interface | Provides a simple, user-friendly interface for easy navigation and interaction.
Device compatibility | Works on various devices, such as smartphones, tablets, and smart wearables.
Multi-user support | Supports multiple users, enabling personalized feedback for each individual.

V. System Workflow

The Workflow section outlines the sequential flow of operations that power the Yoga Pose Detection system, detailing how raw video input is transformed into meaningful feedback. Each step in this pipeline contributes to the system’s ability to recognize and evaluate yoga poses in real-time, ultimately guiding users toward correct posture and form. Below is a comprehensive explanation of each stage:

 Start Video Capture

-The system begins its operation by activating the device's webcam or mobile camera, initiating a continuous video stream.

-This component serves as the entry point for input data, capturing live visuals of the user performing yoga poses.

-It ensures that the user’s body remains within the frame, in a well-lit and uncluttered background for optimal detection performance.

-The system is designed to support common camera resolutions such as 720p and 1080p, balancing clarity and processing speed.

 Frame Extraction

-Once the video stream is live, the system proceeds to extract individual frames from the continuous feed.

-Each frame acts as a still image that is processed in real-time to detect the body’s pose.

-This step is critical for ensuring smooth and responsive detection, with typical frame rates ranging from 15 to 30 frames per second (FPS) depending on hardware capability.

-Efficient frame extraction allows the system to maintain low latency, which is essential for delivering real-time feedback.

 Pose Detection

-In this step, the extracted frames are sent to the pose detection engine, such as MediaPipe Pose or PoseNet, which are powered by deep learning models.

-The engine identifies 33 keypoints on the human body—covering joints like shoulders, elbows, hips, knees, and ankles.

-It builds a skeleton-like structure by connecting these keypoints, effectively modeling the user’s posture in 2D or 3D space.

-Each keypoint includes:

o (x, y, z) coordinates relative to the image size
o A visibility score indicating confidence in detection

-This skeletal map acts as the foundational input for further analysis.

 Pose Classification

-After detecting the body’s keypoints, the system passes this data to a pose classification module.

-Here, the system compares the detected pose with a reference database of known yoga poses using rule-based algorithms or machine learning techniques.

-Pose classification may involve:

o Calculating angles between joints
o Measuring distances between limbs
o Evaluating symmetry and orientation

-Classification techniques could range from:

o Simple threshold-based rules for beginners
o Advanced classifiers like Support Vector Machines (SVMs) or Convolutional Neural Networks (CNNs) for more robust accuracy.

-The result of this stage is the identified yoga pose (e.g., Tree Pose, Warrior II, Triangle Pose).
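The angle-between-joints computation above can be sketched as follows. The hip/knee/ankle coordinates and the 80-to-100-degree acceptance band are illustrative values, not thresholds taken from the report.

```python
import math

def joint_angle(a, b, c):
    """Angle (in degrees) at joint b formed by points a-b-c,
    each given as (x, y) keypoint coordinates."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    # Fold reflex angles back into the 0-180 degree range
    return 360 - ang if ang > 180 else ang

# Illustrative rule: in Warrior II the front knee is bent to roughly 90 degrees
hip, knee, ankle = (0.30, 0.60), (0.50, 0.60), (0.50, 0.80)
angle = joint_angle(hip, knee, ankle)
print(f"knee angle: {angle:.1f} deg")   # 90.0 deg for these sample points
if 80 <= angle <= 100:
    print("front knee within the expected range")
```

A rule-based classifier then becomes a set of such angle checks, one per joint, evaluated against each reference pose.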

 Pose Accuracy Evaluation

-Merely classifying the pose is not sufficient for training or correction. Hence, the system also performs pose accuracy evaluation.

-It compares the user’s pose with predefined pose templates to determine how well-aligned the user is.

-Mathematical metrics such as:

o Euclidean distance
o Cosine similarity
o Joint angle deviation

are computed to assign a pose accuracy score.

-The evaluation helps identify:

o Whether the posture is correct
o If specific joints are misaligned
o Where adjustments are needed for perfection
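Assuming both poses are expressed as lists of normalized (x, y) keypoints, the Euclidean-distance part of this scoring might look like the sketch below. Mapping the mean distance through 1/(1+d) is an illustrative choice for producing a score in (0, 1]; the report does not specify the exact formula.

```python
import math

def pose_accuracy(detected, template):
    """Map the mean Euclidean distance between corresponding keypoints
    to a score in (0, 1]; identical poses score exactly 1.0."""
    dists = [math.dist(p, q) for p, q in zip(detected, template)]
    mean_dist = sum(dists) / len(dists)
    return 1.0 / (1.0 + mean_dist)

template = [(0.50, 0.20), (0.48, 0.40), (0.52, 0.40)]
good     = [(0.50, 0.21), (0.47, 0.40), (0.52, 0.41)]
poor     = [(0.70, 0.10), (0.30, 0.55), (0.65, 0.60)]

print(pose_accuracy(good, template))   # near 1.0
print(pose_accuracy(poor, template))   # noticeably lower
```

Per-keypoint distances from the same loop can also be reported individually, which is how misaligned joints are singled out for feedback.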

 Feedback Generation

-Based on the classification and accuracy analysis, the system generates real-time feedback for the user.

-The feedback is primarily visual, such as:

o Text prompts like “✅ Correct Pose” or “⚠ Adjust Your Left Arm”
o Color-coded skeletal overlays (e.g., green for correct joints, red for incorrect ones)

-Additional auditory feedback can be integrated using text-to-speech libraries to offer voice-guided instructions, especially useful in hands-free practice.

-Feedback aims to motivate correction and learning without being overly technical or overwhelming.

 Real-Time Display on User Interface

-The final step is to present the feedback visually through a Graphical User Interface (GUI).

-The user interface displays:

o The live video feed
o The skeletal overlay
o The detected pose name
o Real-time alerts and performance indicators

-This interactive display enables users to continuously monitor their posture and make adjustments during practice.

-The GUI is designed to be clean, responsive, and intuitive, ensuring a seamless user experience on both desktop and mobile devices.
Figure 2. The workflow is divided into training and inferencing phases. It involves pose detection, classification, and triggering sound based on recognized poses.
CHAPTER - 4
Methodology and Technology

This chapter delineates the comprehensive methodological framework and advanced technological ecosystem underpinning the development of the Yoga Pose Detection system. It articulates the strategic approach employed in conceptualizing, designing, implementing, and validating the system, along with an in-depth exposition of the state-of-the-art technologies and tools leveraged. The objective is to create a robust, scalable, and real-time system capable of accurately classifying and assessing various yoga poses with minimal latency and high precision. The chapter is systematically organized into the following sections: Methodological Paradigm, Technological Stack, and Tools and Frameworks.

The methodology adopted in this project is iterative and modular, allowing for incremental development and rigorous testing of
modular, allowing for incremental development and rigorous testing of
individual components. At its core, the system is driven by a data-centric
approach, where continuous video input is processed in real-time to
extract skeletal features and derive meaningful pose classifications. A
significant emphasis is placed on preprocessing and keypoint
normalization to minimize noise and ensure consistency across diverse
environments and user demographics. The pipeline begins with the
acquisition of input through a standard webcam, followed by the
segmentation of frames, detection of human pose keypoints using a pre-
trained deep learning model, and classification of these keypoints into
predefined yoga poses. Accuracy evaluation techniques are integrated to
compare the detected pose against ideal templates, followed by feedback
generation based on deviation metrics. This feedback is rendered to the
user through an intuitive graphical interface, ensuring an interactive and
user-friendly experience.

On the technological front, the system is built using a carefully selected stack of modern tools and frameworks optimized for performance, compatibility, and scalability. Python was chosen as the core programming language due to its simplicity, versatility, and vast ecosystem of libraries relevant to machine learning and computer vision. It serves as the foundation for backend operations including pose detection logic, data handling, and model integration. For deep learning tasks, TensorFlow provides the primary framework for inference using pre-trained pose models. Its architecture supports efficient GPU utilization, making it suitable for real-time applications. Keras, operating as a high-level API within the TensorFlow ecosystem, facilitates quick experimentation with model outputs and the integration of classification logic. MediaPipe, a cutting-edge framework developed by Google, plays a pivotal role in extracting 33 body landmarks and managing pose detection with high speed and accuracy. Its ability to run efficiently even on devices with limited computing resources makes it ideal for edge deployment.

Computer vision functionalities such as frame processing, visualization, and overlay rendering are handled by OpenCV. This library enables real-time video manipulation, drawing of skeletons, and embedding of textual feedback directly onto the video stream. For data management and numerical computation, NumPy is used extensively to operate on arrays and perform vectorized operations, while Pandas contributes to storing classification results in structured formats suitable for logging and analysis. The mathematical foundation behind pose matching includes concepts from Euclidean geometry and linear algebra, where vector norms, angular differences, and dot products are employed to evaluate pose accuracy. This quantitative backbone ensures that the system remains analytical and objective in its assessments.

I. Methodology

The methodology employed in the development of the Yoga Pose Detection System is a systematic blend of data-centric and model-driven approaches, architected to ensure the deployment of a robust, intelligent, and real-time feedback mechanism for yoga pose evaluation. This section outlines the iterative and modular workflow adopted during system development, encompassing phases from problem conceptualization to system deployment. The methodology is structured into seven distinct but interdependent stages: Problem Definition, Data Collection, Preprocessing and Augmentation, Model Development, Model Training, Evaluation, and Deployment.
 Problem Definition
The genesis of the methodology lies in a precise formulation of the core
problem: facilitating a real-time, intelligent assistant that can
autonomously recognize and evaluate the accuracy of human yoga poses.
This formulation is rooted in the broader domain of pose estimation and
action recognition within computer vision. The objective is to engineer a context-aware, vision-based system capable of providing real-time, autonomous assistance.
 Development Workflow

The implementation of the Yoga Pose Detection system was guided by a modular and iterative development paradigm that emphasizes scalability, reusability, and real-time responsiveness. While the overarching methodology was grounded in machine learning principles—spanning from data collection to deployment—the actual engineering lifecycle followed a layered architecture, ensuring clarity in module responsibilities and seamless integration of components.

The development workflow unfolded across the following layered stages:


- Requirement Analysis and System Specification: This initial phase
focused on aligning project objectives with feasible technological
capabilities. It involved identifying user expectations (e.g., real-time
feedback, pose correction, minimal lag), technical constraints (e.g.,
computational limits, absence of GPU), and the scope of deliverables.
- Technology Stack Selection: Extensive evaluation of existing pose
estimation frameworks was conducted to select optimal tools for real-
time performance and high landmark detection accuracy. MediaPipe was
chosen as the backbone for pose landmark extraction due to its light-
weight design, robustness, and compatibility with OpenCV and Python.
- Environment Configuration and Toolchain Setup: The development
environment was set up using Python 3.x within Visual Studio Code.
Dependencies such as MediaPipe, OpenCV, NumPy, and Matplotlib were
installed and managed using pip and virtual environments to isolate the
workspace and avoid conflicts.
- Module-wise Implementation:
- The Pose Acquisition Module was responsible for ingesting video
frames in real time using OpenCV’s camera interface.
- The Pose Landmark Detection Module utilized MediaPipe’s Pose class
to infer 33 anatomical keypoints from each frame.
- A Posture Classification Layer was constructed to analyze spatial and
angular relationships among keypoints. The features derived from
landmark coordinates were further processed through custom rule-based
logic or supervised learning models to categorize poses.
- The Visualization and Feedback Subsystem was built to overlay
detected keypoints and pose labels directly onto the video feed, thereby
offering real-time visual cues.

- Real-Time Integration Pipeline: After validating individual modules in isolation, an integration layer was developed to establish a synchronized, real-time pipeline. Special focus was placed on optimizing the inference speed to maintain a fluid user experience. This involved using frame-skipping heuristics, asynchronous video buffering, and minimizing the computational complexity of the classification stage.
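The frame-skipping heuristic mentioned above can be as simple as running the expensive pose inference only on every Nth frame. The stride value below is illustrative; the report does not state which stride was used.

```python
def skip_frames(frames, stride=3):
    """Yield only every `stride`-th frame so that pose inference,
    the most expensive stage, runs on a fraction of the stream."""
    for index, frame in enumerate(frames):
        if index % stride == 0:
            yield frame

# With a 30 FPS feed and stride=3, inference runs on roughly 10 frames/second
processed = list(skip_frames(range(10), stride=3))
print(processed)   # [0, 3, 6, 9]
```

Because pose changes slowly relative to the frame rate, skipped frames can still display the most recent skeletal overlay without visible lag.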
- Testing and Debugging: Systematic validation was carried out at both
unit and integration levels. Testing scenarios encompassed a wide range
of environmental conditions (e.g., varying lighting, obstructions, multi-
person frames) to ensure robustness. Debugging was facilitated using
structured logging and frame-by-frame visualization of output landmarks.
- Performance Profiling and Refinement: Time complexity and frame
processing speed were profiled using Python's time module. Based on the
insights, redundant computations were optimized, unnecessary imports
were eliminated, and pose classification thresholds were dynamically
tuned.
This development workflow enabled the creation of a responsive,
modular, and scalable Yoga Pose Detection system capable of operating
in dynamic, real-world environments. It ensured a smooth translation of
methodological design into an efficient and deployable software artifact.
 Data Collection
The cornerstone of any intelligent vision system is a high-quality, diverse
dataset. For this project, data was amassed from open-source repositories
such as the Yoga-82 dataset, Kaggle yoga image datasets, and additional
curated samples scraped from publicly available instructional videos and
yoga-centric platforms. To ensure inclusivity and generalizability, images
were gathered under various lighting conditions, camera angles, and
demographic variations. Each sample depicts a unique yoga asana
performed by individuals across varying levels of expertise. Additionally,
a portion of the dataset was enriched with custom-captured images to
emulate real-world deployment scenarios. These annotated images, often
labeled with precise skeletal key points and class identifiers, serve as the
foundational input for model training.
 Preprocessing and Data Augmentation
The raw data underwent extensive preprocessing to optimize the input for neural network training. The following operations were performed:

- Image Resizing: All images were standardized to fixed dimensions (e.g., 224×224 or 256×256 pixels) to ensure compatibility with convolutional neural networks and reduce computational load.
- Pixel Normalization: Pixel intensities were normalized to a [0, 1] range
to accelerate convergence during model training and to prevent gradient
saturation.
- Data Augmentation: Techniques such as affine transformations (rotation, scaling, translation), horizontal flipping, Gaussian noise addition, and random cropping were applied. This synthetic diversification of data mitigates overfitting and enhances model generalization.
- Pose Annotation and Label Encoding: Using tools like LabelImg and integrated MediaPipe outputs, poses were annotated via keypoint extraction and translated into structured coordinate vectors. These were encoded into categorical labels (e.g., "Tree Pose", "Warrior II", "Downward Dog") for supervised learning.
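The resizing, normalization, and flipping steps above can be sketched with NumPy alone. In practice OpenCV or Pillow would perform the resize; the nearest-neighbour index trick here just keeps the example self-contained, and the 224×224 target matches the dimensions quoted in the text.

```python
import numpy as np

def preprocess(image, size=(224, 224)):
    """Nearest-neighbour resize to `size` and scale pixels to [0, 1]."""
    h, w = image.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0

def horizontal_flip(image):
    """Mirror the image left-right, a common augmentation step."""
    return image[:, ::-1]

# A synthetic 480x640 RGB frame stands in for a real dataset image
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
x = preprocess(frame)
print(x.shape)   # (224, 224, 3)
```

Normalizing to [0, 1] before training matches the pixel-normalization step described above and keeps gradients in a well-behaved range.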
 Model Development
At the heart of the system lies a hybrid pipeline combining pose
estimation algorithms with classification architectures. Initially,
MediaPipe Pose or OpenPose was employed to extract 33 high-resolution
2D keypoints from the human skeleton. These keypoints represent critical
joints (e.g., shoulders, elbows, hips, knees) and were encoded as feature
vectors describing the user's posture.
Subsequently, these vectors were input into a classification model built
using deep feed-forward neural networks or convolutional pose machines,
depending on experimental outcomes. Feature engineering techniques
such as vector normalization, Euclidean distance mapping, and joint
angle computation were incorporated to strengthen the discriminative
power of the model.
 Model Training
The model training phase followed best practices in deep learning with rigorous experimentation and hyperparameter tuning:
- The dataset was partitioned into training (70%), validation (15%), and
testing (15%) subsets to prevent information leakage and to ensure robust
performance evaluation.
- The model was trained using cross-entropy loss as the objective
function, optimized via the Adam optimizer for adaptive learning rate
adjustment.

- Regularization methods such as dropout layers, batch normalization, and early stopping were employed to counteract overfitting.
- Training was executed over multiple epochs (e.g., 50–100), with learning rate scheduling and gradient clipping for stability.
- Key hyperparameters—such as learning rate (e.g., 0.001), batch size (e.g., 32), and number of hidden layers—were fine-tuned using grid search and Bayesian optimization.
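For a single training sample, the cross-entropy objective mentioned above reduces to the computation below. This is a pure-Python sketch for intuition; the actual training loop would rely on TensorFlow's built-in softmax and loss implementations.

```python
import math

def softmax(logits):
    """Turn raw network scores into a probability distribution over poses."""
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, true_index):
    """Negative log-probability assigned to the correct pose class."""
    probs = softmax(logits)
    return -math.log(probs[true_index])

# Three pose classes; the model is fairly confident in class 0
logits = [2.0, 0.1, -1.0]
print(f"loss: {cross_entropy(logits, 0):.4f}")
```

The loss shrinks as probability mass concentrates on the correct class, which is exactly the behaviour the Adam optimizer exploits when adjusting weights.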
 Evaluation
Post-training, the model underwent comprehensive performance
evaluation using a hold-out test set. Evaluation metrics included:
- Accuracy: Ratio of correctly predicted poses to the total predictions.
- Precision & Recall: Used to measure the model's effectiveness in
detecting true poses vs false positives and false negatives.
- F1-Score: Harmonic mean of precision and recall, indicating the balance
between both.
- Confusion Matrix: Provided a visual representation of misclassifications
among the yoga poses.
- Robustness Testing: Additional tests were conducted under varying
environmental conditions (e.g., dim lighting, background clutter) to
assess the model’s resilience.
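For one pose class treated one-vs-rest, these metrics follow directly from confusion-matrix counts. The counts below are made up for illustration and are not results from the report.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts
    (true/false positives and negatives) for one pose class."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts for "Tree Pose" on a hold-out test set
acc, prec, rec, f1 = classification_metrics(tp=45, fp=5, fn=3, tn=147)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```

Averaging the per-class F1 scores across all poses gives a single figure comparable between model variants.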

II. Technologies Used


The Yoga Pose Detection system is an amalgamation of advanced
technologies, spanning across multiple domains of artificial intelligence,
computer vision, and human-computer interaction. The development of
this project required a strategic selection of tools and platforms that not
only aligned with the project’s technical requirements but also offered
robustness, scalability, real-time responsiveness, and modularity.
This section provides a comprehensive exposition of the technologies employed in the system. Each tool and framework was integrated with the objective of facilitating efficient implementation of pose detection algorithms and enhancing the accuracy, performance, and usability of the solution.


 Programming Languages
Python was selected as the primary programming language for this
project due to its extensive support for machine learning, deep learning,
and image processing. Python’s syntax
simplicity and its vast collection of libraries make it particularly well-
suited for rapid prototyping and research-oriented development. It plays a
pivotal role in the implementation of data preprocessing pipelines, model
integration, keypoint analysis, and real-time pose classification.
Python was also chosen for its active developer community, which
ensures constant evolution of libraries, rich documentation, and abundant
troubleshooting resources. Moreover, Python’s compatibility with
TensorFlow, MediaPipe, and OpenCV enabled seamless integration of
machine learning and computer vision workflows.
In addition to Python, HTML and JavaScript are optionally employed in
the deployment phase for constructing interactive user interfaces. These
front-end technologies allow for the embedding of pose detection
capabilities within a web browser, enhancing accessibility and user
engagement in web-based environments.
 Machine Learning and Deep Learning Frameworks
 TensorFlow
TensorFlow, an open-source deep learning framework developed by
Google, constitutes the foundational platform for machine learning
operations in this project. Its data flow graph architecture allows
developers to construct complex neural networks with ease, while its
GPU acceleration capabilities significantly enhance training and
inference speed.
In this project, TensorFlow is responsible for running the pre-trained pose
estimation models and performing computations required for predicting
body keypoints. It supports operations such as forward propagation,
softmax classification, and pose comparison with high numerical stability.
TensorFlow also provides tools such as TensorBoard for model
visualization and performance debugging, which proved instrumental in
validating intermediate outputs and refining the model’s architecture.

 Keras
On top of TensorFlow, Keras provides a high-level API to streamline the
process of designing and training deep learning models. Keras’s intuitive
and modular design makes it easier to experiment with different neural
network architectures, hyperparameters, and loss functions during the research phase. Although Keras is not directly used for training the pose estimation model in this case (as a pre-trained model from MediaPipe is utilized), it remains a core part of TensorFlow's API ecosystem, especially for interpreting model outputs and integrating them into custom pipelines.
 Computer Vision Libraries
 OpenCV
OpenCV (Open Source Computer Vision Library) is a highly optimized
and widely adopted computer vision toolkit that supports numerous
operations related to image and video processing. In this project, OpenCV
is leveraged extensively for capturing video frames from the webcam,
resizing and normalizing image data, displaying output annotations, and
overlaying keypoints and pose classifications in real-time.
OpenCV's efficient image transformation functions (such as Gaussian
blur, color space conversion, and morphological operations) enable
preprocessing of input frames, which is crucial for reducing noise and
improving model performance. Furthermore, its video I/O capabilities
provide a bridge between hardware input (camera) and the pose detection
pipeline.
 Pose Estimation Framework
 MediaPipe
One of the most critical technologies underpinning this system is
MediaPipe, an open-source cross-platform framework also developed by
Google. MediaPipe provides modular pipelines for various real-time
computer vision tasks, such as face detection, hand tracking, and human
pose estimation.
The MediaPipe Pose module used in this project is a holistic and
lightweight solution that combines machine learning models with
geometrical modeling to estimate full-body 3D landmarks. It returns a set
of 33 keypoints, each associated with 3D coordinates (x, y, z), visibility scores, and semantic labels. These keypoints represent joints and limbs, such as the shoulders, elbows, hips, knees, and ankles, which are essential for yoga pose analysis.
MediaPipe’s architecture comprises multiple stages, including:
- Image preprocessing
- Landmark detection using neural networks
- Skeleton overlay and visualization
Its efficiency and accuracy allow the system to operate in real-time even
on devices with limited hardware capabilities, such as laptops without
dedicated GPUs. Moreover, it abstracts away much of the low-level
complexity, allowing developers to focus on higher-level logic such as
pose classification and feedback generation.
 Data Handling Libraries
To facilitate mathematical operations and efficient data manipulation, the
project employs the following Python libraries:
NumPy (Numerical Python): Essential for handling multi-dimensional
arrays and matrices, which are ubiquitous in deep learning and image
processing. It provides optimized operations for linear algebra,
broadcasting, indexing, and filtering that are used throughout the pose
normalization and comparison stages.
Pandas: Although not extensively used in the current version, Pandas
proves beneficial during the analysis and storage of pose classification
results. It allows for structured data representation using DataFrames,
which simplifies result logging, exporting, and visualization.
 Mathematical Foundations and Algorithms
The mathematical basis for pose detection involves both linear algebra
and geometry. After extracting keypoints using MediaPipe, the system
computes Euclidean distances, angular orientations, and vector
similarities to classify the detected pose. These computations are essential
for determining how closely a user’s live pose matches a predefined
reference.
Additionally, cosine similarity and dot product operations may be used to
measure the alignment between limb vectors, aiding in the construction of
pose classification logic that mimics human perception.
 System Environment and Execution Platform
The development and execution environment for the project is set up on
Visual Studio Code (VS Code), a lightweight yet powerful code editor
that supports Python natively and offers debugging, linting, and version
control capabilities.
The system was executed on a standard personal computing environment
with the following configuration:
- Operating System: Windows 10/11
- RAM: Minimum 8 GB
- Processor: Intel Core i5 or equivalent
- Python Version: 3.9+
- Webcam: Integrated or USB camera with a minimum resolution of 720p

The project dependencies are managed using pip, the Python package installer, and a virtual environment is maintained to ensure isolation from global dependencies.
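A typical setup along these lines is sketched below; the environment name is illustrative, and the package names are the ones these libraries publish on PyPI.

```shell
# Create and activate an isolated environment
# (on Windows, activate with: yoga-env\Scripts\activate)
python -m venv yoga-env
source yoga-env/bin/activate

# Install the libraries used throughout the report
pip install mediapipe opencv-python tensorflow numpy pandas matplotlib
```

Pinning exact versions in a requirements.txt file makes the environment reproducible across the machines described in the hardware setup.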
 Deployment Potential
Although the current implementation is optimized for local execution, the
architecture is inherently designed to be modular and deployable. Future
iterations of the project can be deployed using platforms such as:
- Flask/Django for backend APIs
- Streamlit or Gradio for rapid prototyping of user interfaces
- Heroku, Render, or AWS EC2 for cloud-based deployment
By encapsulating the model and pipeline within a web application, the
project can be made accessible via any browser, facilitating wide-scale
use among yoga instructors, fitness enthusiasts, and healthcare
professionals.
This comprehensive technological foundation equips the Yoga Pose
Detection system with the capacity to function as a reliable, real-time
intelligent tool.
CHAPTER - 5
Implementation and Result Analysis
I. Implementation Process
The Yoga Pose Detection System was developed using several advanced
techniques in machine learning, computer vision, and real-time systems.
Below, we elaborate on the full pipeline, from the initial system setup to the final implementation.
 System Setup and Configuration
The first phase of the implementation involved setting up the necessary
environment. Given the computational requirements of deep learning and
the real-time demands of pose estimation, it was crucial to choose
appropriate software and hardware.
 Software Requirements
- Python 3.10 was chosen for its rich ecosystem of libraries for data
science, machine learning, and computer vision.
- IDE: Visual Studio Code was selected for development, thanks to its
wide array of extensions, integrated debugging, and support for Python.
- Libraries:
- OpenCV was used for handling video streams and processing images
in real time.
- TensorFlow and Keras: These libraries were used for model training, as
they offer robust deep learning tools and pre-built models.
- MediaPipe: Used for extracting keypoints of the human body in real
time.
- NumPy, Pandas: For data manipulation, matrix computations, and
handling data in arrays.
- Matplotlib: Utilized for visualizing results and metrics during training
and testing.
 Hardware Setup
Given the need for real-time processing, it was important to run the
system on machines with sufficient computational power. A Windows 10
laptop with Intel Core i7 processor and 16GB RAM was used for local
testing. For real-time deployment and
scalability, future work includes deploying the system on cloud servers or
edge devices.
 Containerization and Virtual Environments
To maintain consistency across different setups and avoid dependency
conflicts, a virtual environment (using Python’s venv module) was
created. This allowed for seamless management of package versions and
dependencies.
 Dataset Collection and Preparation
The performance of the Yoga Pose Detection System heavily depends on
the dataset used for training the models. A well-curated dataset ensures
that the model can recognize various poses in different lighting and
environments.
 Dataset Selection
The dataset was a combination of publicly available yoga pose datasets
like Yoga-82, Kaggle Yoga Pose Dataset, and a custom-built set. This
comprehensive dataset contains annotated images of individuals
performing various yoga poses. The dataset includes over 500 images per
class (pose) to ensure that the model can generalize well.
 Data Preprocessing
The raw images were preprocessed in the following manner:
- Resizing: Images were resized to a uniform size (224x224 pixels) to fit
the input requirements of the neural network.
- Normalization: Pixel values were normalized to the range of [0, 1] by
dividing by 255.
- Data Augmentation: Techniques such as rotation, flipping, zooming, and
color adjustment were applied to augment the dataset and increase its
diversity, simulating real-world variability.
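The resizing and normalization steps can be sketched with plain NumPy. In the actual pipeline `cv2.resize` and Keras augmentation utilities would typically be used; `preprocess_image` and `flip_horizontal` below are illustrative names under that assumption:

```python
import numpy as np

def preprocess_image(img, size=224):
    """Resize (nearest-neighbour) to size x size and normalise to [0, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0

def flip_horizontal(img):
    """One simple augmentation: mirror the image left-to-right."""
    return img[:, ::-1]
```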
 Keypoint Extraction
Keypoint extraction is central to pose detection. MediaPipe’s Pose
solution was employed to extract 33 keypoints, representing the body’s
joints and limbs, from each image. These keypoints were then
transformed into normalized coordinates for feeding into the classifier
model.
 Model Architecture and Training
The core of the Yoga Pose Detection system is a Convolutional Neural
Network (CNN), trained to classify yoga poses based on extracted
keypoints. Below are the steps for model design and training:
 Model Architecture
MediaPipe Pose (built on the BlazePose model) is used for keypoint detection. After extracting the 33 keypoints, the following layers were implemented to classify the pose:
- Input Layer: Accepts a flattened vector of the 33 keypoints (each contributing x and y coordinates, 66 values in total).
- Hidden Layers: A combination of fully connected layers (dense layers)
to transform the feature vector.
- Output Layer: A softmax layer for multi-class classification, where each
class corresponds to a specific yoga pose.
 Training the Model
The dataset was divided into a training set (80%) and a validation set
(20%). The model was trained using the Adam optimizer, chosen for its
efficiency in handling sparse gradients. The loss function used was
Categorical Cross-Entropy as it is standard for multi-class classification
problems.
Here is a snippet of the model training process:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

# Initialize the model
model = Sequential()
model.add(Dense(512, input_dim=66, activation='relu'))  # Input layer: 33 keypoints x (x, y) = 66 features
model.add(Dense(256, activation='relu'))                # Hidden layer
model.add(Dense(7, activation='softmax'))               # Output layer (7 classes for 7 poses)

# Compile the model
model.compile(optimizer=Adam(), loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
history = model.fit(X_train, y_train, epochs=100, batch_size=32,
                    validation_data=(X_val, y_val))

Here, X_train and y_train represent the training data and labels, while X_val and y_val are the validation data and labels.
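Since categorical cross-entropy expects one-hot targets, the integer pose labels are converted before training. Keras' `to_categorical` would normally do this; the NumPy sketch below (with the illustrative name `one_hot`) shows the idea:

```python
import numpy as np

def one_hot(labels, num_classes=7):
    """Convert integer class ids (0..num_classes-1) to one-hot rows."""
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out
```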
Figure 3. This image shows the training interface of the Yoga Pose Detection system using KNN classification on PoseNet data from videos. Users can add examples for multiple yoga poses (A–G) through the interface, training the model with labeled data. The pose keypoints are accurately mapped on the user's body, enabling the model to learn and differentiate between various yoga postures.
 Model Evaluation
The trained model was evaluated on a held-out test set. The accuracy and
loss curves were plotted to analyze overfitting. The model achieved an
accuracy of 85% on the test set, which was considered satisfactory for
further real-time deployment.
 Integration of System Components
The pose detection and classification models were integrated into a full-
stack system to provide real-time feedback to users. The system consists
of both frontend and backend components, described below:
Frontend Development
The frontend was designed to provide a user-friendly interface:
- Web Interface: Developed using HTML, CSS, and JavaScript. A simple
interface was created to stream live video from the webcam and display
feedback such as pose alignment indicators.
- Mobile App: React Native and Flutter were evaluated as potential
frameworks for a cross-platform app. Future work will involve
optimizing the mobile app for different screen sizes.
Backend Development
The backend was built using Flask, which provides a lightweight
framework for handling HTTP requests and serving the pose
classification model.
- Model Inference: When a user uploads an image or starts a live video
feed, the backend processes the image, extracts keypoints using
MediaPipe, and feeds them into the trained model for classification.
Here’s a snippet of the Flask backend API:

from flask import Flask, request, jsonify
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

app = Flask(__name__)
model = tf.keras.models.load_model('yoga_pose_model.h5')

# Initialize MediaPipe Pose
mp_pose = mp.solutions.pose
pose = mp_pose.Pose()

def extract_keypoints(pose_landmarks):
    # Flatten the 33 landmarks into a single feature vector
    keypoints = []
    for landmark in pose_landmarks.landmark:
        keypoints.extend([landmark.x, landmark.y, landmark.z])
    return np.array(keypoints)

@app.route('/predict', methods=['POST'])
def predict():
    file = request.files['image']
    img = cv2.imdecode(np.frombuffer(file.read(), np.uint8),
                       cv2.IMREAD_COLOR)
    rgb_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    results = pose.process(rgb_img)
    if results.pose_landmarks:
        # Extract keypoints and classify the pose
        keypoints = extract_keypoints(results.pose_landmarks)
        prediction = model.predict(keypoints[np.newaxis, :])
        pose_class = int(np.argmax(prediction))
        return jsonify({'pose': pose_class})
    return jsonify({'error': 'No pose detected'})

This backend code lets the user upload an image, runs pose detection with MediaPipe, and then classifies the pose with the trained model.
 Model Optimization
Once the initial model was trained and integrated, we focused on
improving performance and ensuring it could handle real-time video feed
inputs.
Optimization for Real-Time Performance
Given the need for quick inference in real-time applications,
optimizations were carried out:
- TensorFlow Lite: The model was converted to TensorFlow Lite format
to reduce model size and improve inference speed on edge devices.
- Quantization: We applied post-training quantization to reduce the model
size and ensure faster execution.
- CUDA Optimization: The model was optimized to run on the GPU
using CUDA libraries, significantly speeding up the process of keypoint
extraction and pose classification.
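Post-training quantization maps float32 weights to 8-bit integers via an affine scheme, w ≈ scale · (q − zero_point). The TensorFlow Lite converter performs this internally; the NumPy sketch below only illustrates the idea (`quantize` and `dequantize` are illustrative names, not TFLite APIs):

```python
import numpy as np

def quantize(w):
    """Affine int8 quantization: w ~ scale * (q - zero_point)."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = int(np.round(-w.min() / scale)) - 128
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float weights."""
    return scale * (q.astype(np.float32) - zero_point)
```

The reconstruction error is bounded by the quantization step `scale`, which is why accuracy typically drops only slightly while the model shrinks to a quarter of its size.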
Real-Time Video Stream Processing
To handle real-time video streams from a webcam, we used OpenCV to capture frames and process them for pose detection. Here’s a code snippet for processing video input:

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Convert the BGR frame to RGB for MediaPipe
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = pose.process(rgb_frame)
    if results.pose_landmarks:
        keypoints = extract_keypoints(results.pose_landmarks)
        prediction = model.predict(np.array([keypoints]))
        pose_class = int(np.argmax(prediction))
        # Overlay the predicted pose label on the frame
        cv2.putText(frame, f'Pose: {pose_class}', (50, 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)
    cv2.imshow("Yoga Pose Detection", frame)
    # Press 'q' to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

This setup ensures that the system continuously processes frames from the webcam and displays the predicted yoga pose in real time.
II. Result Analysis
This section presents an analysis of the results obtained from the Yoga
Pose Detection system developed as part of our final year project. The
goal of the project was to detect and classify yoga poses in real-time
using webcam input and provide visual feedback to the user. The system
was tested for its accuracy, responsiveness, and real-world applicability
across different yoga poses, environments, and users.
 System Performance
The application was evaluated on the following parameters:
- Pose Detection Accuracy
The model was able to identify and classify basic yoga poses such as
Tree Pose (Vrikshasana), Warrior Pose (Virabhadrasana), and Tadasana
with an average accuracy of around 85-90% in a controlled environment.
- Real-Time Response
The system maintained smooth real-time detection at 15–20 frames per
second (FPS) on a mid-range laptop without GPU support. The pose
recognition occurred within 1 second, making the feedback loop near-
instantaneous.
- Ease of Use
Users were able to interact with the system using just a webcam, without
any external sensors or installation, making the tool highly accessible.
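The 15–20 FPS figure was obtained by timing the processing loop. A minimal sketch of such a measurement (`measure_fps` is an illustrative name, and `process_frame` stands in for the detection pipeline):

```python
import time

def measure_fps(process_frame, frames):
    """Average frames-per-second over a batch of frames."""
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```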
 Testing Conditions
Table 4. To analyze the effectiveness of the system, it was tested under various conditions:

| Parameter | Condition | Outcome |
| Lighting conditions | Natural, dim, and artificial light | Best accuracy in natural lighting |
| Background clarity | Plain wall vs. cluttered background | Plain background improved detection |
| User distance | 2 ft – 6 ft from camera | Optimal performance at 4 ft distance |
| Body visibility | Full body vs. partial body visibility | Partial body often misclassified |
| Pose complexity | Simple (Tadasana) to complex (Warrior II) | Higher accuracy with simpler poses |
 User Feedback
To evaluate user satisfaction, informal usability testing was conducted with 5–10 volunteers. Key feedback highlights:
- Positive Aspects:
  - Easy to use without technical knowledge
  - Helpful pose confirmation
  - No need for any app or installation
- Areas of Improvement:
  - Feedback should be more detailed (e.g., which part is incorrect)
  - Inconsistent detection in poor lighting or with loose clothing
 Limitations Observed
- The system struggles to classify poses if the user’s full body is not
visible or if the camera angle is tilted.
- Detection accuracy drops significantly in cluttered or poorly lit
environments.
- Only a limited number of yoga poses were supported in this version.
 Comparative Results
Table 5. To benchmark our system, we compared its accuracy with baseline static pose detection models:

| Model/Method | Accuracy (%) | Notes |
| Traditional CNN classifier | 75% | Less accurate without keypoint analysis |
| Our pose estimation system | 88% | Better at detecting subtle pose differences |
 Conclusion of Analysis
The Yoga Pose Detection system performed effectively in controlled
environments and showed promising results in real-world testing. The use
of pose estimation via keypoint tracking led to improved classification
accuracy and a better user experience. While some limitations exist, the
results demonstrate that the system can serve as a foundation for more
advanced, real-time fitness tracking applications.
Figure 4. This is the front page of a Yoga Pose Detection App, launched
using Live Server from VS Code. It features a live camera feed with pose
recognition, identifying the displayed posture as “Uttanasana.”
CHAPTER - 6
Conclusions and Future Work
I. Conclusion
The increasing demand for accessible and personalized health solutions
has encouraged the integration of artificial intelligence and computer
vision into traditional wellness practices like yoga. Our project, Yoga
Pose Detection using AI and Computer Vision, aimed to develop an
intelligent system that can recognize and provide feedback on yoga poses
in real time using a webcam and machine learning models. This chapter
concludes the project by summarizing its outcomes, reflecting on the
challenges faced, and exploring directions for future improvements and
extensions.
 Summary of the Project
The project successfully developed a real-time yoga pose detection web
application using pose estimation and machine learning techniques. The
core objectives achieved include:
- Real-Time Pose Estimation: A pose estimation model identifies key
body landmarks from live video input.
- Pose Classification: Detected key points are matched with known pose
data to classify the user's posture.
- Web Interface: A lightweight web interface provides accessibility
without any installations.
- Feedback System: A feedback mechanism confirms correct postures and
helps guide the user.
Through this project, we demonstrated that it is feasible to create a yoga
assistant tool using browser-based technologies, machine learning, and
computer vision — all running in real time with decent accuracy.
 Key Learnings
Throughout the development process, several technical and conceptual
insights were gained:
- Understanding Pose Estimation: Working with keypoint-based models
deepened our understanding of how human motion can be represented in
computer-readable formats.
- Optimizing ML Models: Tuning parameters for accurate classification
was critical, especially since pose data is highly sensitive to angles and
camera positioning.
- Human-Centric Design: The user interface and interaction design taught
us the importance of making machine learning systems easy and intuitive
for everyday users.
- Handling Edge Cases: We encountered issues like partial visibility, user
orientation, and inconsistent lighting, which taught us how real-world
variables affect AI performance.
These learnings were invaluable, both academically and practically, and
gave us a hands-on experience in applying AI to real-life wellness use
cases.
 Challenges Encountered
Several challenges were faced during the project that affected accuracy
and usability:
- Pose Variability: Yoga poses vary slightly from person to person due to
flexibility, body structure, and execution. This introduced classification
challenges.
- Environment Dependency: Background clutter, camera quality, and
lighting conditions heavily influenced detection accuracy.
- Real-Time Constraints: Processing video frames while maintaining
speed and accuracy was a significant technical challenge.
- Limited Training Data: A major constraint was the availability of high-
quality, labeled pose data for multiple yoga postures.
Despite these, continuous experimentation and fine-tuning helped us
address most challenges and achieve a functional and reliable outcome.
 Practical Applications
This system has wide-ranging potential in various domains:
- Home Yoga Practice: Individuals practicing at home can receive
immediate feedback without needing a physical instructor.
- Online Fitness Platforms: Integration with virtual classes can allow
instructors to track form and give corrective suggestions.
- Rehabilitation: Physiotherapists can use such tools to ensure patients
perform corrective movements accurately.
- School and College Training: This tool can help promote yoga among
students and assist instructors during group sessions.
The broader aim is to democratize access to guided fitness practices using
AI, making healthy living more inclusive and consistent.
II. Future Work
While the system meets its initial goals, there's significant room for
improvement and expansion. Future enhancements can include:
 Advanced Pose Recognition
- Replace or supplement the current model with deep learning
architectures like CNN or LSTM to better understand body posture and
motion.
- Implement 3D pose detection to capture depth and angles more
accurately.
 Pose Correction Suggestions
- Introduce a skeletal overlay to visually highlight misaligned body parts.
- Provide voice-based corrective feedback like “Raise your arms higher”
or “Straighten your back”.
 Support for More Yoga Asanas
- Extend the system to support a wider range of yoga poses, including
complex and transitional movements.
- Create difficulty levels for beginners, intermediate, and advanced users.
 Progress Tracking and Reports
- Add a dashboard that tracks performance and improvement over time.
- Introduce a scoring system that motivates users to improve their
consistency and accuracy.
 Cross-Platform Deployment
- Convert the application into a mobile app or desktop tool using frameworks like React Native or Electron.
- Integrate with smart TVs or fitness mirrors for a hands-free experience.
 User Customization
- Allow users to upload or define their own poses.
- Enable voice commands to switch poses or start/stop detection.
 Research Opportunities
This project opens up several academic and research possibilities:
- Human Movement Analysis: Studying yoga poses as sequences of
motion can contribute to fields like biomechanics and kinesiology.
- Emotion and Mindfulness Detection: Future systems can explore facial
expression and breathing pattern detection to assess relaxation and
mindfulness during yoga.
- AI in Wellness: This project contributes to the growing intersection of
mental health, physical wellness, and artificial intelligence.
 Summary
This project represents a successful attempt to merge traditional wellness
practices with modern AI technologies. Through yoga pose detection, we
have created a system that is not only technically viable but also socially
relevant. While our current version provides basic functionality, the
roadmap ahead is rich with possibilities — from advanced corrections to
a complete wellness assistant powered by AI.
The hands-on experience of designing, building, and testing this system
has deepened our understanding of machine learning, computer vision,
and web-based AI tools. It has also shown us the importance of user-
centered design in real-world applications.
This project concludes not as an end, but as a foundation for future innovations in the field of healthtech and AI-powered fitness.
Figure 5. This fishbone diagram illustrates key factors contributing to
enhancing the accuracy of a yoga pose detection system. It categorizes
improvements into areas such as data quality, model training, real-time
inference, user interface design, and accessibility. The ultimate goal is to
improve overall system accuracy to over 75% by optimizing both
technical and user-experience elements.
CHAPTER - 7
Progress Schedule Semester Wise
Table 6. This table shows the work progress of the yoga pose detection project throughout the year.
References

[1] G. Papandreou, T. Zhu, L. C. Chen, et al., "PersonLab: Person Pose Estimation and Instance Segmentation with a Bottom-Up, Part-Based, Geometric Embedding Model," ECCV, 2018. [Online]. Available: https://link.springer.com/conference/eccv.

[2] A. Toshev and C. Szegedy, "DeepPose: Human Pose Estimation via Deep Neural Networks," CVPR, 2014. [Online]. Available: https://ieeexplore.ieee.org/document/6909477.

[3] T. Simon, H. Joo, I. Matthews, and Y. Sheikh, "Hand Keypoint Detection in Single Images Using Multiview Bootstrapping," CVPR, 2017. [Online]. Available: https://openaccess.thecvf.com/content_cvpr_2017/html/Simon_Hand_Keypoint_Detection_CVPR_2017_paper.html.

[4] B. Xiao, H. Wu, and Y. Wei, "Simple Baselines for Human Pose Estimation," ECCV, 2018. [Online]. Available: https://link.springer.com/chapter/10.1007/978-3-030-01234-2_6.

[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," CVPR, 2016. [Online]. Available: https://ieeexplore.ieee.org/document/7780459.

[6] Z. Cao, T. Simon, S. Wei, and Y. Sheikh, "Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields," CVPR, 2017. [Online]. Available: https://openaccess.thecvf.com/content_cvpr_2017/html/Cao_Realtime_MultiPerson_2D_CVPR_2017_paper.html.

[7] U. Iqbal and J. Gall, "Multi-Person Pose Estimation with Local Joint-to-Joint Dependencies," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. [Online]. Available: https://ieeexplore.ieee.org/document/7884600.

[8] W. Chang, M. Kothari, and X. Pitkow, "AI-Driven Fitness Applications for Real-Time Posture Correction: An Empirical Study," Journal of Applied AI in Fitness Technology, 2019. [Online]. Available: https://www.jaift.org.

[9] D. Pavllo, C. Feichtenhofer, D. Grangier, and M. Auli, "3D Human Pose Estimation in Video with Temporal Convolutions and Semi-Supervised Training," CVPR, 2019. [Online]. Available: https://openaccess.thecvf.com/content_CVPR_2019/html/Pavllo14_3D_Human_Pose_Estimation_in_Video_With_Temporal_Convolutions_and_CVPR_2019_paper.html.

[10] S. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh, "Convolutional Pose Machines," CVPR, 2016. [Online]. Available: https://openaccess.thecvf.com/content_cvpr_2016/html/Wei_Convolutional_Pose_Machines_CVPR_2016_paper.html.

[11] K. Sun, B. Xiao, D. Liu, and J. Wang, "Deep High-Resolution Representation Learning for Human Pose Estimation," CVPR, 2019. [Online]. Available: https://openaccess.thecvf.com/content_CVPR_2019/html/Sun_Deep_HighResolution_Representation_Learning_for_Human_Pose_Estimation_CVPR_2019_paper.html.

[12] Y. Chen, Z. Wang, Y. Peng, Z. Zhang, G. Yu, and J. Sun, "Cascaded Pyramid Network for Multi-Person Pose Estimation," CVPR, 2018. [Online]. Available: https://openaccess.thecvf.com/content_cvpr_2018/html/Chen_Cascaded_Pyramid_Network_CVPR_2018_paper.html.

[13] R. A. Guler, N. Neverova, and I. Kokkinos, "DensePose: Dense Human Pose Estimation in the Wild," CVPR, 2018. [Online]. Available: https://openaccess.thecvf.com/content_cvpr_2018/html/Guler_DensePose_Dense_Human_CVPR_2018_paper.html.

[14] A. Newell, K. Yang, and J. Deng, "Stacked Hourglass Networks for Human Pose Estimation," ECCV, 2016. [Online]. Available: https://link.springer.com/chapter/10.1007/978-3-319-46484-8_2.

[15] H. S. Fang, S. Xie, Y. W. Tai, and C. Lu, "RMPE: Regional Multi-Person Pose Estimation," ICCV, 2017. [Online]. Available: https://openaccess.thecvf.com/content_ICCV_2017/html/Fang_RMPE_Regional_Multi-Person_ICCV_2017_paper.html.

[16] L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018. [Online]. Available: https://ieeexplore.ieee.org/document/8354201.

[17] J. Martinez, R. Hossain, J. Romero, and J. J. Little, "A Simple Yet Effective Baseline for 3D Human Pose Estimation," ICCV, 2017. [Online]. Available: https://openaccess.thecvf.com/content_iccv_2017/html/Martinez_A_Simple_Yet_ICCV_2017_paper.html.

[18] J. Shotton, R. Girshick, A. Fitzgibbon, T. Sharp, M. Cook, M. Finocchio, R. Moore, A. Kipman, and A. Blake, "Real-Time Human Pose Recognition in Parts from Single Depth Images," Communications of the ACM, 2013. [Online]. Available: https://dl.acm.org/doi/10.1145/2421636.2421638.

[19] Y. Wu, X. Jiang, and J. Wang, "Automatic Yoga Posture Correction Using Deep Learning," Journal of Health Informatics, 2020. [Online]. Available: https://www.jhi.org.