Final Report
2025
Submitted by
Guided by
SOMIR SAIKIA
Head of R&D Team,
Vantage Circle
Guwahati, Assam
&
Internal Guide
Dr. TRIBIKRAM PRADHAN
Assistant Professor,
Department of Computer Science & Engineering
Tezpur University,
Assam
Date:
Place: Tezpur, Assam
Dr. Sarat Saharia
Professor & Head
Department of CSE
Tezpur University
Department of Computer Science & Engineering, Tezpur University
This is to certify that the project entitled “VisionFitTrack: AI Powered Real Time
Workout Detection and Reps Count” submitted by Arpan Neog (CSM23044) is carried
out by him under my supervision and guidance for partial fulfillment of the requirements and the
regulations for the award of the degree of Master of Computer Application (MCA) during the session
2023-2025 at Tezpur University. To the best of my knowledge, the matter embodied in the project
report has not been submitted to any other university/institute for the award of any Degree or
Diploma.
Date:
Place: Tezpur, Assam
Dr. Tribikram Pradhan
Assistant Professor
Department of CSE
Tezpur University
Date:
Place: Tezpur, Assam
Internal Examiner
This is to certify that the project entitled “VisionFitTrack: AI Powered Real Time
Workout Detection and Reps Count”, submitted by Arpan Neog (CSM23044) to
Tezpur University in partial fulfillment of the requirements for the major project of the Master of
Computer Application (MCA) programme, is a bonafide record of the project work carried out by him
during the spring semester.
Date:
Place: Tezpur, Assam
External Examiner
Declaration
I affirm that the project work entitled “VisionFitTrack: AI Powered Real Time
Workout Detection and Reps Count” submitted to the Department of Computer
Science & Engineering at Tezpur University, was authored solely by me and has not been
presented to any other institution for the purpose of obtaining any other degree.
About the Company
Key Products:
Vantage Rewards – Peer & manager recognition platform
Acknowledgement
At the outset, I express my sincere gratitude to Mr. Partha Neog, CEO of Vantage Circle,
for providing me with the opportunity to intern at such an innovative and forward-thinking
organization. His visionary leadership and the company’s commitment to cutting-edge
research in AI and wellness technology created an enriching environment for learning and
growth.
I am especially thankful to Mr. Somir Saikia, Head of Research at Vantage Circle, for his
invaluable mentorship, technical guidance, and consistent encouragement throughout the
internship. His insightful feedback and strategic direction played a key role in shaping the
objectives and execution of my project. I would also like to thank the entire research and
development team at Vantage Circle for their support, collaboration, and willingness to share
knowledge. Working alongside such experienced professionals has been a rewarding
experience that has significantly contributed to my academic and professional development.
Abstract
Contents
Certificate
Declaration
About the Company
Acknowledgement
Abstract
1. Introduction
2. Components
3. Feasibility Analysis
4. System Analysis
5. Software Requirement Specifications
6. System Design
7. System Implementation
8. System Testing
9. Conclusion
10. Bibliography
Chapter 1
Introduction
1.1 Background
Fitness tracking and exercise monitoring have become increasingly important as more
individuals seek to improve their health and performance through data-driven insights.
Traditional fitness tracking solutions often rely on wearable devices or manual input, which can
be inconvenient, expensive, or inaccessible to many users. Wearables may require regular
charging, can be uncomfortable during certain exercises, and often come with privacy concerns
due to data being sent to external servers. Manual tracking, on the other hand, is prone to human
error and can disrupt the flow of a workout.
1.2 Task
The primary task of VisionFitTrack is to detect, classify, and count exercise repetitions in
real time using video input from a webcam. The system is designed to recognize multiple
exercise types—such as push-ups, pull-ups, bicep curls, shoulder presses, squats, and deadlifts—
by analyzing the user’s body pose and movement patterns. The application must provide accurate
feedback, track workout statistics, and visualize progress, all while ensuring user privacy and
ease of use.
The input to the system is a live video stream, which is processed frame by frame to extract key
body points using pose estimation. These keypoints are then analyzed to determine the type of
exercise being performed, the current phase of the movement, and whether a valid repetition has
occurred. The output includes:
- Detected exercise type (e.g., push-up, squat)
- Repetition count for each exercise
- Set count and workout duration
- Real-time feedback on form and progress
- Visual progress charts and workout summaries
The system is also designed to support user authentication, allowing individuals to save their
workout history and track long-term progress. The user interface provides clear instructions, real-
time statistics, and interactive charts to enhance user engagement and motivation.
1.3 Challenges
• Accurate pose detection must work reliably across diverse users, body types, camera angles, lighting, and
environmental conditions.
• Real-time processing requires efficient algorithms to ensure smooth user experience
without lag. The application must balance computational complexity with responsiveness,
especially on devices with limited processing power.
• Distinguishing between similar exercises (e.g., push-ups vs. planks, squats vs. deadlifts)
demands precise analysis of joint angles, movement cycles, and temporal patterns.
Misclassification can lead to inaccurate feedback and user frustration.
• Ensuring privacy by processing all video data locally, without transmitting sensitive
information to external servers. This requires careful optimization of browser-based
machine learning models and pose estimation pipelines.
• Providing a user interface that is intuitive for users of all technical backgrounds, including
those new to fitness technology. The system must offer clear instructions, easy navigation,
and accessible visualizations.
• Handling edge cases such as partial occlusion, fast movements, or users stepping out of the
camera frame, which can disrupt pose detection and rep counting.
• Supporting extensibility for new exercises, custom workout routines, and integration with
other health platforms.
VisionFitTrack employs a hybrid approach combining rule-based logic and machine
learning for exercise detection and repetition counting:
• MediaPipe Pose is used to extract 33 key body points from the webcam video stream in
real time, providing a robust foundation for pose analysis. The pose estimation model is
optimized for speed and accuracy, enabling smooth tracking even on consumer-grade
hardware.
• A TensorFlow.js model, trained on labeled exercise data, classifies the user’s current
exercise based on normalized keypoint coordinates. The model outputs probabilities for
each supported exercise class, allowing the system to handle ambiguous or transitional
movements.
• Rule-based algorithms analyze joint angles (e.g., elbow, knee, hip) and movement
direction to refine exercise classification and accurately count repetitions and sets. State
machines are used to track the phases of each exercise, applying hysteresis and smoothing to
reduce false positives.
• All processing is performed in the browser, ensuring privacy and responsiveness. The
system does not require any server-side video processing, making it suitable for privacy-
conscious users and environments with limited internet connectivity.
• The modular architecture allows for easy extension to new exercises and integration with
additional features, such as progress visualization, user authentication, and personalized
feedback. The codebase is organized into clear modules for pose detection, exercise logic,
progress tracking, and user interface.
• Continuous user feedback was incorporated during development, allowing the system to be
refined for usability, accuracy, and robustness. The application is designed to be accessible
to users with varying levels of fitness and technical expertise.
This approach combines the strengths of modern machine learning with interpretable, rule-
based logic, resulting in a system that is both accurate and adaptable to real-world
conditions. VisionFitTrack sets a foundation for future research and development in AI-
powered fitness tracking, with potential applications in personal training, rehabilitation, and
health monitoring.
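The hybrid pipeline above leans on a phase-tracking state machine for repetition counting. A minimal sketch of that idea follows; the angle thresholds, dead zone, and minimum-frame hysteresis below are illustrative values, not the exact parameters used in VisionFitTrack.

```javascript
// Sketch of a rep-counting state machine with hysteresis. One joint angle
// (e.g. the elbow angle for push-ups) is fed in per video frame; a dead zone
// between the two thresholds plus a minimum-frame requirement suppresses
// jitter and single-frame misdetections.
class RepCounter {
  constructor({ downThreshold = 90, upThreshold = 160, minPhaseFrames = 3 } = {}) {
    this.downThreshold = downThreshold;   // below this angle => "down" phase
    this.upThreshold = upThreshold;       // above this angle => "up" phase
    this.minPhaseFrames = minPhaseFrames; // frames needed to confirm a phase change
    this.phase = 'up';
    this.candidateFrames = 0;
    this.reps = 0;
  }

  // Feed one (smoothed) joint angle in degrees per frame; returns the rep count.
  update(angle) {
    const target = angle < this.downThreshold ? 'down'
                 : angle > this.upThreshold ? 'up'
                 : this.phase; // inside the dead zone: keep the current phase
    if (target !== this.phase) {
      this.candidateFrames += 1;
      if (this.candidateFrames >= this.minPhaseFrames) {
        // A confirmed down -> up transition counts as one repetition.
        if (this.phase === 'down') this.reps += 1;
        this.phase = target;
        this.candidateFrames = 0;
      }
    } else {
      this.candidateFrames = 0;
    }
    return this.reps;
  }
}
```

Feeding the counter angles that hold near 170°, dip to 80°, and return to 170° for a few frames each yields one counted repetition, while a single-frame spike is ignored.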
Chapter 2
2.1 Components
A machine learning model built with TensorFlow.js is used to classify the user's current
exercise based on the normalized coordinates of the detected keypoints. The model is trained on
labeled exercise data and outputs probabilities for each supported exercise class (e.g., push-up,
squat, deadlift). Running the model in the browser ensures privacy and enables real-time
inference without the need for server-side processing.
Rule-based logic analyzes joint angles and movement patterns. State machines track the phases of each exercise, applying logic
to determine when a valid repetition or set has occurred. Hysteresis and smoothing techniques
are used to reduce false positives and ensure stable, accurate rep counting.
Workout data, including repetitions, sets, and duration, is stored and visualized for the
user. Chart.js is used to render interactive progress charts, allowing users to monitor their
performance and improvements over time. This visual feedback motivates users and helps them
set and achieve fitness goals.
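As a sketch of how stored workout records might be shaped for Chart.js, the helper below aggregates per-day rep totals into the `{labels, datasets}` structure a Chart.js line chart consumes. The record fields (`date`, `exercise`, `reps`) are assumed for illustration, not necessarily the actual VisionFitTrack schema.

```javascript
// Aggregate saved workout records into per-day totals suitable for a
// Chart.js line chart. Records are assumed to look like
// { date: 'YYYY-MM-DD', exercise: 'squat', reps: 10 }.
function repsPerDay(records) {
  const totals = new Map();
  for (const { date, reps } of records) {
    totals.set(date, (totals.get(date) || 0) + reps);
  }
  const labels = [...totals.keys()].sort(); // chronological x-axis labels
  return {
    labels,
    datasets: [{ label: 'Total reps', data: labels.map(d => totals.get(d)) }],
  };
}
```

The result could then be passed as the chart data, e.g. `new Chart(ctx, { type: 'line', data: repsPerDay(records) })`.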
The frontend is built with HTML5, CSS3, JavaScript, and Bootstrap 5, providing a
responsive and user-friendly interface. The UI displays real-time feedback, workout statistics,
and progress charts. Clear instructions and intuitive controls make the system accessible to users
of all fitness and technical backgrounds.
All video and pose data are processed locally in the user's browser. No video data is sent
to external servers, ensuring user privacy and data security. This approach also reduces latency
and makes the system usable even with limited or no internet connectivity.
The backend, built with Flask and SQLite, manages user authentication and stores
workout history. Users can register, log in, and view their personalized progress. The database
schema supports multiple users, each with their own secure workout records.
2.1.8 Extensibility and Modularity
Chapter 3
Feasibility Analysis
3.1 Technical Feasibility
Most laptops, desktops, and even some tablets and smartphones meet these requirements, making
VisionFitTrack technically feasible for a broad user base. The use of browser-based technologies
ensures cross-platform compatibility and ease of deployment without the need for software
installation.
3.2 Economical Feasibility
The economic model supports wide adoption in individual, educational, and institutional settings,
making the solution accessible and sustainable.
If new exercises or movement types are to be supported, additional labeled data can be collected
and incorporated into the system, ensuring ongoing relevance and accuracy.
• Integration with virtual coaching or remote fitness programs
• Educational use in teaching exercise form and technique
• Research and development in human movement analysis and sports science
The system’s flexibility and extensibility make it suitable for a wide range of fitness and health
applications, supporting both individual users and organizations.
Chapter 4
System Analysis
The system analysis for VisionFitTrack examines the core components, data flow, and
operational logic that enable real-time, privacy-focused fitness tracking using computer vision
and AI. The analysis covers the following aspects:
4.1 Data Collection and Annotation
VisionFitTrack’s accuracy and robustness depend on the quality and diversity of its
training data. Data collection involves recording videos of users performing supported exercises
(push-ups, pull-ups, bicep curls, shoulder presses, squats, deadlifts) in various environments,
lighting conditions, and camera angles. Each frame is annotated with the exercise type, phase
(e.g., up/down for push-ups), and repetition boundaries. This annotated dataset is used to train
and validate the exercise classification model, ensuring it generalizes well to real-world
scenarios. Data augmentation techniques, such as mirroring, scaling, and rotation, are applied to
increase dataset diversity and model robustness.
4.2 Pose Estimation Module
The pose estimation module is the foundation of VisionFitTrack. It uses MediaPipe Pose,
a state-of-the-art library for real-time human pose detection, to extract 33 key points (joints)
from each video frame. These keypoints include the nose, eyes, ears, shoulders, elbows, wrists,
hips, knees, and ankles. The module is optimized for speed and accuracy, enabling smooth
tracking even on consumer-grade hardware. The extracted keypoints are normalized relative to
the video frame size and serve as input for downstream exercise classification and repetition
counting. The pose estimation runs entirely in the browser, ensuring privacy and low latency.
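The normalization step described above can be sketched as follows; the keypoint object shape (`x`, `y`, `score`) is an assumed example rather than the exact structure used in the implementation.

```javascript
// Normalize pixel-space keypoints to the [0, 1] range relative to the video
// frame, so downstream classification is independent of camera resolution.
function normalizeKeypoints(keypoints, frameWidth, frameHeight) {
  return keypoints.map(({ x, y, score }) => ({
    x: x / frameWidth,   // horizontal position as a fraction of frame width
    y: y / frameHeight,  // vertical position as a fraction of frame height
    score,               // detection confidence is passed through unchanged
  }));
}
```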
4.3 Exercise Classification Model
The exercise classification model is implemented using TensorFlow.js and runs directly
in the user’s browser. It is a lightweight neural network trained on the annotated pose data. The
model takes a flattened array of normalized key point coordinates as input and outputs a
probability distribution over the supported exercise classes. The model is designed to be
efficient, allowing real-time inference without significant computational overhead. It is
periodically retrained or fine-tuned as new data becomes available, enabling the system to adapt
to new exercises or user populations.
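One way to make the per-frame class probabilities robust to the ambiguous or transitional movements mentioned above is a sliding-window majority vote over recent top predictions. The sketch below assumes illustrative class names and window size; it is not the exact smoothing logic used in VisionFitTrack.

```javascript
// Smooth per-frame classifier outputs with a sliding-window majority vote,
// so brief misclassifications do not flip the detected exercise.
class PredictionSmoother {
  constructor(windowSize = 15) {
    this.windowSize = windowSize;
    this.window = []; // most recent top-1 class names
  }

  // probs: array of class probabilities for one frame (same order as classNames).
  // Returns the majority class over the current window.
  update(probs, classNames) {
    const top = probs.indexOf(Math.max(...probs));
    this.window.push(classNames[top]);
    if (this.window.length > this.windowSize) this.window.shift();
    const counts = {};
    for (const c of this.window) counts[c] = (counts[c] || 0) + 1;
    return Object.keys(counts).reduce((a, b) => (counts[a] >= counts[b] ? a : b));
  }
}
```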
4.4 Rule-Based Repetition Counting
While the ML model provides exercise classification, VisionFitTrack also employs rule-
based logic to analyze joint angles and movement direction for more precise repetition and set
counting. For each exercise, specific joint angles (e.g., elbow for push-ups, knee for squats) are
monitored to detect the start and end of a repetition. State machines track the user’s movement
through different phases (e.g., down, up, rest), applying hysteresis and smoothing to avoid false
positives from jitter or noise. This hybrid approach ensures both flexibility and interpretability,
allowing for easy debugging and extension to new exercises.
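The joint-angle monitoring described here reduces to computing the angle at a middle keypoint formed by three landmarks (e.g. shoulder–elbow–wrist for the elbow angle). A self-contained sketch:

```javascript
// Compute the angle (in degrees) at joint B formed by points A-B-C,
// e.g. the elbow angle from shoulder, elbow, and wrist keypoints.
function jointAngle(a, b, c) {
  const v1 = { x: a.x - b.x, y: a.y - b.y }; // vector B -> A
  const v2 = { x: c.x - b.x, y: c.y - b.y }; // vector B -> C
  const dot = v1.x * v2.x + v1.y * v2.y;
  const mag = Math.hypot(v1.x, v1.y) * Math.hypot(v2.x, v2.y);
  const cos = Math.min(1, Math.max(-1, dot / mag)); // clamp for float safety
  return (Math.acos(cos) * 180) / Math.PI;
}
```

A fully extended arm (three collinear points) gives roughly 180°, while a right-angle bend gives roughly 90°; these angles are what the state machine thresholds against.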
4.6 Web Application Frontend
The frontend is built with HTML5, CSS3, JavaScript, and Bootstrap 5, ensuring a
modern, responsive, and accessible user interface. The UI displays real-time feedback, exercise
stats, and progress charts. Key features include:
- Live video feed with pose skeleton overlay
- Real-time display of detected exercise, rep/set count, and timer
- Interactive charts and workout summaries
- User authentication and profile management
- Responsive design for desktops, laptops, and tablets
The frontend is optimized for usability, with clear instructions and intuitive controls for users of
all backgrounds.
4.7 Backend and Data Management
User authentication and data management are handled by a Flask backend and SQLite
database. Users can register, log in, and securely store their workout history. The database
schema supports multiple users, each with their own records of exercises, reps, sets, and session
notes. Authentication is managed using Flask-Login, ensuring secure sessions and data privacy.
The backend exposes RESTful APIs for saving and retrieving workout data, enabling seamless
integration with the front end. Data management features include:
- Secure registration and login
- Password hashing and session management
- Workout history retrieval and visualization
- Support for future integration with external health platforms
Chapter 5
Software Requirement Specifications
5.1.2 Product Functions
5.1.5 Assumptions and Dependencies
5.2.1 Exercise Detection and Classification
The system must capture live video input, extract pose landmarks, and
classify the type of exercise in progress using an ML model.
5.2.2 Repetition and Set Counting
The application must count each repetition accurately and increment set
count based on predefined rules (e.g., full-body extension and return).
5.2.4 User Authentication and History
Users must be able to register, log in, and access personalized workout
histories including dates, exercise types, and performance metrics.
The interface must scale smoothly across different devices and screen
resolutions, ensuring consistent functionality.
5.3.3 Performance
5.3.4 Security
All user data must be encrypted during transmission and securely stored
using hashing for passwords and sanitized inputs to prevent injection attacks.
5.4 External Interface Requirements
The app must request webcam permission and utilize its feed to perform
pose estimation in real time.
5.5 Performance Requirements
Chapter 6
System Design
Introduction
1. Sequence Diagram
Summary:
Depicts the time-ordered sequence of interactions between system components and the user:
- Actors: User
- Objects: Frontend, Backend, Classifier, Database
Flow:
Starts with video upload, proceeds to feature extraction, classification, repetition counting, and
ends with result display.
2. Class Diagram
Summary:
Defines the object-oriented design of VisionFitTrack with core classes:
User: Manages profile (userID, name, email).
ExerciseSession: Tracks workouts (sessionID, start/end time).
ExerciseClassifier: Classifies exercises from pose data.
RepetitionCounter: Counts exercise repetitions.
Relationships:
A User has multiple ExerciseSessions.
Each session uses ExerciseClassifier and RepetitionCounter.
Purpose: Ensures modular, maintainable design with AI components integrated for classification
and rep tracking.
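Based on the diagram, a minimal sketch of these relationships follows (ExerciseClassifier is omitted for brevity; fields follow the diagram, while method bodies are illustrative stubs, not the actual implementation):

```javascript
// A User owns many ExerciseSessions; each session delegates counting to a
// RepetitionCounter, mirroring the class diagram above.
class RepetitionCounter {
  constructor() { this.reps = 0; }
  countRep() { this.reps += 1; return this.reps; }
}

class ExerciseSession {
  constructor(sessionID) {
    this.sessionID = sessionID;
    this.startTime = null;
    this.endTime = null;
    this.counter = new RepetitionCounter();
  }
  start() { this.startTime = Date.now(); }
  end() { this.endTime = Date.now(); }
}

class User {
  constructor(userID, name, email) {
    this.userID = userID;
    this.name = name;
    this.email = email;
    this.sessions = [];
  }
  newSession() {
    const session = new ExerciseSession(this.sessions.length + 1);
    this.sessions.push(session);
    return session;
  }
}
```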
3. Activity Diagram
Summary:
Describes the overall workflow of the VisionFitTrack system:
Starts with user authentication
Proceeds to video upload and processing
Continues with AI-driven classification and repetition counting
Ends with feedback/report generation
Purpose:
Provides a step-by-step operational blueprint of the system, integrating functional and technical
components.
4. Level 0 DFD
Summary:
This top-level DFD provides a high-level overview of the system:
Processes:
- VisionFitTrack System
External Entities:
- User
Data Stores/Flows:
- Flow of user inputs (video uploads, profile info) into the system
- Outputs such as detected exercises and repetition counts back to the user
Purpose:
Gives a simplified bird’s-eye view of the system’s functionality and its interaction with the user.
5. Level 1 DFD
Summary:
Provides more detail than Level 0 by breaking down the main process into sub-processes:
Processes:
- 1.1: Register/Login
- 1.2: Upload Video
- 1.3: Detect Exercise
- 1.4: Count Repetitions
- 1.5: Display Results
Data Stores:
- User Database
- Exercise Database
Purpose:
Clarifies internal system workings and how different submodules interact with each other and the
database.
6. Level 2 DFD
Summary:
Drills down further, likely focusing on one of the Level 1 processes:
Processes:
o Pose landmark extraction using MediaPipe
o Preprocessing and feeding data into the CNN model
o Outputting classification results
Purpose:
Captures detailed data flow for the AI-based exercise detection component, highlighting how
data is transformed and used internally.
7. ER Diagram
Summary:
The Entity-Relationship (ER) Diagram models the backend database structure:
Entities: User, Exercise, Workout session
Relationships:
o A User can have many Sessions.
o A Session can include multiple Exercises.
o Each Exercise has several Repetitions.
Purpose:
This diagram defines how data is structured and interlinked in the database, ensuring normalized
and efficient data storage for the fitness tracking platform.
Figure: ER diagram
Chapter 7
System Implementation
The implementation of VisionFitTrack was carried out in a series of well-defined phases, each
building on the previous to ensure a robust, scalable, and user-friendly fitness tracking
application. Below is an expanded, step-by-step breakdown of the implementation process:
1. Data Collection and Annotation:
- Videos of users performing each supported exercise (push-ups, pull-ups, bicep curls, shoulder
presses, squats, deadlifts) were collected in diverse environments and lighting conditions.
- Each video frame was annotated with exercise type, movement phase, and repetition
boundaries, creating a high-quality dataset for model training.
- Data augmentation techniques (mirroring, scaling, rotation) were applied to increase dataset
diversity and model robustness.
- A self-annotated dataset of over 200,000 exercise images was curated and labelled across 6
exercise classes. Pose key points (33 landmarks with x, y, z, and visibility values) were
extracted using MediaPipe Pose, resulting in 132-dimensional feature vectors per image.
- These pose vectors were normalized using training-set statistics and processed in chunks to
optimize memory usage.
2. Model Training:
- An initial custom CNN model was trained using these features, followed by a fine-tuned
MobileNetV2 model via transfer learning. The base model's convolutional layers were frozen,
and custom classification layers were added with dropout regularization.
- The model was trained with categorical cross-entropy loss and the Adam optimizer, and monitored
using metrics such as accuracy, precision, and recall. Training included EarlyStopping,
ModelCheckpoint, and ReduceLROnPlateau callbacks to prevent overfitting and adapt
learning rates dynamically.
- The best-performing model was converted to TensorFlow.js format, enabling real-time, in-
browser inference for pose-based exercise recognition in fitness applications.
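The feature construction described above (33 landmarks × 4 values = 132 dimensions) can be sketched as follows; the `mean`/`std` arrays are assumed to come from the training-set statistics mentioned in the text.

```javascript
// Flatten 33 MediaPipe pose landmarks (x, y, z, visibility) into the
// 132-dimensional feature vector described above, then z-score normalize
// each dimension with training-set statistics.
function toFeatureVector(landmarks, mean, std) {
  const flat = [];
  for (const lm of landmarks) flat.push(lm.x, lm.y, lm.z, lm.visibility);
  // Guard against zero std to avoid division by zero on constant features.
  return flat.map((v, i) => (v - mean[i]) / (std[i] || 1));
}
```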
Figure: Accuracy and Loss of the model
3. Pose Estimation Integration:
- MediaPipe Pose was integrated into the frontend using JavaScript, enabling real-time extraction
of 33 key body points from the webcam video stream.
- The pose estimation pipeline was optimized for speed and accuracy, ensuring smooth
performance on consumer hardware.
- Extracted keypoints were normalized and formatted as input for the exercise classification
model.
4. Implementation of Rule-Based Logic:
- Custom algorithms were developed to analyze joint angles and movement direction for each
exercise type.
- State machines were implemented to track exercise phases (e.g., up/down for push-ups),
applying hysteresis and smoothing to reduce false positives and ensure stable repetition counting.
- Rule-based logic was combined with model predictions to improve robustness, especially in
ambiguous or transitional movements.
5. Frontend Development:
- The user interface was built using HTML5, CSS3, JavaScript, and Bootstrap 5, ensuring a
modern, responsive, and accessible experience.
- Real-time feedback was provided through overlays (skeleton drawing), live statistics (reps,
sets, timer), and interactive controls.
- Chart.js was integrated to visualize workout progress, trends, and session summaries.
- The UI was tested and refined for usability, accessibility, and cross-device compatibility.
6. Backend Development:
- A Flask backend was developed to handle user authentication, session management, and secure
storage of workout history.
- SQLite was chosen for lightweight, local data storage, supporting multiple users and efficient
queries.
- RESTful APIs were implemented for communication between the frontend and backend,
enabling seamless data exchange.
7. User Authentication:
- Secure registration and login were implemented using Flask-Login, with password hashing and
session management.
- Only authenticated users could save and view their workout history, ensuring data privacy
and security.
- User management features were designed for extensibility and future integration with
external platforms.
8. Progress Tracking and Visualization:
- After each workout, exercise data (reps, sets, duration, notes) was saved to the database for
authenticated users.
- Chart.js was used to render interactive charts, allowing users to track their progress over time
and analyze trends.
- The progress module was designed to be extensible for future analytics and personalized
feedback features.
9. Testing and Refinement:
- Unit tests were written for core logic modules (pose extraction, exercise detection, repetition
counting).
- Integration testing ensured smooth interaction between frontend, backend, and database
components.
- User acceptance testing was conducted with real users to gather feedback, identify edge
cases, and refine the interface.
- Performance testing validated real-time responsiveness and accuracy across devices and
browsers.
Chapter 8
System Testing
A comprehensive and multi-layered testing strategy was employed to ensure the reliability,
accuracy, and usability of VisionFitTrack. The following testing methodologies were applied
throughout the development lifecycle:
8.1 Black Box Testing:
- The system was tested as a whole, focusing on input-output behavior without knowledge of
internal code structure.
- Multiple users performed supported exercises (push-ups, pull-ups, bicep curls, shoulder
presses, squats, deadlifts) in different environments, backgrounds, and lighting conditions.
- The accuracy of exercise detection, classification, and repetition counting was validated by
comparing system outputs to manual counts.
- Edge cases, such as partial occlusion, fast movements, and users stepping out of the camera
frame, were specifically tested to ensure robustness.
8.2 White Box Testing:
- The internal logic for pose extraction, joint angle calculation, state transitions, and rule-based
algorithms was thoroughly tested.
- Unit tests were written for core modules (e.g., poseDetection.js, exercises.js) to verify correct
calculation of joint angles, state machine transitions, and rep/set counting logic.
- Code coverage analysis was performed to ensure all critical paths and edge cases were tested.
8.3 Integration Testing:
- The interaction between frontend, backend, and database components was tested to ensure
seamless data flow and correct API responses.
- Scenarios included user registration, login, workout data saving, and retrieval of progress
history.
- Error handling and recovery from network or server failures were validated.
8.4 Beta Testing:
- The application was released to a group of real users for hands-on evaluation.
- User feedback was collected on usability, interface clarity, responsiveness, and overall
experience.
- Issues identified during beta testing (e.g., confusing UI elements, missed reps, slow feedback)
were addressed and the system refined accordingly.
8.5 Progress Visualization Testing:
- The accuracy and clarity of progress charts and workout summaries (using Chart.js) were
validated.
- Test data was used to ensure correct rendering of line graphs, bar charts, and session
summaries.
- The ability to filter, sort, and interpret progress data was tested for usability.
8.6 Responsiveness and Compatibility Testing:
- The user interface was tested on a range of devices (desktops, laptops, tablets) and screen
sizes to ensure a consistent and accessible experience.
- Cross-browser compatibility was verified for Chrome, Firefox, Edge, and Safari.
- The UI was evaluated for accessibility, including color contrast, font size, and keyboard
navigation.
8.7 Performance Testing:
- The system was tested for real-time responsiveness, with latency measured for pose
estimation, exercise classification, and UI updates.
- The application was profiled to identify and resolve performance bottlenecks, ensuring
smooth operation on consumer hardware.
8.8 Acceptance Testing:
- The final system was validated against all functional and non-functional requirements
specified in the SRS.
- Test cases were derived from user stories and requirements to ensure complete coverage.
- The system was deemed ready for deployment only after passing all acceptance criteria.
This rigorous testing approach ensured that VisionFitTrack is robust, accurate, user-friendly, and
ready for real-world use.
Chapter 9
Conclusion
Chapter 10
Bibliography
- MediaPipe: https://mediapipe.dev/
- TensorFlow.js: https://www.tensorflow.org/js
- Flask: https://flask.palletsprojects.com/
- Chart.js: https://www.chartjs.org/
- Bootstrap: https://getbootstrap.com/