Health Mental Ai Driven Companion
Health Mental Ai Driven Companion
Prof. Jagadeesh G
Professor Grade 1
SCORE
AI-Driven Companion for Mental Well-Being Assessment
Capstone Project
Master of Technology
in
Software Engineering
By
Prof. Jagadeesh G
Professor Grade 1
SCORE
VIT, Vellore
April, 2025
2
DECLARATION
I hereby declare that the Capstone Project entitled “AI-Driven Companion for Mental Well-
Being Assessment " submitted by me, for the award of the degree of Master of Technology in Software
Engineering, School of Computer Science Engineering and Information Systems to VIT is a record of
bonafide work carried out by me under the supervision of Jagadeesh G, Professor Grade 1, SCORE,
VIT, Vellore.
I further declare that the work reported in this dissertation has not been submitted and will not
be submitted, either in part or in full, for the award of any other degree or diploma in this institute or
any other institute or university.
Place: Vellore
Date:
3
CERTIFICATE
This is to certify that the Capstone Project entitled “AI-Driven Companion for Mental Well-
Being Assessment” submitted by Sreemayee 20MIS0412, SCORE, VIT, for the award of the degree
of Master of Technology in Software Engineering, is a record of bonafide work carried out by him /
her under my supervision during the period, 13. 12. 2024 to 17.04.2025, as per the VIT code of
academic and research ethics.
The contents of this report have not been submitted and will not be submitted either in part or
in full, for the award of any other degree or diploma in this institute orany other institute or university.
The project fulfills the requirements and regulations of the University and in my opinion meets the
necessary standards for submission.
Place: Vellore
Date:
4
ACKNOWLEDGEMENT
It is my pleasure to express with a deep sense of gratitude to my Capstone Project guide Dr. Jagadeesh
G, Professor Grade 1, School of Computer Science Engineering and Information Systems, Vellore
Institute of Technology, Vellore for her constant guidance, continual encouragement, in my endeavor.
My association with her is not confined to academics only, but it is a great opportunity on my part to
work with an intellectual and an expert in the field of Machine Learning and Artificial Intelligence
"I would like to express my heartfelt gratitude to Honorable Chancellor Dr. G Viswanathan; respected
Vice Presidents Mr. Sankar Viswanathan, Dr. Sekar Viswanathan, Vice Chancellor Dr. V. S. Kanchana
Bhaaskaran; Pro-Vice Chancellor Dr. Partha Sharathi Mallick; and Registrar Dr. Jayabarathi T.
My whole-hearted thanks to Dean Dr. Daphne Lopez, School of Computer Science Engineering and
Information Systems, Head, Department of Software and Systems Engineering Dr. Neelu Khare,
M.Tech Project Coordinator Dr. C. Navaneethan and Dr Malathy E, SCORE School Project
Coordinator Dr. Thandeeswaran R, all faculty, staff and members working as limbs of our university
for their continuous guidance throughout my course of study in unlimited ways.
It is indeed a pleasure to thank my parents and friends who persuaded and encouraged me to take up
and complete my capstone project successfully. Last, but not least, I express my gratitude and
appreciation to all those who have helped me directly or indirectly towards the successful completion
of the project.
Place: Vellore
Date:
C. Sreemayee
5
SUMMARY
Mental health issues are becoming increasingly prevalent, creating a demand for accessible and
scalable solutions. This research focuses on developing an AI-driven mental health companion that
utilizes Recurrent Neural Networks (RNNs) and Natural Language Processing (NLP) to provide
The system is designed to analyze user inputs, detect emotional states, and generate appropriate
responses through deep learning models. RNNs, particularly Long Short-Term Memory (LSTM)
networks, enable the chatbot to maintain conversational context, ensuring more cohesive and
personalized interactions over time. The model is trained using mental health datasets to improve its
Ethical considerations such as data privacy, bias mitigation, and responsible AI use are integrated into
the system. The chatbot serves as an initial support tool rather than a replacement for professional
By leveraging AI-driven models like RNNs, this research demonstrates the potential of AI
companions in enhancing mental health accessibility, reducing stigma, and providing real-time
6
CONTENTS PAGE NO.
Acknowledgement 5
Executive Summary 6
List of Figures 9
List of Tables 10
List of Acronyms 11
1 INTRODUCTION
1.1 Objective 12
1.2 Motivation 13
1.3 Background 14
2.2 Goals 16 - 17
3 TECHNICAL SPECIFICATION
4 LITERATURE SURVEY 21 - 25
7
5 ANALYSIS AND DESIGN
8 RESULTS 49
8.1 Discussion 49 - 50
52
9 SUMMARY
10 REFERENCES 53 - 54
8
LIST OF FIGURES
9
LIST OF TABLES
LITERATURE SURVEY
10
LIST OF ACRONYMS
AI – Artificial Intelligence
ML - Machine Learning
11
CHAPTER 1
INTRODUCTION
The growing prevalence of mental health disorders, including anxiety, depression, and stress,
underscores the urgent need for accessible, affordable, and stigma-free mental health support. Despite
the increasing awareness surrounding mental well-being, many individuals still face significant barriers
to accessing professional help. Traditional therapy methods, while effective, often come with high
costs, long wait times, and geographical limitations, particularly for those in remote or underserved
areas. Additionally, the stigma associated with seeking mental health treatment prevents many
individuals from reaching out for assistance, leading to undiagnosed and untreated mental health
conditions.
To bridge these gaps, Artificial Intelligence (AI)-driven mental health solutions have emerged as a
viable alternative. AI-powered chatbots, in particular, offer on-demand mental health support, enabling
users to engage in private, stigma-free conversations at any time. By utilizing Recurrent Neural
Networks (RNNs) and Natural Language Processing (NLP), these chatbots can analyze user inputs,
detect emotional cues, and generate personalized responses in real time. Their ability to provide
immediate and scalable assistance makes them a promising tool for early intervention in mental
healthcare.
This research aims to develop an AI-driven mental health chatbot that leverages these technologies to
offer empathetic, real-time, and context-aware support, addressing the limitations of conventional
mental health services and improving accessibility for those in need.
1.1 Objective
12
➢ Safeguard User Confidentiality: Protect user privacy by avoiding the storage of identifiable
data, reducing stigma and promoting open engagement without hesitation.
➢ Enhance Reach and Cost-Effectiveness: Deploy the chatbot on mobile and web platforms to
serve users in isolated or underserved regions, offering an economical substitute for
conventional therapy.
➢ Support Early Identification and Ongoing Tracking: Enable continuous real-time analysis to
spot early indicators of mental health concerns, delivering prompt interventions to mitigate
escalation.
➢ Boost Interaction and Long-Term Use: Create a user-friendly interface with features like mood
monitoring and personalized check-ins to encourage consistent engagement and strengthen user
connection.
➢ Design for Future Growth: Construct a flexible system that supports scalability, allowing for
additions like multilingual capabilities and links to other health platforms, ensuring adaptability
over time.
1.2 Motivation
Mental health disorders such as anxiety, depression, and stress are rising globally, yet many individuals
face barriers to seeking professional help due to high costs, long wait times, geographic limitations,
and social stigma. Traditional therapy, while effective, is not always accessible to everyone, leading to
a growing need for alternative mental health support solutions.
The rapid advancements in Artificial Intelligence (AI), particularly Recurrent Neural Networks
(RNNs) and Natural Language Processing (NLP), offer an opportunity to develop intelligent chatbots
capable of providing empathetic, real-time, and stigma-free mental health assistance. AI-driven
chatbots can serve as a readily available, cost-effective, and scalable solution for individuals seeking
mental health support, making intervention more accessible.
This project is motivated by the potential of AI to bridge the gap in mental healthcare, offering
immediate and personalized support while ensuring privacy and convenience. By leveraging AI-driven
models, this research aims to contribute to the development of innovative mental health solutions,
ensuring that help is available to those who need it, regardless of their circumstances.
13
1.3 Background
Mental health issues have become a significant global concern, affecting millions of people across
various age groups. Conditions such as anxiety, depression, and stress are on the rise, yet access to
professional mental health care remains limited due to financial, geographical, and societal barriers.
Traditional therapy methods, while effective, often fail to reach a large portion of the population due
to high costs, long wait times, and social stigma associated with seeking psychological help.
With advancements in Artificial Intelligence (AI) and Natural Language Processing (NLP), AI-
powered chatbots have emerged as a promising solution to provide accessible and stigma-free mental
health support. These chatbots, particularly those utilizing Recurrent Neural Networks (RNNs) and
Long Short-Term Memory (LSTM) models, are capable of understanding and responding to users in a
context-aware, empathetic, and personalized manner. By analyzing user input and emotional cues, AI-
driven companions can offer real-time support, guiding individuals through stress, anxiety, and other
mental health concerns.This research explores the integration of AI in mental healthcare, focusing on
developing an AI-driven mental health chatbot that leverages RNNs and NLP to provide personalized
and effective emotional support while ensuring user privacy and ethical AI implementation.
The scope of the "AI-Driven Companion for Mental Well-Being Assessment" project centers on
creating a conversational AI tool that delivers immediate, tailored mental health assistance. Powered
by Recurrent Neural Networks (RNNs) and Natural Language Processing (NLP), this companion is
engineered to identify conditions like anxiety, depression, and stress by analyzing user responses.
Available on both mobile and web platforms, it provides a confidential, judgment-free space with
robust privacy measures to make users feel at ease seeking support. The system incorporates real-time
tracking for early recognition of mental health challenges, dynamic response features for individualized
care, and engagement tools such as mood monitoring and customized check-ins. Built with scalability
in mind, it supports future enhancements like multilingual functionality and integration with other
digital wellness solutions, ensuring it can evolve with technological progress and diverse user demands.
14
1.5 Project Statement
The "AI-Driven Companion for Mental Well-Being Assessment" project aims to create an affordable,
accessible, and individualized mental health support tool powered by artificial intelligence. With
mental health challenges on the rise, conventional approaches often fail due to expensive services,
prolonged delays, and societal stigma, which deter people from seeking help. To overcome these
barriers, this project introduces a conversational AI companion that integrates Natural Language
Processing (NLP) and Recurrent Neural Networks (RNNs) to engage users naturally, interpret
emotional signals, and assess mental health based on their responses.
15
CHAPTER 2
This dissertation focuses on the design and development of an AI-driven mental health companion
aimed at providing accessible, real-time, and personalized support to individuals experiencing
emotional distress. With the rising prevalence of mental health issues such as anxiety, stress, and
depression, there is a critical need for scalable and stigma-free support systems. Traditional therapy,
while effective, often faces barriers like high cost, limited availability, and social stigma. This project
seeks to bridge that gap by leveraging artificial intelligence to simulate empathetic human conversation
and offer preliminary emotional assistance.
The proposed system integrates Natural Language Processing (NLP) and Recurrent Neural Networks
(RNNs) to analyze user input, detect emotional cues, and deliver context-aware responses. The AI
companion is designed to engage users in supportive dialogue, recognize patterns of mental distress,
and respond in a way that fosters comfort and trust. Additionally, the system incorporates key ethical
principles, ensuring user privacy, data security, and non-judgmental interaction.
This dissertation explores the system's architecture, development process, evaluation metrics, and the
effectiveness of machine learning models used. Through this research, the project demonstrates how
AI can contribute to early intervention in mental health and provide an accessible tool for those who
may not otherwise seek help.
2.2 Goals
The primary goal of this dissertation is to develop an AI-driven mental health companion capable of
providing empathetic, real-time support to individuals experiencing emotional challenges such as
anxiety, stress, and depression. The project aims to utilize advanced machine learning and natural
language processing techniques to simulate human-like conversations that promote emotional well-
being.
16
• To design and implement a conversational AI system that can interact with users in a natural,
supportive manner.
• To integrate Recurrent Neural Networks (RNNs) for accurate emotional state prediction based
on sequential user inputs.
• To apply Natural Language Processing (NLP) techniques for intent recognition, sentiment
analysis, and context understanding.
• To ensure ethical and secure handling of user data, with a strong focus on privacy, anonymity,
and responsible AI usage.
• To create a user-friendly interface that encourages engagement and reduces the stigma
associated with discussing mental health.
17
CHAPTER 3
TECHNICAL SPECIFICATION
1. Conversational Interface
o I created a chatbot that engaged users in natural, human-like conversations using NLP
to interpret their text inputs. It responded with empathetic tones, fostering a supportive
space.
18
o I designed the system to analyze user inputs with RNNs, detecting emotional cues and
predicting conditions like anxiety, depression, and stress. It delivered real-time
feedback based on those insights.
3. Personalized Responses
o The chatbot generated tailored advice and responses based on each user’s emotional
state and past interactions. It remembered previous conversations, offering context-
aware follow-ups.
4. Real-Time Monitoring
o I built in continuous monitoring of user inputs, and the system successfully identified
early signs of mental health issues, prompting timely suggestions for support.
o The chatbot recorded emotional trends over time and showed users their mental health
progress through simple, accessible displays.
6. Multi-Platform Accessibility
o I deployed the system across mobile and web platforms, and it worked seamlessly,
letting users connect from any device they preferred.
o The chatbot let users input and update preferences anonymously. It avoided storing
identifiable details, keeping interactions private.
8. Scalability Support
o I crafted an architecture that adapted to future needs, and it handled additions like multi-
language options and health tool integrations without a hitch.
19
3.4 Non-Functional Requirements
o I ensured user anonymity by skipping personal data storage, and the system protected
interactions with encryption, meeting privacy expectations.
2. Performance
3. Reliability
o I got the system to a 99% uptime level, so it was almost always available. It predicted
mental health conditions with 85% accuracy based on its training.
4. Usability
o I designed an interface that felt like a familiar messaging app, and users picked it up
easily. The language stayed clear, concise, and caring, enhancing their experience.
5. Scalability
o The system grew to handle more users and features effortlessly. I added things like
multi-language support with almost no downtime.
6. Accessibility
o I made the chatbot work across standard browsers and mobile devices, even on slow
connections in remote areas. It followed accessibility guidelines, helping users with
disabilities too.
7. Maintainability
o I kept the code modular and documented, so updates and fixes went smoothly. New AI
models slotted in without trouble.
20
CHAPTER 4
LITERATURE SURVEY
1 Patient Perspectives The study offers critical insights Concerns about AI’s accuracy,
on AI for Mental into public views on AI in mental misdiagnosis, and
Health Care: Cross- health care, with 49.3% of confidentiality risks may
Sectional Survey participants recognizing its hinder trust and adoption.
2 Use of AI in Mental The study reveals AI’s growing Concerns about privacy, ethics,
Health Care: role in mental health care, and the loss of human
Community and enhancing accessibility, reducing connection were prevalent,
Mental Health costs, and improving potentially limiting AI’s
Professionals Survey administrative efficiency for both acceptance in mental health
community members and care. Additionally, nearly half
professionals. It also provides of users reported issues like
specific usage insights, noting inaccuracies, lack of
28% of community members use personalization, and potential
AI for emotional support and 43% misuse, underscoring the need
of professionals use it for research for improved AI design and
and documentation, guiding ethical oversight.
future AI applications.
21
health intervention: A depression. Users expressed influence of demographic
systematic review positive attitudes toward chatbots, differences and the need for
appreciating their psychological standardized metrics,
capabilities and accessibility as a indicating challenges in
mental health intervention. ensuring consistent efficacy.
4 Exploring the Role of The study highlights AI’s It identifies challenges like
Artificial Intelligence effectiveness in mental ethical concerns, data privacy,
in Mental Healthcare: healthcare, using predictive cultural sensitivity,
Current Trends and analytics and machine learning cybersecurity risks, and the
Future Directions for screening, diagnosis, and need for better collaboration,
personalized treatment with larger datasets, and
promising accuracy. standardized metrics.
22
7 Artificial Intelligence AI demonstrates high accuracy in Most studies are in the proof-
for Mental Health predicting and classifying mental of-concept stage, lacking
and Mental Illnesses: health conditions like depression clinical validation for real-
an Overview and schizophrenia using diverse world application. Bridging the
data sources such as EHRs and gap between AI advancements
social media. It offers potential to and practical implementation
transform mental healthcare requires further research and
through improved diagnosis, early validation.
detection, and personalized
treatment.
23
10 A machine-learning The study achieved high accuracy The reliance on specific
approach for stress (98.29% with Random Forest, physiological data may limit
detection using 98.98% with XGBoost) in stress applicability in diverse
wearable sensors in detection using wearable sensors settings. Model selection and
free-living and the SWEET dataset. It preprocessing complexities
environments underscores the effectiveness of could pose challenges for
machine learning and broader implementation.
preprocessing techniques for real-
life stress monitoring.
11 Machine Learning The study highlights the Challenges include the need for
Techniques to Predict effectiveness of CNNs, SVMs, larger, diverse datasets and
Mental Health and RF models in predicting improved model
Diagnoses: A mental health diagnoses like interpretability. Data
Systematic Literature bipolar disorder and limitations and ethical
Review schizophrenia. It emphasizes AI’s concerns must be addressed for
potential for early detection and practical application.
personalized interventions in
mental healthcare.
12 The Design and The study identifies unique The study’s focus on paternal
Evaluation of a challenges of paternal PPD and PPD limits its applicability to
Mental Health evaluates app usability, with the other demographics.
Educational App for gamified version showing Engagement benefits of the
Paternal Postpartum potential for increased gamified version require
Depression engagement. It provides further validation for broader
recommendations for designing use.
effective mental health
educational tools for fathers.
24
13 Enhancing mental The study showcases AI’s Ethical concerns, including
health with Artificial transformative potential in mental data privacy and model bias,
Intelligence: Current healthcare through early pose significant challenges to
trends and future diagnosis and personalized AI adoption. The need for
prospects treatment using predictive regulatory frameworks and
analytics and NLP. It highlights transparent validation limits
high accuracy in detecting immediate implementation.
disorders like depression and
schizophrenia, advancing mental
health interventions.
25
4.1 FINDINGS IN LITERATURE SURVEY:
2. Use of AI in Mental Health Care: Community and Mental Health Professionals Survey
The study examines AI use in mental health care among community members and mental
health professionals, finding that while AI is increasingly used for accessibility, cost reduction,
and administrative efficiency, concerns about privacy, ethics, and human connection persist.
About 28% of community members used AI for emotional support, while 43% of professionals
used it mainly for research and documentation. AI was generally perceived as beneficial,
though nearly half of users reported concerns such as inaccuracies, lack of personalization,
and potential misuse. The findings highlight the need for ethical guidelines, transparency, and
further research to balance AI's benefits with its risks
26
4. Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and
Future Directions – A Narrative Review for a Comprehensive Insight
The study explores the role of AI in mental healthcare, emphasizing its applications in
screening, diagnosis, and treatment. AI-driven predictive analytics enhances treatment
planning by forecasting patient responses, aligning with a shift toward personalized mental
health interventions. Machine learning and deep learning models have shown promising
accuracy in detecting psychiatric disorders, but collaboration between AI and healthcare
professionals remains underexplored. Key challenges include ethical concerns, data privacy,
cultural sensitivity, and cybersecurity risks, highlighting the need for comprehensive
regulatory frameworks. The study underscores the necessity for further research with larger
datasets and standardized metrics to optimize AI integration in mental healthcare
27
7. Artificial Intelligence for Mental Health and Mental Illnesses: an Overview
AI technology shows great potential in transforming mental healthcare by improving
diagnosis, early detection, and personalized treatment of psychiatric illnesses. Studies utilizing
electronic health records, mood rating scales, brain imaging, and social media data have
demonstrated high accuracy in predicting and classifying mental health conditions such as
depression, schizophrenia, and suicidal ideation. However, most studies remain in the proof-
of-concept stage, requiring further validation before clinical implementation. While AI can
help redefine mental illnesses and enhance treatment strategies, additional research is needed
to bridge the gap between AI advancements and real-world clinical practice.
The study explores the role of AI in mental health, emphasizing its potential for diagnosis,
monitoring, and treatment. AI-based diagnostic tools, such as speech and video analysis, have
shown high accuracy in detecting disorders like psychosis, ADHD, and ASD. AI-powered
monitoring, including smartphone-based tracking and physiological data analysis, helps predict
psychiatric symptoms and improve medication adherence. The integration of AI-driven CBT
chatbots has demonstrated effectiveness in reducing depression and anxiety, although
challenges related to data privacy, clinical governance, and ethical concerns remain. The study
highlights the need for further research to refine AI applications and ensure responsible
implementation in mental healthcare
9. An intelligent Chatbot using deep learning with Bidirectional RNN and attention model
The study focuses on developing an intelligent chatbot using deep learning, specifically a
Bidirectional Recurrent Neural Network (BRNN) with an attention mechanism. The chatbot is
trained using the Reddit dataset to enhance conversational accuracy and coherence. Results
indicate that the model achieves a perplexity of 56.10 and a Bleu score of 30.16 after 23,000
training steps, demonstrating improved performance in generating context-aware responses.
Despite these advancements, challenges remain in optimizing training efficiency, reducing
computational costs, and further refining response accuracy. Future research aims to enhance
domain-specific chatbot applications, integrate reinforcement learning, and address ethical
considerations in AI-driven conversations
28
10. A machine-learning approach for stress detection using wearable sensors in free-living
environments
The study explores machine learning-based stress detection using wearable sensors in real-life
environments, addressing limitations of traditional questionnaire-based methods. By analyzing
physiological data from the SWEET dataset, including ECG, skin temperature, and skin
conductance, the study evaluates various machine learning models. The Random Forest model
demonstrated the highest accuracy (98.29%) in binary classification without SMOTE, while
XGBoost performed best (98.98%) in multi-class classification with SMOTE. These findings
highlight the effectiveness of wearable sensors and machine learning in stress detection,
emphasizing the importance of model selection and preprocessing techniques for improved
accuracy.
12. The Design and Evaluation of a Mental Health Educational App for Paternal Postpartum
Depression
The study investigates the design and evaluation of a mental health education app for paternal
postpartum depression (PPD) using the double diamond model of design thinking. Findings
reveal unique challenges faced by fathers with PPD and assess the usability of both traditional
and gamified app designs. The gamified version showed potential for increasing engagement,
29
though both versions were found useful for educating users about PPD. The study contributes
to understanding paternal PPD as a distinct mental health issue, compares the effectiveness of
traditional versus gamified learning, and provides recommendations for designing engaging
mental health educational tools.
13. Enhancing mental health with Artificial Intelligence: Current trends and future
prospects
The study explores the integration of Artificial Intelligence (AI) into mental healthcare,
highlighting its transformative potential in early diagnosis, personalized treatment, and AI-
driven virtual therapy. AI applications, including predictive analytics, natural language
processing, and emotion recognition, have demonstrated high accuracy in detecting mental
health disorders like depression and schizophrenia. However, ethical concerns such as data
privacy, bias in AI models, and the need for regulatory frameworks remain key challenges. The
study emphasizes the importance of responsible AI implementation, transparent validation of
AI models, and ongoing research to optimize AI’s role in mental healthcare
14. A comprehensive review of predictive analytics models for mental illness using machine
learning algorithms
The study provides a comprehensive review of machine learning models for predicting mental
illness, emphasizing the role of data-driven approaches in early detection and treatment. It
examines various machine learning techniques, including supervised, unsupervised, and
reinforcement learning, applied to diverse data sources such as surveys, social media posts,
audio recordings, and wearable sensor data. The findings highlight the potential of AI in
improving diagnostic accuracy and personalized mental health interventions while
acknowledging challenges related to data privacy, model bias, and ethical concerns. Future
research should focus on integrating multimodal data, improving gender-specific assessments,
and enhancing the accessibility of mental health AI solutions
30
anthropomorphism and social presence, fostering greater trust in the system. Conducted in the
context of online government services, the findings indicate that dialog length and positive
user attitudes amplify these effects, with anthropomorphism playing a key role in creating a
human-like experience. The research underscores the potential of VSAs to enhance online
service delivery by adding a “human touch,” though outcomes depend partly on the extent of
conversation and users’ pre-existing attitudes toward the technology.
Current AI-powered mental health platforms encounter a variety of challenges that limit their practical
effectiveness and widespread use. One of the most prominent issues is the lack of personalized
interaction—many systems provide generic responses that fail to address the unique needs of
individuals, resulting in a less impactful user experience. Real-time responsiveness is often missing,
which reduces the ability to support users during critical emotional moments.
Data privacy and security are major concerns, especially for platforms that process sensitive mental
health data or rely on wearable technology, where any breach could significantly damage user trust.
Additionally, sustaining user engagement is a challenge, as many tools do not offer interactive or
compelling features to keep users consistently involved.
For systems that depend on sensor-based input, issues related to data accuracy and reliability can
compromise the overall performance and trustworthiness of the application. Implementing hybrid AI
models, such as those combining convolutional and recurrent neural networks, can also be resource-
intensive and technically complex for many organizations.
Moreover, ethical considerations, including data bias, informed consent, and transparency, remain
areas of concern. Many platforms also tend to be limited in scope, often focusing solely on specific
therapies like Cognitive Behavioral Therapy (CBT), which restricts their ability to serve broader, more
diverse populations. These limitations highlight the need for flexible, secure, and ethically sound AI-
driven mental health solutions that deliver tailored and timely support.
31
4.3 Gantt Chart
This Gantt Chart presents a structured timeline for the development of a “AI-Driven Companion for
Mental Well-Being Assessment” project, spanning from December 2024 to March 2025. The project
begins with the initiation phase, where key objectives and resources are defined. Following this, project
identification and scope clarification take place, establishing the chatbot’s purpose of providing
accessible mental health support through AI.
A literature survey and research are conducted to understand existing mental health prediction methods.
In late December and early January, data collection and preprocessing ensure the quality of the dataset,
which is crucial for accurate predictions. Model development and evaluation follow, where the
chatbot’s natural language processing and prediction capabilities are built and assessed using metrics
like accuracy, precision, recall, and F1 score.
Training is then conducted, allowing the model to learn effectively from the data. The final phases
include testing, result analysis, and documentation to refine the chatbot’s performance and ensure a
comprehensive record of the project. This structured approach enables the creation of an AI-based tool
designed to offer real-time mental health support in a stigma-free and accessible manner.
32
CHAPTER 5
The methodology for developing the "AI-Driven Companion for Mental Well-Being Assessment"
focuses on building an interactive chatbot interface that enables users to discuss mental health concerns
in a supportive environment. The process starts with Natural Language Understanding (NLU)
analyzing user-submitted text to detect intents and identify critical emotional cues or keywords,
providing insight into the user’s state of mind. A dataset is compiled and processed using a "bag of
words" technique, which simplifies text into numerical representations without grammatical
constraints, accommodating a variety of inputs. This dataset is split into training and testing portions
to enhance the model’s reliability. The central predictive component employs a Recurrent Neural
Network (RNN) to classify user intents from text inputs, enabling the selection of personalized
responses. Once an intent is recognized, the system generates a relevant, context-aware reply to
maintain an engaging dialogue. Data management is handled through API interactions with a secure
database, logging interactions and refining the companion’s capabilities over time. This integrated
approach, combining NLU, RNNs, and adaptive response generation, ensures the companion delivers
timely, private, and individualized mental health support.
• Gather conversational data related to mental well-being, including user prompts (patterns) and
corresponding replies, organized by intents with sample expressions and responses.
• Standardize the data by tokenizing, stemming, and stripping punctuation to prepare it for model
training.
• Transform the text into numerical vectors using a "bag of words" method, enabling the neural
network to process it effectively.
• Build a vocabulary by tokenizing and stemming words, converting each user input into a vector
reflecting word occurrences.
33
• Assign intent labels to categorize user queries, serving as the classification targets for the
model.
• Divide the data into training and validation sets to assess the model’s ability to generalize.
• Opt for a dense neural network architecture, suitable for classifying structured intents
efficiently without requiring sequential memory.
• Design the model with an input layer, hidden layers using activation functions, and an output
layer with a softmax function for multi-class intent prediction.
• Train the model with cross-entropy loss and an optimizer (e.g., Adam or SGD) to refine its
ability to link inputs with intents.
• Assess model performance using metrics like accuracy, precision, recall, and F1-score to verify
intent prediction accuracy.
• Adjust hyperparameters such as learning rate and batch size to optimize performance based on
evaluation results.
• Store the trained model and integrate it into a Flask web application for real-time user
interaction.
• Use the Flask API to process incoming user messages, classify intents, and generate responses
dynamically.
• Select and deliver pre-defined responses based on the predicted intent, ensuring relevance to
the user’s input.
• Provide immediate feedback to users, addressing common mental health inquiries effectively.
• Test the deployed companion and collect user input on the relevance and utility of its responses.
• Update responses and retrain the model periodically to enhance precision and broaden its
conversational range.
34
Why RNN is Used
RNNs excel at processing sequential data by retaining memory of prior inputs, making them well-
suited for conversational tasks where the context of earlier messages influences current interpretations.
In this companion, RNNs enable a deeper understanding of dialogue flow, improving intent prediction
by leveraging the sequence of words or phrases.
Advantages of RNN
1. Sequential Data Processing: RNNs handle sequences by maintaining memory through recurrent
connections, ideal for language-based tasks like chatbot development where order is
significant.
2. Context Retention: By recalling earlier parts of a sequence, RNNs grasp contextual nuances in
text, enhancing applications like sentiment analysis or dialogue systems.
3. Efficient Parameter Use: Sharing parameters across time steps reduces complexity and
overfitting risk, making RNNs trainable on large datasets with fewer resources.
5. Temporal Insight: They capture dependencies over time, useful for predicting outcomes based
on historical data, such as in mental health dialogues.
6. NLP Suitability: RNNs learn language patterns across sequences, excelling in tasks like text
generation and intent recognition.
7. Enhanced Variants: Extensions like LSTM and GRU overcome limitations like vanishing
gradients, improving long-term context retention.
8. Real-Time Capability: RNNs process new inputs instantly within context, enabling responsive
applications like live chatbot interactions.
35
5.2 System Architecture
The architecture of the "AI-Driven Companion for Mental Well-Being Assessment" is crafted
to deliver immediate, customized mental health support through a cohesive set of
interconnected modules. Users engage with the companion via an intuitive chat interface on
mobile or web platforms, starting the process by submitting text inputs. These entries are
processed by the Natural Language Understanding (NLU) unit, which employs Natural
Language Processing (NLP) methods to discern the user’s intent and detect emotional
indicators or significant phrases. This data is then channeled into the training module, which
adopts a "bag of words" technique to transform inputs into structured patterns, enhancing the
system’s learning foundation. At the heart of the companion lies a machine learning framework,
leveraging Recurrent Neural Networks (RNNs) to interpret user intentions and craft suitable
replies. Once the intent is determined, the response formulation unit selects and provides a
compassionate, pre-crafted message aligned with the user’s specific emotional state. A secure
database and API system manage data storage and access, safeguarding user information while
supporting ongoing improvements in the companion’s predictive accuracy. This unified design
ensures the delivery of empathetic, relevant responses, enhancing the accessibility and
personalization of mental health care.
36
5.3 Sequence Diagram
37
5.4 Use-Case Diagram
38
5.6 Constraints ,Alternatives and Tradeoffs
• Since the system deals with sensitive mental health information, strict data privacy and
ethical handling protocols are essential. Ensuring compliance with regulations like
GDPR can limit certain data usage and increase development overhead.
• The chatbot may not be able to immediately respond to crisis situations or provide
emergency support, as it lacks real-time monitoring capabilities and human intervention
pathways.
• Language, cultural context, and emotional expression vary greatly across users, which
may affect how the chatbot interprets and responds to inputs. The model may struggle
to generalize well without diverse training data.
Computational Resources
• Training and running AI models with NLP and RNN components requires significant
computational power, which can be a constraint during development or deployment in
resource-limited environments.
39
• Keeping users engaged with a chatbot over time can be difficult, especially if responses
become repetitive or the system lacks personalization. This can reduce long-term
effectiveness.
Scope of Diagnosis
• The chatbot is not a replacement for professional diagnosis or therapy. Its capabilities
are limited to basic support, guidance, and symptom recognition—not in-depth clinical
analysis.
40
CHAPTER 6
DESIGN APPROACH AND DETAILS
• Tasks:
o Define project scope and objectives: Develop a chatbot for mental health
support using NLU and RNN.
• Deliverables:
• Tasks:
o Define intent tags for classification and split the dataset into training and
validation sets.
41
o Develop the RNN model to predict user intents, using a dense neural network
architecture with an input layer, hidden layers, and an output layer with a
softmax function.
o Train the model using cross-entropy loss and an optimizer (e.g., Adam) to
minimize prediction errors.
• Deliverables:
o Trained RNN model capable of predicting user intents from text inputs.
• Tasks:
o Evaluate the model using metrics like accuracy, precision, recall, and F1-score
to ensure accurate intent prediction.
• Deliverables:
• Tasks:
o Integrate the NLU module to analyze user inputs and extract intents, feeding
them into the RNN model.
o Set up a database to log interactions and communicate via API calls for data
storage and retrieval.
o Deploy the trained model as a Flask web application to enable real-time user
interaction through a mobile or web interface.
42
• Deliverables:
o Fully integrated chatbot system with NLU, RNN, and response generation
components.
• Tasks:
o Test the deployed chatbot with users to assess response relevance and
helpfulness.
• Deliverables:
o Retrain the model with updated data to enhance intent prediction and response
generation.
o Finalize the system for long-term use, ensuring the database and API are secure
and functional.
• Deliverables:
43
6.2 Codes and Standards
1. Programming Standards
• The system is developed using Python 3.8+, adhering strictly to PEP 8 coding
guidelines to maintain clarity, consistency, and readability.
• The codebase is documented with inline comments and docstrings for ease of
understanding and future updates.
• The machine learning model is implemented using the PyTorch framework, following
standard practices in deep learning model architecture, training, and evaluation.
• Data preprocessing and augmentation techniques are applied to improve the model's
robustness and generalization to unseen data.
• Recurrent Neural Networks (RNNs) are used for handling sequential data, with
attention to overfitting prevention techniques like dropout and early stopping.
• The system complies with data protection principles similar to GDPR, ensuring that
user interactions are anonymized and not stored without consent.
• User conversations are handled in-memory only, with no persistent storage of sensitive
data, maintaining confidentiality and ethical AI usage.
44
• A modular and scalable architecture allows for easy integration of future components
such as emotion recognition or voice input.
• The design is aligned with the ISO/IEC 25010 quality model, with emphasis on
usability, security, and reliability.
• The system is tested manually using multiple test cases to ensure consistent behaviour
under different conversational scenarios.
The proposed AI-Driven Mental Health Chatbot system is divided into six core modules, each
serving a distinct purpose to ensure accurate, secure, and empathetic mental health support:
This module interprets user input to extract context, emotional tone, and intent. It uses Natural
Language Processing (NLP) techniques like sentiment analysis and intent classification to
identify key phrases and emotional cues. By understanding the user's mental state and the
meaning behind their words, this module lays the foundation for appropriate responses.
This module focuses on analyzing the user’s input to detect potential signs of mental health
issues such as anxiety, depression, and stress. It uses Recurrent Neural Networks (RNNs) to
understand patterns in sequential data. The model is trained on a labeled dataset containing real
mental health conversations, allowing it to make accurate, real-time predictions.
Based on the analysis from the NLU and ML modules, this module generates appropriate and
empathetic responses. It selects or adapts replies that suit the user’s emotional state and intent.
These responses aim to be supportive, stigma-free, and helpful, ensuring users feel heard and
guided.
This is the front-end module where users interact with the chatbot. It mimics popular messaging
platforms to ensure ease of use. The interface is designed to be clean, comfortable, and stigma-
free, encouraging open conversations. It is accessible via both web and mobile platforms.
45
This module ensures all user data is encrypted and anonymized, protecting sensitive mental
health information. It complies with data protection regulations and does not store or share any
personally identifiable data without consent. This helps build user trust and encourages open
engagement.
6. Real-Time Monitoring and Early Detection: • This module continuously monitors user
interactions to detect early signs of mental health issues. It tracks trends in user emotions and
behavior over time, providing the system with the ability to offer timely interventions when
necessary. • This feature is crucial for early detection, enabling the system to recommend
support before issues escalate.
7. Feedback and Continuous Learning: • A feedback mechanism allows users to rate their
experience, which helps the system learn from real-world interactions and improve accuracy.
User feedback is used to periodically retrain the ML model, ensuring it remains adaptive and
responsive. • This module also supports the addition of new intents and emotional cues, making
the chatbot increasingly relevant to user needs.
8. Scalability and Integration: • This module allows the system to scale by supporting additional
features, such as multi-language support or integration with other digital health tools. It ensures
that the system is adaptable to future enhancements, making it sustainable and expandable.
46
CHAPTER 7
IMPLEMENTATION AND TESTING
App.py
# Import required modules
app = Flask(__name__)
app = Flask(__name__)
@app.route("/")
def home():
return render_template("index.html")
@app.route("/get")
def get_bot_response():
userText = request.args.get('msg')
return str(generate_response(userText))
if __name__ == "__main__":
app.run(debug=True)
47
Fig 7.1.1 Sample Code
48
CHAPTER 8
RESULTS
• Understand user input by detecting emotions, intent, and context through sentiment
analysis and intent classification.
• Predict potential mental health conditions like stress, anxiety, and depression with
reasonable accuracy based on conversational cues.
• Provide a private and stigma-free platform, where users can interact in a comfortable
environment via a user-friendly web-based interface.
• Ensure data security through anonymization and encryption measures to build trust
among users.
The chatbot was evaluated using accuracy, precision, recall, and F1-score on a labeled mental
health conversation dataset. The results demonstrated that the system could effectively classify
emotional states and engage users with contextually appropriate responses. Though it is not a
replacement for professional therapy, it shows potential for use as an early intervention and
support tool.
8.1 Discussion
The development of the AI-driven mental health chatbot demonstrates how artificial
intelligence can be leveraged to address the growing need for accessible mental health support.
The integration of Recurrent Neural Networks (RNNs) and Natural Language Processing
(NLP) enables the system to process user inputs effectively and understand emotional
undertones, allowing it to offer personalized, empathetic responses.
49
One of the major strengths of the chatbot is its ability to provide real-time, stigma-free
assistance that is available 24/7, making it highly beneficial for individuals who may be hesitant
to seek traditional therapy. The use of a responsive and intuitive chat-based interface enhances
user engagement, while privacy and security mechanisms ensure trust and confidentiality.
However, some challenges remain. While the system performs well in recognizing emotional
patterns, it may still misinterpret complex or ambiguous language. Moreover, it is designed as
a supportive tool and not a replacement for professional psychological help. Ethical concerns,
such as managing crisis situations and avoiding over-reliance on automation, also need careful
consideration.
The AI companion’s performance suggests that with continuous training, larger datasets, and
integration with expert systems or human intervention layers, it could serve as a scalable mental
health companion, especially in underserved regions.
50
Fig 8.2.2 Web Page
51
CHAPTER 9
SUMMARY
52
CHAPTER 10
REFERENCES
[1] Benda N, Desai P, Reza Z, Zheng A, Kumar S, Harkins S, Hermann A, Zhang Y, Joly R,
Kim J, Pathak J, Reading Turchioe M .Patient Perspectives on AI for Mental Health Care:
Cross-Sectional Survey Study. JMIR Ment Health 2024;11:e58462. DOI: 10.2196/58462
[3] Limpanopparat, S., Gibson, E., & Harris, A. (2024). User engagement, attitudes, and the
effectiveness of chatbots as a mental health intervention: A systematic review. Computers in
Human Behavior: Artificial Humans, 2(2), 100081.
[4] Alhuwaydi AM. Exploring the Role of Artificial Intelligence in Mental Healthcare: Current
Trends and Future Directions – A Narrative Review for a Comprehensive Insight. Risk Manag
Healthc Policy.2024;17:1339-1348 . https://doi.org/10.2147/RMHP.S461562
[5] Jun, G. (2024). Psychological and Mental Health Evaluation of English Language Students
using Recurrent Neural Networks. Mobile Networks and Applications, 1-17.
[6] Dinesh DN, Rao MN, Sinha C. Language adaptations of mental health interventions: User
interaction comparisons with an AI-enabled conversational agent (Wysa) in English and
Spanish. DIGITAL HEALTH. 2024;10. doi:10.1177/20552076241255616
[7] Graham, S., Depp, C., Lee, E.E. et al. Artificial Intelligence for Mental Health and Mental
Illnesses: an Overview. Curr Psychiatry Rep 21, 116 (2019). https://doi.org/10.1007/s11920-
019-1094-0
[8]Lovejoy CA. Technology and mental health: The role of artificial intelligence. European
Psychiatry. 2020 ;55:1-3. doi:10.1016/j.eurpsy.2018.08.004
[9] Dhyani, M., & Kumar, R. (2021). An intelligent chatbot using deep learning with
Bidirectional RNN and attention model. Materials Today: Proceedings, 34, 817–824.
53
[10] Abd Al-Alim, M., Mubarak, R., Salem, N. M., & Sadek, I. (2024). A machine-learning
approach for stress detection using wearable sensors in free-living environments. Computers
in Biology and Medicine, 179, 108918.
[11] Madububambachu, U., Ukpebor, A., & Ihezue, U. (2024). Machine Learning Techniques
to Predict Mental Health Diagnoses: A Systematic Literature Review. Clinical practice and
epidemiology in mental health : CP & EMH, 20, e17450179315688.
[12] Mulgund, P., Li, Y., Singh, R., Purao, S., & Agrawal, L. (2025). The Design and Evaluation
of a Mental Health Educational App for Paternal Postpartum Depression. International Journal
of Human–Computer Interaction, 1–20.
[13] Olawade, D. B., Wada, O. Z., Odetayo, A., David-Olawade, A. C., Asaolu, F., & Eberhardt,
J. (2024). Enhancing mental health with artificial intelligence: Current trends and future
prospects. Journal of Medicine, Surgery, and Public Health, 3, 100099.
[14] Islam, M. M., Hassan, S., Akter, S., Jibon, F. A., & Sahidullah, M. (2024). A
comprehensive review of predictive analytics models for mental illness using machine learning
algorithms. Healthcare Analytics, 6, 100350.
[15] Munnukka, J., Talvitie-Lamberg, K., & Maity, D. (2022). Anthropomorphism and social
presence in human–virtual service assistant interactions: The role of dialog length and attitudes.
Computers in Human Behavior, 135, 107343.
54