Article
Enhancing Stress Detection: A Comprehensive Approach
through rPPG Analysis and Deep Learning Techniques
Laura Fontes , Pedro Machado , Doratha Vinkemeier , Salisu Yahaya , Jordan J. Bird
and Isibor Kennedy Ihianle *
Department of Computer Science, Nottingham Trent University, Nottingham NG1 4FQ, UK;
n1119003@my.ntu.ac.uk (L.F.); pedro.machado@ntu.ac.uk (P.M.); doratha.vinkemeier@ntu.ac.uk (D.V.);
salisu.yahaya@ntu.ac.uk (S.Y.); jordan.bird@ntu.ac.uk (J.J.B.)
* Correspondence: isibor.ihianle@ntu.ac.uk
Abstract: Stress has emerged as a major concern in modern society, significantly impacting human
health and well-being. Statistical evidence underscores the extensive social influence of stress, es-
pecially in terms of work-related stress and associated healthcare costs. This paper addresses the
critical need for accurate stress detection, emphasising its far-reaching effects on health and social
dynamics. Focusing on remote stress monitoring, it proposes an efficient deep learning approach
for stress detection from facial videos. In contrast to the research on wearable devices, this paper
proposes novel Hybrid Deep Learning (DL) networks for stress detection based on remote
photoplethysmography (rPPG), employing Long Short-Term Memory (LSTM), Gated Recurrent
Units (GRU), and 1D Convolutional Neural Network (1D-CNN) models with hyperparameter optimisation
and augmentation techniques to enhance performance. The proposed approach yields a substantial
improvement in accuracy and efficiency in stress detection, achieving up to 95.83% accuracy with the
UBFC-Phys dataset while maintaining excellent computational efficiency. The experimental results
demonstrate the effectiveness of the proposed Hybrid DL models for rPPG-based stress detection.
Keywords: 1D Convolutional Neural Network (1D-CNN); Deep Learning (DL); Gated Recurrent
Units (GRU); Long Short-Term Memory (LSTM); physiological signals; remote photoplethysmography
(rPPG); stress detection

Citation: Fontes, L.; Machado, P.; Vinkemeier, D.; Yahaya, S.; Bird, J.J.; Ihianle, I.K.
Enhancing Stress Detection: A Comprehensive Approach through rPPG Analysis and Deep Learning
Techniques. Sensors 2024, 24, 1096. https://doi.org/10.3390/s24041096

Academic Editor: Helmut Karl Lackner

Received: 7 January 2024; Revised: 29 January 2024; Accepted: 2 February 2024;
Published: 7 February 2024

Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open
access article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Stress in humans is related to mental health and well-being [1]. It is the biological
response to a situation such as a threat, challenge, or physical or psychological barrier [2].
The sympathetic nervous system (SNS) and the parasympathetic nervous system (PNS)
are the two components of the autonomic nervous system (ANS) that directly affect how the
body reacts to stress [3,4]. In highly stressful events, the SNS executes the fight-or-flight
survival response, and the body redirects its efforts toward fighting off threats.
Given its subjective nature, identifying and monitoring the onset, duration, and severity
of stressful events is challenging. This is especially true in workplace situations [5], where
there is often a deliberate choice to ignore stress for professional gain. Recent studies have
shown an increase in stress levels in the office environment [6]. Due to the plasticity of the
brain, chronic or persistent stress has been shown to increase the volume of the amygdala,
a structure within the limbic system that defines and regulates emotions, stores emotional
memories, and, most importantly, executes the fight-or-flight response [7]. Similarly, chronic
stress is associated with a reduction in the mass of the prefrontal cortex [8], which
regulates thoughts, actions, and emotions.

Recent research in the field has introduced various sensor-based solutions for stress
detection, as evidenced by studies such as [4,9,10]. Although some of these solutions use
only a single type of sensor, others employ multimodal sensing. Traditionally, electrocardiography
(ECG) has been used to measure heart rate variability (HRV) for stress detection [11].
Biomarkers like galvanic skin response (GSR), electrodermal activity (EDA), respiration,
and electromyography (EMG), captured with dedicated sensing devices, are increasingly
recognised for assessing affective states and stress levels [12–14]. While these traditional sensor types are
considered the gold standard and provide excellent opportunities for the measurement of
stress-related biomarkers, the ease of use for these devices in a practical scenario becomes
a challenge, as experimentation can only be carried out in a designated equipped setting.
The focus of research is shifting to developing simpler and more convenient sensing so-
lutions that are applicable to everyday life to measure physiological parameters. Recent
advances in technology have led to significant developments in wearable and personal
sensing devices with applications in healthcare, for example, the use of a wearable device
to capture physiological data for health monitoring [15–20]. These devices include chest
bands [15,16,21,22], portable ECG devices [17,23], etc. HRV parameters can be measured
using wristbands such as Empatica E4 wristband [18,24], Microsoft Band 2 [19,25], Polar
watch [20,26], and Fitbit watch [20,26], among others. Researchers analyse personal data
from these devices to provide relevant insights into the individual’s physical and health
status. Although these devices show promise and provide a non-intrusive means of acquiring
data for stress detection models, a major limitation relates to their size, which makes
them uncomfortable for practical use cases [27].
In contrast, rPPG technology measures Blood Volume Pulse (BVP) using a camera,
eliminating the need for sensor attachments [28,29]. By extracting skin pixels from facial
data captured by the camera, rPPG technology utilises changes in skin colour corresponding
to heartbeat to obtain the BVP signal [28,30–32]. This method simplifies the measurement,
reduces sensor complexity, and avoids attachment-related problems. Furthermore, rPPG
can be used to capture HRV measures for analysis, especially in healthcare applications. The
widespread availability of cameras in the form of webcams or smartphones makes rPPG
technology easily accessible to anyone. Due to its advantages, rPPG finds applications in
healthcare, fitness, and forensic science. Integration of rPPG technology into smart mirrors
or smartphones increases its potential as a professional health indicator. Although still in
an early stage, rPPG-based non-contact affective computing has become a growing area
of research in recent years, which can drastically improve human–computer interaction
in real time for stress detection. This paper explores the feasibility of end-to-end methods
for recognising stress by proposing an rPPG-based stress detection system to leverage
non-contact physiological techniques, facilitating the continuous monitoring of pervasive
long-term biomedical signals. The contributions made in this paper are as follows:
• A novel system leveraging non-contact and physiological techniques is proposed,
enabling the continuous monitoring of pervasive biomedical signals for long-term
stress detection.
• Hybrid DL networks and models for rPPG signal reconstruction and Heart Rate (HR)
estimation that significantly improve accuracy and efficiency in stress detection,
achieving up to 95.83% accuracy with the UBFC-Phys dataset.
• Extensive experiments and empirical evaluations of Deep Learning (DL) models for
stress detection provide valuable insights and comparisons.
The remainder of this paper is structured as follows. Section 2 presents a comprehen-
sive literature review of the existing approaches, while Section 3 introduces the methodol-
ogy, collection protocol, and preprocessing steps. In Section 4, the experimental results are
discussed while the conclusion and future work plan are outlined in Section 5.
2. Related Work
The term stress was introduced into medical terminology in 1936, defined as a syndrome
produced by diverse nocuous agents that seriously threatens homeostasis [33].
Selye’s experiments demonstrated that prolonged exposure to severe stress could lead to
disease and tissue damage [34]. Recently, research on stress, its causes, and implications
has gained traction [4,9,10,12–14]. Stress is characterised as a complex interactive phenomenon,
arising when a situation is deemed important, carries the possibility of damage, and
to remotely detect and monitor stress [28,30–32]. A growing number of research papers
have explored this direction. For example, Benezeth et al. [46] proposed an rPPG-based algorithm
that estimates HRV using a simple camera, showing a strong correlation between the HRV
features and different emotional states. Similarly, Sabour et al. [29] proposed an rPPG-
based stress estimation system with an accuracy of 85.48%. Other works on the use of
rPPG indicate that non-contact measures of human physiological parameters
(e.g., breathing rate (BR) and Heart Rate (HR)) are promising and have great
potential for various applications, such as health monitoring [47,50] and affective
computing [51–53]. While these contributions are noteworthy, this paper significantly advances
the field by introducing Hybrid Deep Learning (DL) networks and models for rPPG signal
reconstruction and Heart Rate (HR) estimation. This novel approach presents a substantial
improvement in accuracy and efficiency in stress detection, achieving up to 95.83% accuracy
with the UBFC-Phys dataset. The integration of Hybrid DL networks offers enhanced
capabilities for signal reconstruction and stress classification. Considering these
advantages, rPPG is well-suited for both business and everyday applications, providing
a contactless alternative to contact-based ECG and photoplethysmography (PPG) measurement.
Wearable and contactless devices offer promising alternatives for stress measurement,
providing convenient and non-invasive methods for continuous monitoring. However, the
quality and accuracy of the data generated by these devices can vary. A major limitation
to adopting rPPG is its reduced signal-to-noise ratio, which requires advanced signal
processing. Many reported systems also lack peer review and validation in clinical
settings, raising concerns about data reliability. Wearable devices can be sensitive to
factors such as movement, heat, and perspiration, leading to inaccurate measurements,
and their limited ease of use, especially during sleep or physical activities, is another
significant limitation. Individuals with skin sensitivities, allergies, or specific health
conditions may also find wearing these devices intolerable.
3. Method
The proposed methodology consists of three main parts, as shown in Figure 1. The
primary objective is to detect social stress using contactless physiological signals extracted
from facial videos through DL techniques. In the first part, the pyVHR (Python
framework for Virtual Heart Rate) toolbox [54] is used to capture and estimate the beats per minute
(BPM) from facial video data. In the second part, the estimated BPM series is augmented
and subsequently input into four DL models (Recurrent Neural Network (RNN), LSTM,
GRU, and 1D-CNN). The performance of these models is then evaluated and compared on
the basis of specific metrics. The proposed methodology is implemented using Python 3
and relevant libraries for data manipulation, leveraging an NVIDIA graphics processing
unit (GPU) with Compute Unified Device Architecture (CUDA) version 12.2 and CUDA
Deep Neural Network (CuDNN) library. It should be noted that the default parameters of
pyVHR, including a window size of 8, patch size of 40, and pre/post filter, were used for
the estimation of BPM. The selected methods include Regions of Interest (ROI) approaches:
holistic and convex hull, as well as CuPy CHROM, Torch CHROM, and CuPy POS. Refer
to Table 1 for a brief overview of the methods.
outlined in this paper, attention is given to ground-truth (GT) BVP signals labelled as T1
and T2 for the non-stress and stress classes, respectively. These signals, obtained using the
Empatica E4 wristband at a 64 Hz sampling rate, consist of vectors with 11,520 data points
each (64 × 180 = 11,520). Subsequently, the first 500 data points of the GT BVP signals for
subjects s1 to s4 were plotted to visually depict the impact of non-stress (T1) and stress (T2)
on signal behaviour. Refer to Figure 2 for these graphs.
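The signal bookkeeping above can be checked on a synthetic BVP-like trace. Only the 64 Hz × 180 s arithmetic comes from the text; the waveform itself is an illustrative stand-in, not UBFC-Phys data:

```python
import numpy as np

FS = 64           # Empatica E4 BVP sampling rate (Hz)
DURATION_S = 180  # length of each task recording (s)

# Synthetic stand-in for a GT BVP vector: a noisy ~1.2 Hz (72 BPM) pulse.
t = np.arange(FS * DURATION_S) / FS
bvp = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)

assert bvp.size == 11_520        # 64 x 180 = 11,520 data points, as stated
segment = bvp[:500]              # first 500 samples, as plotted in Figure 2
assert segment.size == 500
```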
Data processing included the application of the Fast Fourier Transform (FFT) to generate
frequency-domain features from the Blood Volume Pulse (BVP) signals. In addition, data
augmentation was implemented with linear interpolation and Gaussian white noise.
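A minimal sketch of the frequency-domain feature step: the magnitude spectrum of a BVP window via NumPy's real FFT. The window length and the synthetic pulse are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def fft_features(signal: np.ndarray, fs: float = 64.0):
    """Return frequency bins (Hz) and the magnitude spectrum of a window."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return freqs, mags

fs = 64.0
t = np.arange(0, 8, 1 / fs)               # an 8 s BVP window
bvp = np.sin(2 * np.pi * 1.2 * t)         # synthetic ~72 BPM pulse
freqs, mags = fft_features(bvp, fs)

# The spectral peak should land near the 1.2 Hz pulse frequency.
peak_hz = freqs[np.argmax(mags[1:]) + 1]  # skip the DC bin
assert abs(peak_hz - 1.2) < 0.15
```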
Figure 1. Stress detection framework. The video frames serve as inputs to the pyVHR toolbox,
enabling the extraction of rPPG BPM signals from facial regions within the frames. The derived BPM
signals are subsequently channelled through DL models (LSTM, GRU, and 1D-CNN), culminating in
stress classification outcomes.
Table 1. Parameters and methods used for rPPG with the pyVHR toolbox.

Window: The number of consecutive video frames processed to estimate the physiological signal.

Holistic: Skin extraction technique that sets the stage for calculating the RGB trace, achieved
by calculating the average intensity of facial skin colour for each channel separately.

Convex hull: A skin extractor that subtracts the eyes and mouth regions from the rest of the
face. It offers dependable real-time face and landmark detection and tracking.

CuPy CHROM: A chrominance-based method used to infer the pulse signal from the RGB traces,
built with the CuPy Python library designed for GPU-accelerated computing with open-source arrays.

Torch CHROM: CHROM built with PyTorch, an open-source ML framework that facilitates building,
training, and deploying DL models through a dynamic computational graph.

CuPy POS: Plane-Orthogonal-to-Skin (POS) is another method used to infer the pulse signal from
RGB traces, using a projection plane that is perpendicular to the skin tone, built with the CuPy library.
Figure 2. GT BVP signals behaviour during no-stress task (T1) and stress task (T2) of subjects s1 to s4.
pyVHR toolbox prevented the evaluation of the performance of 1D-CNN model versions
2 and 3, given their respective architectures. For detailed architectures, layer descriptions,
parameters, and functions of the 1D-CNN-MLP models, please refer to Table 2. The design
flow of the 1D-CNN with 3 CNN and 2 MLP layers, labelled “CNNv2”, is illustrated in
Figure 3.
Layer     Output Size    Parameters
flatten   92,096         0
dense     16             1,473,552
dense     1              17
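The dense-layer parameter counts above (flatten feature size 92,096; dense outputs of 16 and 1) follow the usual fully connected formula, inputs × units + units for the biases:

```python
def dense_params(n_in: int, n_out: int) -> int:
    """Trainable parameters of a fully connected layer: weights plus biases."""
    return n_in * n_out + n_out

# flatten outputs 92,096 features; dense(16) and dense(1) follow.
assert dense_params(92_096, 16) == 1_473_552
assert dense_params(16, 1) == 17
```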
Ac = (TP + TN) / (TP + FP + TN + FN) (3)
Recall—This metric is also known as sensitivity, or the true positive rate. It computes
the proportion of true positive predictions out of all actual positive instances. In the context
of this research project, a high recall value indicates that the model is sensitive to detecting
social stress, which is critical for its practical application.
Re = TP / (TP + FN) (4)
Precision—Calculates the proportion of true positive predictions out of all predicted positive
instances. The higher the value, the more accurately the model predicts true positive
instances. This helps minimise false positives, which is crucial when dealing with
stress assessment.
Pr = TP / (TP + FP) (5)
F1—This metric provides a balanced view of the model's performance by considering both
precision and recall. In stress classification, balancing false positives (reflected in Pr)
against false negatives (reflected in Re) is vital. A high F1 indicates that the model
accurately identifies instances of social stress and minimises false classifications.
F1 = 2 × (Pr × Re) / (Pr + Re) (6)
where the classification outcomes are True Positive (TP), True Negative (TN), False Positive
(FP), and False Negative (FN).
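Equations (3)–(6) can be checked directly. The confusion counts below are an assumption chosen to reproduce the best 1D-CNN figures reported later (95.83% accuracy, which suggests a 24-sample test set); they are illustrative, not taken from the paper's experiment logs:

```python
def metrics(tp: int, tn: int, fp: int, fn: int):
    """Accuracy, recall, precision, and F1 from confusion counts (Eqs. 3-6)."""
    acc = (tp + tn) / (tp + tn + fp + fn)  # Equation (3)
    rec = tp / (tp + fn)                   # Equation (4)
    pre = tp / (tp + fp)                   # Equation (5)
    f1 = 2 * pre * rec / (pre + rec)       # Equation (6)
    return acc, rec, pre, f1

# Hypothetical counts: 11 stress hits, 1 miss, 0 false alarms, 12 correct rejections.
acc, rec, pre, f1 = metrics(tp=11, tn=12, fp=0, fn=1)
print(f"Ac={acc:.2%} Re={rec:.2%} Pr={pre:.2%} F1={f1:.2%}")
# → Ac=95.83% Re=91.67% Pr=100.00% F1=95.65%
```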
4. Experimental Results
The visualisations provided in Figure 4 offer a distinct view of the contrasting charac-
teristics between the non-stress task (T1) and the stress-induced task (T2) in both the time
and frequency domains. In the time domain analysis, the T1 signal exhibits fluctuations
within the range of −250 to 250 units, while in the presence of stress during T2, this range
becomes wider, spanning from −500 to 500 units. This change in range suggests a poten-
tially heightened physiological response during the stress task. Likewise, when we delve
into the frequency domain, we notice a parallel pattern. In the frequency domain repre-
sentation, the T1 signal presents values oscillating between 0 and 1, whereas the T2 signal
exhibits a wider span of 0 to 5. This expanded variation in the frequency domain further
emphasises the distinction between the non-stress and stress-induced states. Moreover,
the implications of these observations extend beyond mere visualisation. The frequency
domain signal has immense potential as a feature for training and testing deep learning
methods aimed at stress classification. While the raw BVP signal encapsulates temporal
patterns, the frequency domain offers insight into the underlying frequency components
that contribute to those patterns. By extracting features from the frequency domain, deep
learning models can potentially capture and leverage distinctive spectral characteristics
related to stress. The plots in Figure 4 illustrate the GT BVP signals of subject 1 during
tasks T1 and T2, before and after the FFT is applied to the data.
Figure 5 shows the estimated heart rate (BPM) extracted from video T1 of subject 1,
using the CuPy CHROM method from the pyVHR toolbox. This visualisation illustrates the
state before and after augmentation using linear interpolation, where it is possible to infer
that expanding the original dataset of 173 data points to 11,009 data points did not alter the
underlying signal, reinforcing the consistency between the original and augmented data.
The processed and augmented dataset is then partitioned into training, validation, and test
datasets using 10% for validation and 10% for testing.
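The 80/10/10 partition described above can be sketched as follows; the shuffling, seed, and any grouping by subject are assumptions, since the paper does not state them:

```python
import numpy as np

def split_dataset(x: np.ndarray, val_frac=0.10, test_frac=0.10, seed=42):
    """Shuffle and partition samples into train/validation/test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val = int(len(x) * val_frac)
    n_test = int(len(x) * test_frac)
    test = idx[:n_test]
    val = idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]
    return x[train], x[val], x[test]

data = np.arange(1000)
train, val, test = split_dataset(data)
print(len(train), len(val), len(test))  # 800 100 100
```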
Likewise, Figure 6 shows the estimated heart rate (BPM) plotted from the T1 and
T2 videos of subject 1, using the CuPy CHROM method from the pyVHR toolbox. This
visualisation illustrates the state before and after augmentation using white noise, where
it is possible to infer that expanding the original dataset of 173 data points to 11,180 data
points did not alter the underlying signal.
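Both augmentations can be sketched with NumPy: linear interpolation resamples the 173-point BPM series onto a denser grid without altering its shape, and white-noise augmentation adds small Gaussian perturbations. The noise scale and the synthetic BPM series are illustrative assumptions; the target length 11,009 is taken from the text:

```python
import numpy as np

def interpolate_bpm(bpm: np.ndarray, target_len: int) -> np.ndarray:
    """Densify a BPM series by linear interpolation, preserving its shape."""
    old = np.linspace(0.0, 1.0, bpm.size)
    new = np.linspace(0.0, 1.0, target_len)
    return np.interp(new, old, bpm)

def add_white_noise(bpm: np.ndarray, sigma: float = 0.5, seed: int = 0) -> np.ndarray:
    """Augment a BPM series with Gaussian white noise."""
    rng = np.random.default_rng(seed)
    return bpm + rng.normal(0.0, sigma, bpm.size)

bpm = 70 + 5 * np.sin(np.linspace(0, 6, 173))   # stand-in for a 173-point estimate
dense = interpolate_bpm(bpm, 11_009)
noisy = add_white_noise(bpm)
print(dense.size, noisy.size)  # 11009 173
```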
Figure 4. Graphs depicting the Time Domain (TD) and Frequency Domain (FD) representations of
the GT BVP signals for subject 1 during tasks T1 and T2.
Figure 5. Plot of estimated BPM extracted from video T1 of subject 1, using the CuPy CHROM
method, before and after augmentation using linear interpolation.
Figure 6. Plot of estimated BPM extracted from videos T1 and T2 of subject 1, using the CuPy
CHROM method, before and after augmentation using white noise.
Figure 7. Training and validation loss and accuracy curves of the GT-1D-CNNv2 model.
Regarding the precision and recall in Table 3, precision in some cases is balanced with
recall, while in others, trade-offs are evident. As previously mentioned, models with both
high precision and high recall scores are effective at correctly classifying stress instances
(true positives) and minimising both false positives and false negatives. For instance, the
1D-CNNv2 model achieved this balance, with an accuracy of 83.33% and Precision and
Recall of 83.33%. On the other hand, models with high Recall, but lower Precision predict
more instances as stressed, including those that are uncertain. This is useful when capturing
all stress instances is a priority, even if it means more false positives. The GRUv1 model in
the FD shows this pattern, with Recall of 91.67% but Precision of 61.11%. It is also clear that
the 1D-CNNv2 model achieved the highest accuracy (83.33%) among the tested methods.
This suggests that it might be the most effective model for classifying stress and non-stress
states from the GT-BVP signals. From Table 4, it can be inferred that the results achieved by
the traditional machine learning method employed by the dataset's authors (75%) and the
CNN-MLP model utilised in the study by Hasanpoor et al. (82%) [54] were both exceeded in
this work (83.33%).
Figure 8. Plot of estimated BPM extracted from videos T1 of subject 1, using the method CuPy
CHROM, before and after augmentation using white noise.
Considering the importance of accuracy, precision, and recall metrics, along with the
focus on real-world deployment utilising edge devices, the following models appear to be
the stronger candidates: 1D-CNN models, namely 1D-CNNv1, using the CuPy-CHROM
method, white noise augmentation, FD, and 100 epochs, with a mere 7.8 s of execution time;
1D-CNNv2, also using the CuPy-CHROM method, with linear interpolation augmentation,
FD, and 50 epochs; and the 1D-CNNv3 using the Torch-CHROM method, with linear
interpolation augmentation, TD, and 50 epochs.
pyVHR Method | DL Method | Version | Aug.  | Domain | Epochs | Accuracy | Precision | Recall  | F1-Score | Time
CuPy_CHROM   | LSTM      | v1      | inter | freq   | 50     | 83.33%   | 83.33%    | 83.33%  | 83.33%   | 59.4
CuPy_CHROM   | LSTM      | v2      | none  | freq   | 100    | 83.33%   | 90.00%    | 75.00%  | 81.82%   | 9.8
CuPy_CHROM   | GRU       | v2      | none  | freq   | 50     | 83.33%   | 83.33%    | 83.33%  | 83.33%   | 6.3
CuPy_CHROM   | GRU       | v1      | none  | freq   | 50     | 79.17%   | 76.92%    | 83.33%  | 80.00%   | 4.6
CuPy_CHROM   | 1D-CNN    | v1      | wn    | freq   | 100    | 95.83%   | 100.00%   | 91.67%  | 95.65%   | 7.8
CuPy_CHROM   | 1D-CNN    | v2      | inter | freq   | 50     | 95.83%   | 100.00%   | 91.67%  | 95.65%   | 14.5
CuPy_CHROM   | 1D-CNN    | v3      | inter | time   | 50     | 91.67%   | 100.00%   | 83.33%  | 90.91%   | 15.0
Torch_CHROM  | LSTM      | v1      | inter | freq   | 50     | 83.33%   | 83.33%    | 83.33%  | 83.33%   | 59.6
Torch_CHROM  | LSTM      | v2      | none  | freq   | 50     | 83.33%   | 90.00%    | 75.00%  | 81.82%   | 6.6
Torch_CHROM  | GRU       | v3      | wn    | freq   | 100    | 83.33%   | 78.57%    | 91.67%  | 84.62%   | 114.7
Torch_CHROM  | GRU       | v2      | none  | freq   | 50     | 83.33%   | 83.33%    | 83.33%  | 83.33%   | 6.5
Torch_CHROM  | 1D-CNN    | v3      | inter | time   | 50     | 95.83%   | 92.31%    | 100.00% | 96.00%   | 15.1
Torch_CHROM  | 1D-CNN    | v2      | inter | freq   | 100    | 91.67%   | 100.00%   | 83.33%  | 90.91%   | 27.6
Torch_CHROM  | 1D-CNN    | v1      | none  | freq   | 50     | 87.50%   | 84.62%    | 91.67%  | 88.00%   | 2.4
CuPy_POS     | LSTM      | v2      | none  | freq   | 50     | 83.33%   | 83.33%    | 83.33%  | 83.33%   | 6.0
CuPy_POS     | LSTM      | v1      | inter | time   | 50     | 79.17%   | 76.92%    | 83.33%  | 80.00%   | 59.2
CuPy_POS     | GRU       | v1      | none  | time   | 50     | 83.33%   | 83.33%    | 83.33%  | 83.33%   | 5.3
CuPy_POS     | GRU       | v1      | none  | time   | 100    | 83.33%   | 83.33%    | 83.33%  | 83.33%   | 9.7
CuPy_POS     | 1D-CNN    | v1      | inter | time   | 50     | 83.33%   | 83.33%    | 83.33%  | 83.33%   | 14.8
CuPy_POS     | 1D-CNN    | v3      | wn    | time   | 50     | 83.33%   | 83.33%    | 83.33%  | 83.33%   | 14.5
CuPy_POS     | 1D-CNN    | v1      | none  | freq   | 50     | 79.17%   | 73.33%    | 91.67%  | 81.48%   | 2.1
Aug. (Augmentation), inter (linear interpolation), wn (white noise), and s (seconds).
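Given the tabulated results, selecting a deployment candidate by accuracy first and execution time second can be scripted directly. The rows below are a subset transcribed from the results; the selection rule itself is an assumption matching the discussion of edge-device candidates:

```python
# (pyVHR method, model, version, accuracy %, time s) — subset of the results.
rows = [
    ("CuPy_CHROM", "1D-CNN", "v1", 95.83, 7.8),
    ("CuPy_CHROM", "1D-CNN", "v2", 95.83, 14.5),
    ("Torch_CHROM", "1D-CNN", "v3", 95.83, 15.1),
    ("CuPy_CHROM", "LSTM", "v1", 83.33, 59.4),
]

# Highest accuracy first; break ties with the shortest execution time.
best = min(rows, key=lambda r: (-r[3], r[4]))
print(best)  # ('CuPy_CHROM', '1D-CNN', 'v1', 95.83, 7.8)
```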
As shown in Table 6, two of the three CNN models (1D-CNNv2 and 1D-CNNv3)
achieved perfect scores (100%) in all performance metrics. These results were omitted from
the best results in Table 5 and are likely a consequence of overfitting, caused by training a
heavy model on a small dataset. The authors believe that deploying these models, along with
their associated weights, to real-world data would likely yield less impressive performance.
Author Contributions: Conceptualisation, L.F., P.M. and I.K.I.; Methodology, L.F.; Software, L.F.;
Validation, L.F., P.M. and I.K.I.; Formal analysis, L.F., P.M. and I.K.I.; Investigation, L.F.; Resources,
L.F. and P.M.; Writing—original draft preparation, L.F.; Writing—review and editing, L.F., P.M., D.V.,
S.Y., J.J.B. and I.K.I.; Visualisation, L.F. and P.M.; Supervision, I.K.I. All authors have read and agreed
to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: All the participating authors have signed and given the informed
consent statements.
Data Availability Statement: The dataset for this study is publicly available.
References
1. Siddique, C.; D’Arcy, C. Adolescence, stress, and psychological well-being. J. Youth Adolesc. 1984, 13, 459–473. [CrossRef]
2. Everly, G.S., Jr.; Lating, J.M.; Everly, G.S.; Lating, J.M. The anatomy and physiology of the human stress response. In A Clinical
Guide to the Treatment of the Human Stress Response; Springer: New York, NY, USA, 2019; pp. 19–56.
3. Thayer, J.F.; Åhs, F.; Fredrikson, M.; Sollers III, J.J.; Wager, T.D. A meta-analysis of heart rate variability and neuroimaging studies:
Implications for heart rate variability as a marker of stress and health. Neurosci. Biobehav. Rev. 2012, 36, 747–756. [CrossRef]
[PubMed]
4. McCarty, R. The Fight-or-Flight Response: A Cornerstone of Stress Research; Elsevier: Amsterdam, The Netherlands, 2016; pp. 33–37.
5. Thorsteinsson, E.B.; Brown, R.F.; Richards, C. The relationship between work-stress, psychological stress and staff health and
work outcomes in office workers. Psychology 2014, 5, 1301–1311. [CrossRef]
6. van Kraaij, A.W.J.; Schiavone, G.; Lutin, E.; Claes, S.; Van Hoof, C. Relationship between chronic stress and heart rate over time
modulated by gender in a cohort of office workers: Cross-sectional study using wearable technologies. J. Med. Internet Res. 2020,
22, e18253. [CrossRef] [PubMed]
7. McEwen, B.S. Neurobiological and systemic effects of chronic stress. Chronic Stress 2017, 1, 2470547017692328. [CrossRef]
8. McKlveen, J.M.; Morano, R.L.; Fitzgerald, M.; Zoubovsky, S.; Cassella, S.N.; Scheimann, J.R.; Ghosal, S.; Mahbod, P.; Packard,
B.A.; Myers, B.; et al. Chronic stress increases prefrontal inhibition: A mechanism for stress-induced prefrontal dysfunction.
Biol. Psychiatry 2016, 80, 754–764. [CrossRef] [PubMed]
9. Samson, C.; Koh, A. Stress monitoring and recent advancements in wearable biosensors. Front. Bioeng. Biotechnol. 2020, 8, 1037.
[CrossRef] [PubMed]
10. Dalmeida, K.M.; Masala, G.L. HRV features as viable physiological markers for stress detection using wearable devices. Sensors
2021, 21, 2873. [CrossRef]
11. Shaffer, F.; Ginsberg, J.P. An overview of heart rate variability metrics and norms. Front. Public Health 2017, 5, 258. [CrossRef]
[PubMed]
12. Rodríguez-Arce, J.; Lara-Flores, L.; Portillo-Rodríguez, O.; Martínez-Méndez, R. Towards an anxiety and stress recognition system
for academic environments based on physiological features. Comput. Methods Programs Biomed. 2020, 190, 105408. [CrossRef]
13. Greco, A.; Valenza, G.; Lázaro, J.; Garzón-Rey, J.M.; Aguiló, J.; De-la Camara, C.; Bailón, R.; Scilingo, E.P. Acute stress state
classification based on electrodermal activity modeling. IEEE Trans. Affect. Comput. 2021, 14, 788–799. [CrossRef]
14. Pourmohammadi, S.; Maleki, A. Stress detection using ECG and EMG signals: A comprehensive study. Comput. Methods Programs
Biomed. 2020, 193, 105482. [CrossRef] [PubMed]
15. Marois, A.; Lafond, D.; Gagnon, J.F.; Vachon, F.; Cloutier, M.S. Predicting stress among pedestrian traffic workers using
physiological and situational measures. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting; SAGE
Publications Sage CA: Los Angeles, CA, USA, 2018; Volume 62; pp. 1262–1266.
16. Sánchez-Reolid, R.; Martínez-Rodrigo, A.; López, M.T.; Fernández-Caballero, A. Deep support vector machines for the identifica-
tion of stress condition from electrodermal activity. Int. J. Neural Syst. 2020, 30, 2050031. [CrossRef] [PubMed]
17. Tanev, G.; Saadi, D.B.; Hoppe, K.; Sorensen, H.B. Classification of acute stress using linear and non-linear heart rate variability
analysis derived from sternal ECG. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 3386–3389.
18. Garbarino, M.; Lai, M.; Bender, D.; Picard, R.W.; Tognetti, S. Empatica E3—A wearable wireless multi-sensor device for real-
time computerized biofeedback and data acquisition. In Proceedings of the 2014 4th International Conference on Wireless
Mobile Communication and Healthcare-Transforming Healthcare Through Innovations in Mobile and Wireless Technologies
(MOBIHEALTH), Athens, Greece, 3–5 November 2014; pp. 39–42.
19. Shcherbina, A.; Mattsson, C.M.; Waggott, D.; Salisbury, H.; Christle, J.W.; Hastie, T.; Wheeler, M.T.; Ashley, E.A. Accuracy
in wrist-worn, sensor-based measurements of heart rate and energy expenditure in a diverse cohort. J. Pers. Med. 2017, 7, 3.
[CrossRef] [PubMed]
20. Caminal, P.; Sola, F.; Gomis, P.; Guasch, E.; Perera, A.; Soriano, N.; Mont, L. Validity of the Polar V800 monitor for measuring
heart rate variability in mountain running route conditions. Eur. J. Appl. Physiol. 2018, 118, 669–677. [CrossRef] [PubMed]
21. Salai, M.; Vassányi, I.; Kósa, I. Stress detection using low cost heart rate sensors. J. Healthc. Eng. 2016, 2016. [CrossRef] [PubMed]
22. Moridani, M.; Mahabadi, Z.; Javadi, N. Heart rate variability features for different stress classification. Bratisl. Lek. Listy 2020,
121, 619–627. [CrossRef] [PubMed]
23. Salahuddin, L.; Cho, J.; Jeong, M.G.; Kim, D. Ultra short term analysis of heart rate variability for monitoring mental stress in
mobile settings. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and
Biology Society, Lyon, France, 22–26 August 2007; pp. 4656–4659.
24. Schmidt, P.; Reiss, A.; Duerichen, R.; Marberger, C.; Van Laerhoven, K. Introducing wesad, a multimodal dataset for wearable
stress and affect detection. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO,
USA, 16–20 October 2018; pp. 400–408.
25. Chudy, N.S. Testing of Wrist-Worn-Fitness-Tracking Devices during Cognitive Stress: A Validation Study. Bachelor’s Thesis,
University of Central Florida, Orlando, FL, USA, 2017.
26. Giles, D.; Draper, N.; Neil, W. Validity of the Polar V800 heart rate monitor to measure RR intervals at rest. Eur. J. Appl. Physiol.
2016, 116, 563–571. [CrossRef]
27. Heikenfeld, J.; Jajack, A.; Rogers, J.; Gutruf, P.; Tian, L.; Pan, T.; Li, R.; Khine, M.; Kim, J.; Wang, J. Wearable sensors: Modalities,
challenges, and prospects. Lab Chip 2018, 18, 217–248. [CrossRef]
28. McDuff, D. Camera Measurement of Physiological Vital Signs. Acm Comput. Surv. 2023, 55, 1–40. [CrossRef]
29. Sabour, R.M.; Benezeth, Y.; De Oliveira, P.; Chappe, J.; Yang, F. Ubfc-phys: A multimodal database for psychophysiological
studies of social stress. IEEE Trans. Affect. Comput. 2021, 14, 622–636. [CrossRef]
30. Cheng, Y.C.; Chou, T.I.; Indikawati, F.I.; Winiarti, S.; Dahlan, A.; Selatan, R.; Yogyakarta, D. Stress Detection from Multimodal
Wearable Sensor Data. IOP Conf. Ser. Mater. Sci. Eng. 2020, 771, 012028. [CrossRef]
31. Herranz Olazábal, J.; Wieringa, F.; Hermeling, E.; Van Hoof, C. Camera-Derived Photoplethysmography (rPPG) and Speckle
Plethysmography (rSPG): Comparing Reflective and Transmissive Mode at Various Integration Times Using LEDs and Lasers.
Sensors 2022, 22, 6059. [CrossRef] [PubMed]
32. Yang, Z.; Wang, H.; Lu, F. Assessment of Deep Learning-Based Heart Rate Estimation Using Remote Photoplethysmography
Under Different Illuminations. IEEE Trans. Hum. Mach. Syst. 2022, 52, 1236–1246. [CrossRef]
33. Schneiderman, N.; Ironson, G.; Siegel, S.D. Stress and health: Psychological, behavioral, and biological determinants. Annu. Rev.
Clin. Psychol. 2005, 1, 607–628. [CrossRef] [PubMed]
34. Selye, H. A syndrome produced by diverse nocuous agents. Nature 1936, 138, 32. [CrossRef]
35. Dhama, K.; Latheef, S.K.; Dadar, M.; Samad, H.A.; Munjal, A.; Khandia, R.; Karthik, K.; Tiwari, R.; Yatoo, M.I.; Bhatt, P.; et al.
Biomarkers in stress related diseases/disorders: Diagnostic, prognostic, and therapeutic values. Front. Mol. Biosci. 2019, 6, 91.
[CrossRef] [PubMed]
36. Can, Y.S.; Arnrich, B.; Ersoy, C. Stress detection in daily life scenarios using smart phones and wearable sensors: A survey. J.
Biomed. Inform. 2019, 92, 103139. [CrossRef] [PubMed]
37. Arsalan, A.; Anwar, S.M.; Majid, M. Mental stress detection using data from wearable and non-wearable sensors: A review. arXiv
2022, arXiv:2202.03033.
38. Nath, R.K.; Thapliyal, H. Smart wristband-based stress detection framework for older adults with cortisol as stress biomarker.
IEEE Trans. Consum. Electron. 2021, 67, 30–39. [CrossRef]
39. Chan, S.F.; La Greca, A.M. Perceived stress scale (PSS). In Encyclopedia of Behavioral Medicine; Springer: Berlin/Heidelberg,
Germany, 2020; pp. 1646–1648.
40. Cheng, B.; Fan, C.; Fu, H.; Huang, J.; Chen, H.; Luo, X. Measuring and computing cognitive statuses of construction workers
based on electroencephalogram: A critical review. IEEE Trans. Comput. Soc. Syst. 2022, 9, 1644–1659. [CrossRef]
41. Wang, X.; Li, D.; Menassa, C.C.; Kamat, V.R. Investigating the effect of indoor thermal environment on occupants’ mental
workload and task performance using electroencephalogram. Build. Environ. 2019, 158, 120–132. [CrossRef]
42. Abellán-Huerta, J.; Prieto-Valiente, L.; Montoro-García, S.; Abellán-Alemán, J.; Soria-Arcos, F. Correlation of blood pressure
variability as measured by clinic, self-measurement at home, and ambulatory blood pressure monitoring. Am. J. Hypertens. 2018,
31, 305–312. [CrossRef]
43. Chen, Y.; Rao, M.; Feng, K.; Niu, G. Modified Varying Index Coefficient Autoregression Model for Representation of the
Nonstationary Vibration From a Planetary Gearbox. IEEE Trans. Instrum. Meas. 2023, 72, 1–12. [CrossRef]
44. Shahid, M.M.; Agada, G.E.; Kayyali, M.; Ihianle, I.K.; Machado, P. Towards Enhanced Well-Being: Monitoring Stress and Health
with Smart Sensor Systems. In Proceedings of the 2023 International Conference Automatics and Informatics (ICAI), Varna,
Bulgaria, 5–7 October 2023; pp. 432–437.
45. Ihianle, I.K.; Machado, P.; Owa, K.; Adama, D.A.; Otuka, R.; Lotfi, A. Minimising redundancy, maximising relevance: HRV
feature selection for stress classification. Expert Syst. Appl. 2024, 239, 122490. [CrossRef]
46. Benezeth, Y.; Bobbia, S.; Nakamura, K.; Gomez, R.; Dubois, J. Probabilistic signal quality metric for reduced complexity
unsupervised remote photoplethysmography. In Proceedings of the 2019 13th International Symposium on Medical Information
and Communication Technology (ISMICT), Oslo, Norway, 8–10 May 2019; pp. 1–5.
47. Hassan, M.; Malik, A.; Fofi, D.; Saad, N.; Meriaudeau, F. Novel health monitoring method using an RGB camera. Biomed. Opt.
Express 2017, 8, 4838–4854. [CrossRef] [PubMed]
48. Selvaraju, V.; Spicher, N.; Wang, J.; Ganapathy, N.; Warnecke, J.M.; Leonhardt, S.; Swaminathan, R.; Deserno, T.M. Continuous
monitoring of vital signs using cameras: A systematic review. Sensors 2022, 22, 4097. [CrossRef]
49. Lee, R.J.; Sivakumar, S.; Lim, K.H. Review on remote heart rate measurements using photoplethysmography. Multimed. Tools
Appl. 2023, 1–30. [CrossRef]
50. Abbas, L.; Samy, S.; Ghazal, R.; Eldeib, A.M.; ElGohary, S.H. Contactless Vital Signs Monitoring for Public Health Welfare.
In Proceedings of the 2021 9th International Japan-Africa Conference on Electronics, Communications, and Computations
(JAC-ECC), Alexandria, Egypt, 13–14 December 2021; pp. 183–186.
51. Yu, Z.; Li, X.; Zhao, G. Facial-video-based physiological signal measurement: Recent advances and affective applications. IEEE
Signal Process. Mag. 2021, 38, 50–58. [CrossRef]
52. Casado, C.Á.; Cañellas, M.L.; López, M.B. Depression recognition using remote photoplethysmography from facial videos. IEEE
Trans. Affect. Comput. 2023, 14, 3305–3316. [CrossRef]
53. Birla, L.; Gupta, P. AND-rPPG: A novel denoising-rPPG network for improving remote heart rate estimation. Comput. Biol.
Med. 2022, 141, 105146. [CrossRef] [PubMed]
54. Hasanpoor, Y.; Motaman, K.; Tarvirdizadeh, B.; Alipour, K.; Ghamari, M. Stress Detection Using PPG Signal and Combined
Deep CNN-MLP Network. In Proceedings of the 2022 29th National and 7th International Iranian Conference on Biomedical
Engineering (ICBME), Tehran, Iran, 21–22 December 2022; pp. 223–228.
55. Kirschbaum, C.; Pirke, K.M.; Hellhammer, D.H. The ‘Trier Social Stress Test’–a tool for investigating psychobiological stress
responses in a laboratory setting. Neuropsychobiology 1993, 28, 76–81. [CrossRef] [PubMed]
56. Boccignone, G.; Conte, D.; Cuculo, V.; D’Amelio, A.; Grossi, G.; Lanzarotti, R.; Mortara, E. pyVHR: A Python framework for
remote photoplethysmography. PeerJ Comput. Sci. 2022, 8, e929. [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.