Abstract
This paper presents the conceptual design, implementation and evaluation of a VR-based system for treating phobias that simulates stress-provoking real-world situations, accompanied by physiological signal monitoring. The element of novelty is the holonic architecture we propose for the real-time adaptation of the virtual environment in response to biophysical data (heart rate (HR), electrodermal activity (EDA) and electroencephalogram (EEG)) recorded from the patients. In order to enhance the impact of the therapy, we propose the use of gamified scenarios. Four acrophobic patients were gradually exposed to anxiety-generating scenarios (on the ground and at the first, fourth and sixth floors of a building, at different distances from the railing), while EEG, EDA and HR were recorded. The patients also reported their level of fear on a scale from 0 to 10. The treatment procedure consisted of a VR-based game in which the subjects were exposed to the same heights. They had to perform small quests at various distances from the railing and report their in-game stress level, while biophysical data were recorded. The real-life scenarios were then repeated in order to assess the efficiency of the VR treatment plan.
1 Introduction
This paper presents the conceptual design, development and evaluation of a system for treating phobias, based on gradual exposure to VR-simulated real-life situations that generate discomfort, accompanied by physiological signal monitoring. The element of novelty in our approach is the holonic architecture we propose and the real-time adaptation of the virtual environment (VE) in response to biophysical data (heart rate, electrodermal activity and electroencephalogram) recorded from the users, which is performed by means of deep learning algorithms and neural network (NN) solutions.
We designed a prototype for diagnosing and treating acrophobia that includes the corresponding holons operating on the input data and the system's workflow. The prototype was tested with four users in both the real world and the virtual environment, while biophysical data were recorded. The collected data were fed to two neural networks (a shallow and a deep one) for fear level classification based on the physiological signals. We obtained a classification accuracy of over 70%. Moreover, even though the training period was short, there were improvements in almost all the studied parameters in the post-treatment phase of the experiment, demonstrating the efficiency of VR-based gameplay for relieving acrophobia.
The contributions of the authors are: introducing a holonic architecture for the VR system; designing neural networks as the main machine learning technique for ensuring system adaptability (stress level estimation and adaptive treatment); and acquiring biophysical data and using them to provide the most appropriate treatment. Our VR-based system (PhoVRET – Phobia Virtual Reality Exposure Therapy) is still in development, and the results obtained so far allow us to conclude that the conceptual strategy and the chosen machine learning techniques will fulfill the objective of the system: to be a useful tool in treating various phobias.
2 Related Work - Virtual Reality and Phobia Treatment
Phobia is a type of anxiety disorder manifested through an extreme, uncontrolled and irrational fear, triggered by various stimuli. Phobias are classified as follows: agoraphobia – fear of crowds; social phobias – fear of public speaking; and specific phobias – caused by various situations and objects that lead to panic attacks [1]. A phobic crisis causes both physical symptoms – high heart rate, sweating, dizziness – and emotional symptoms – anxiety, panic, inability to control one's fear despite a conscious effort. 13% of the world's population suffers from a form of phobia: acrophobia (fear of heights) – 7.5%, arachnophobia (fear of spiders) – 3.5% and aerophobia (fear of flying) – 2.6% [2]. Of these, only 20% seek specialized therapy [3].
The treatment indicated in the case of phobias is either medical (medication) or psychological – Cognitive-Behavioral Therapy (CBT), which leads the patient to reinterpret the traumatic experience through thought and behavior control and gradual in-vivo exposure to anxiety-producing stimuli, in the presence of the therapist, who monitors the procedure and adjusts the exposure intensity.
Virtual Reality has been used since the 1990s in phobia therapy. VR can train phobic patients in ways that replicate real-world threatening environments on a gradual scale, being immersive, attractive and safe, and offering a great variety of environments that can be repeated, with visual and auditory stimuli controlled by the therapist, at relatively low cost.
A challenge in the development of VRET environments is to build a system that can be used to treat several specific phobias. For our multi-phobia treatment system we chose a holonic-based approach, and for adjusting the exposure level we measure biophysical responses – HR, GSR and EEG. Level adjustment is performed by means of deep learning algorithms and neural network solutions. Furthermore, in order to enhance the impact of the therapy, we propose the use of gamified scenarios. Thus, a diverse, entertaining and engaging gameplay experience will attract and motivate phobic patients to train and learn how to overcome their fears.
As far as we know, there is no VRET system for treating multiple phobias and no use of neural network-based techniques in phobia treatment. To design such a system, we chose a holonic-based paradigm in order to overcome the complexity of the system, and neural networks to estimate the fear level and to generate an adaptive, automatic treatment for the patients, without interference from the therapist.
3 A Holonic-Based Architecture for Phobias VRET System
3.1 The Proposed Architecture
Although the term holon was proposed by Arthur Koestler in 1967 [4] to highlight the relationship between wholes and parts, the first applications of the holonic paradigm appeared in the early 1990s in manufacturing systems. The term holon combines the Greek holos = whole with the suffix -on, which, as in the words "proton" or "neutron", suggests a particle or part, and was introduced to explain the complexity and evolution of biological and social systems. In these types of systems it is generally hard to make a distinction between wholes and parts; often an entity is both a part and a whole. Koestler used the Janus effect to describe a component of a hierarchy: all have two faces looking in opposite directions: the face turned towards the subordinate levels is that of a self-contained whole; the face turned upward towards the apex, that of a dependent part. One is the face of the master, the other the face of the servant [4]. In short, from Koestler's point of view, a holon is a component of a hierarchy that can act intelligently. A hierarchy of holons is called a holarchy, and since a holon can itself be a holarchy, a holon has a recursive structure [4].
The Holonic Manufacturing System (HMS) consortium transposed Koestler's concepts into manufacturing systems [5]. A holon is seen as an autonomous and cooperative building block having two parts: an information processing part and a physical processing part [5, 6]. The main attributes of a holon are autonomy, cooperation, self-organization and re-configurability [5,6,7]. Autonomy means the capability of a holon to act independently, so it can create its own plans and control their execution [7]. Cooperation refers to the fact that a group of holons develop mutually acceptable plans together and execute them [6, 7]. The holons' ability to reorganize themselves within the hierarchy in order to achieve the purpose of the system defines self-organization [7]. A holon can modify its functions to be more flexible; its modular structure allows it to reconfigure itself [7]. In a holarchy, the holons cooperate to achieve an objective using a set of rules imposed by the holarchy [7, 8].
A reference architecture for HMS, called the Product-Resource-Order-Staff Architecture (PROSA), was proposed by Van Brussel et al. in [8]. PROSA consists of three basic holons: resource holons, product holons and order holons, capturing the main concerns in a manufacturing company related to resources, processes and customers' demands. Recently, the PROSA architecture has been refined and became the ARTI – Activity Resource Type Instance – architecture. ARTI focuses on intelligent beings and considers decision-making technology as a repository of available tools [9]. The basic holons were transformed as follows: order holons into activity instances, product holons into activity types, and resource holons subdivided into instances and types [9].
Another holonic manufacturing paradigm-based architecture is ADACOR (ADAptive holonic COntrol aRchitecture for distributed manufacturing systems), proposed by Leitão and Restivo in [9]. ADACOR introduces adaptive control in order to address the agile reaction to unexpected changes and disturbances in the environment. Barbosa extended the ADACOR holonic architecture to ADACOR2, adding self-organizing capabilities [10]. For developing an HMS, a multi-agent methodology called ANEMONA was proposed [11, 12].
There are few holonic paradigm-based architectures for medical systems. A medical holarchy is defined in [13] as a system of collaborative medical entities: patients, physicians, medical and sensor devices, etc. Three levels are designed for a Holonic Enterprise in medicine: the Inter-Enterprise Level: medical units (hospitals, pharmacies, clinics/laboratories); the Intra-Enterprise Level: units at the enterprise level; and the Resource Level: medical personnel, physicians, medical assistants, devices for medical tests, information processing resources (medical files, software and hardware resources, databases, decision support systems), etc. Holonic paradigm-based architectures were used for some medical diagnosis or patient-monitoring systems in [14,15,16], without considering their integration into a unitary medical system.
A holon-based architecture for medical systems seen in its entirety and as a part of a national system, which in turn is part of a continental system, is proposed by Moise et al. in [17]. The HMedA architecture (Holonic Medical Architecture) provides the following attributes for medical systems: flexibility, robustness, scalability, quick reaction to unexpected disturbances, modularity, decentralization, auto-configuration.
In the view of developing an integrated VRET system to be used in the treatment of various phobias, we have chosen to use a holonic-based architecture for several reasons:
- We see the phobias treatment system as a part of a medical system;
- We want to design a VRET system for treating multiple phobias, not just one;
- We want to design an easily configurable and flexible system, which can auto-adapt to patients' physiological records.
The architecture proposed for the phobias treatment VRET (PhoVRET) system is adapted from HMedA (see Fig. 1) [17].
We use four classes of holons: supervisor holons (hexagon shapes), services holons (rectangle shapes), resources holons (ellipse shapes) and specific phobia holons (rounded rectangle shapes) [12]. The supervisor holons coordinate all the activities of the holons from a holarchy. Thus, the Supervisor holon – PhoVRET system coordinates all the activities of the PhoVRET holons through the supervisors of the holarchies. Services holons provide services in a holarchy. The Services holon – Diagnosis & Treatment & Monitoring is itself a holarchy composed of four holons: a supervisor and three services holons for phobia diagnosis, treatment and monitoring. The Services holon – Patients data recorder deals with the management of the data related to patients. Specific phobia holons are responsible for other resources and services holons of the system: Acrophobia Holon, Aerophobia Holon, Social Phobia Holon, etc. Resources holons deal with the primary resources of the system, such as patients' data (Resources holon – Patients data), sensor devices (Resources holon – Sensor devices) or medical specialists (Resources holon – Medical specialists). For sensor device management, we designed a dedicated holarchy (a resource holon) – the Resources holon – Sensor devices.
Holons possess social abilities – they cooperate with each other and with humans. We adopted the concept of cooperation domain from [18] in order to facilitate the communication and cooperation between holons. Cooperation domains can be created both dynamically and statically, and a holon can be simultaneously part of several cooperation domains.
A fixed cooperation domain is associated with each of the following holons: the Services holon – Diagnosis & Treatment & Monitoring; the Services holon – Patients data recorder; each Specific phobia holon; the Resources holon – Sensor devices; and the Resources holon – Medical specialists. Thus, the holons from the same group interact directly and cooperate in the cooperation domain associated with the group, while holons from different groups interact with each other within cooperation domains created ad-hoc.
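Purely as an illustration of how these holon classes and cooperation domains could be organized in software, the following Python sketch models a fixed cooperation domain for the acrophobia group; the class and method names are hypothetical and not part of the implemented system:

```python
from dataclasses import dataclass, field
from typing import List


class Holon:
    """Minimal holon: an autonomous unit that can join cooperation domains."""

    def __init__(self, name: str):
        self.name = name
        self.domains: List["CooperationDomain"] = []

    def join(self, domain: "CooperationDomain") -> None:
        domain.members.append(self)
        self.domains.append(domain)

    def receive(self, sender: "Holon", message: str) -> None:
        print(f"{self.name} received '{message}' from {sender.name}")


@dataclass
class CooperationDomain:
    """A fixed or ad-hoc space in which member holons exchange messages."""
    name: str
    members: List[Holon] = field(default_factory=list)

    def broadcast(self, sender: Holon, message: str) -> None:
        for holon in self.members:
            if holon is not sender:
                holon.receive(sender, message)


# Hypothetical instantiation of part of the PhoVRET holarchy described above.
supervisor = Holon("Supervisor holon - PhoVRET system")
dtm = Holon("Services holon - Diagnosis & Treatment & Monitoring")
sensors = Holon("Resources holon - Sensor devices")
acrophobia = Holon("Acrophobia holon")

fixed_domain = CooperationDomain("Acrophobia cooperation domain")
for h in (supervisor, dtm, sensors, acrophobia):
    h.join(fixed_domain)

fixed_domain.broadcast(sensors, "new EEG/GSR/HR samples available")
```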
The general architecture of a holon was proposed by Christensen in 1994 and consists of two parts: the information processing part and, optionally, the physical processing part (see Fig. 2). The physical processing component is divided into the hardware part and the controllers of the hardware part. The information processing part contains three modules: the interholon interface, the decision making module (the holon's kernel) and the human interface.
3.2 Implemented Holons in the Current Prototype
In order to validate the holonic-based architecture and the techniques proposed in the system, we chose the most common type of phobia – acrophobia – and designed a prototype to test the applicability of our approach. The holons involved in the current prototype are: the Sensor devices holon, the Patients data recorder holon and the Diagnose & Treatment & Monitoring holon.
The Sensor devices holon includes the devices employed for recording biophysical data. The EEG recording device we used was the Acticap Xpress Bundle [19] with 16 dry electrodes, with the ground and reference electrodes attached to the ears. The electrodes were placed in the following positions, according to the 10/20 system: FP1, FP2, FC5, FC1, FC2, FC6, T7, C3, C4, T8, P3, P1, P2, P4, O1, O2. Electrodermal activity and heart rate were recorded using the Shimmer Multisensory Device [20], particularly the GSR unit. The data were filtered and the outliers (values too low or too high) were removed from the analysis. Before the start of the experiment, we waited for the device to connect and calibrate in order to make sure that it saved correct, uncontaminated and unbiased values. The GSR unit records skin conductance, skin resistance and PPG, and transforms PPG into HR. In our analysis, we used skin conductance, measured in microsiemens, and HR, measured in bpm.
The Patients data recorder holon is responsible for data synchronization, recording and storing. All the subjects' activity was synchronized in real time using timestamps via the Lab Streaming Layer (LSL) protocol [21]. A multi-threaded C# application operating on both the EEG and game data (events in the game and the players' self-reported level of fear – Subjective Unit of Distress (SUD) – at various moments in time) saved the recordings in log files on the computer. The subjects reported their perceived level of fear on an 11-choices scale, where 0 stands for complete relaxation and 10 for a high level of fear. Since there are 11 possible fear levels (0–10), we call this the 11-choices scale.
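The logger itself is a multi-threaded C# application; purely for illustration, the following Python sketch shows how in-game events and SUD reports could be pushed as timestamped LSL markers using the pylsl bindings. The stream name and marker format are our assumptions, not those of the implemented logger:

```python
from pylsl import StreamInfo, StreamOutlet, local_clock

# One irregular-rate string stream for game events and SUD reports.
info = StreamInfo(name="PhoVRET_Markers", type="Markers",
                  channel_count=1, nominal_srate=0,
                  channel_format="string", source_id="phovret_game")
outlet = StreamOutlet(info)

def log_event(label: str) -> None:
    """Push a marker with the current LSL timestamp, e.g. a coin pickup or a SUD rating."""
    outlet.push_sample([label], local_clock())

log_event("coin_collected:gold:floor4")
log_event("SUD=7")
```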
We recorded the alpha (8–12 Hz), beta (13–30 Hz) and theta (3–8 Hz) log-normalized powers for all channels (the signals were averaged by applying the typical log(1 + X²) formula), as well as the ratio of the theta to the beta powers (slow waves/fast waves), segmented into 1-second-long epochs. The Shimmer Capture Android application stored the GSR and HR information on a mobile phone connected to the Shimmer unit attached to the patient's hand.
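A minimal sketch of this per-epoch feature extraction is given below, assuming raw EEG in a NumPy array of shape channels × samples at an assumed 250 Hz sampling rate; the band edges follow the text, while the filter design and the exact averaging are our assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate, Hz
BANDS = {"theta": (3, 8), "alpha": (8, 12), "beta": (13, 30)}

def band_log_power(epoch: np.ndarray, low: float, high: float, fs: int = FS) -> np.ndarray:
    """Band-pass one 1-s epoch (channels x samples) and return the
    log-normalized power log(1 + x^2) averaged over samples, per channel."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epoch, axis=1)
    return np.mean(np.log1p(filtered ** 2), axis=1)

# epoch: 16 channels x 250 samples (one second of EEG)
epoch = np.random.randn(16, FS)
powers = {name: band_log_power(epoch, lo, hi) for name, (lo, hi) in BANDS.items()}
theta_beta_ratio = powers["theta"] / powers["beta"]   # slow waves / fast waves
```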
The Diagnose & Treatment & Monitoring holon deals with fear level estimation based on the physiological data recorded (the Diagnose part) and treatment using the VR-based game (the Treatment part). The Monitoring subsection of this holon refers to post-treatment evaluation.
The game was developed using the Unity engine and the C# programming language. To ensure a full VR experience, it was integrated with the HTC Vive head-mounted display. In the game, the user had to collect coins of various colors (bronze, silver and gold), situated on the ground floor and on the balconies of a building at the first, fourth and sixth floors, as well as on the rooftop. The coins were positioned according to their colors, so that the gold ones were the closest to the balcony's railing, somewhat forcing the player to bend over it and catch a glance of the view. Each time the user collected a coin, a virtual panel appeared in front of him, asking for the perceived level of fear (SUD). The virtual panel's choices were arranged on 4 rows, corresponding to the answers: 0 for complete relaxation, 1–3 for low fear, 4–7 for medium stress level and 8–10 for high anxiety. Each patient played the game three times while biophysical data were recorded. The scenes were predefined and presented in the same order across trials. Each subject had to collect 3 × 5 = 15 coins across all 5 floors. In order to finish a level and ascend to the next one, he had to collect all the coins from that level.
The game input was provided by virtual teleportation using the touchpad’s center button on the HTC Vive controllers, as we wanted to minimize user movement and prevent noise contamination in the biophysical signals. The player collected the coins by performing a small bending movement towards the ground and by pressing the hair trigger of the controller while raising the coin. The answers to the perceived in-game stress level were provided by pointing the virtual laser towards the panel and pressing the center button of the controller.
In order to automatically adapt the levels of the game based on the player’s physiological data, in the Diagnose part of the Diagnose & Treatment & Monitoring holon, we trained two neural networks (a shallow and a deep one), having as inputs the patient’s biophysical signals and as output, the perceived level of fear (SUD) on different scales.
Both networks had 71 neurons on the input layer, corresponding to the EEG log-normalized powers in the alpha, beta and theta ranges for each of the 16 channels, the ratio of the theta to the beta powers for each of the 16 electrode positions, the differences between the right and left power activations in the prefrontal (PFaD), frontal (FaD) and fronto-central (FCaD) lobes for the alpha range, the beta (bDB) and theta/beta (tbDB) differences from the baseline, and the GSR and HR values. The data were pre-processed and noise was removed prior to analysis. The output layer had a single neuron, corresponding to the perceived level of fear on a 2-, 4- or 11-choices scale.
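To make the 71-dimensional input explicit, a hypothetical assembly of the feature vector could look as follows; the helper name, argument names and ordering are ours, not taken from the implementation:

```python
import numpy as np

def build_features(powers: dict, theta_beta: np.ndarray,
                   alpha_asym: np.ndarray, b_db: float, tb_db: float,
                   gsr: float, hr: float) -> np.ndarray:
    """Concatenate the 71 inputs described in the text:
    16x3 band powers + 16 theta/beta ratios + 3 alpha asymmetries (PFaD, FaD, FCaD)
    + beta and theta/beta deviations from baseline + GSR + HR."""
    x = np.concatenate([
        powers["alpha"], powers["beta"], powers["theta"],  # 48 values
        theta_beta,                                        # 16 values
        alpha_asym,                                        # 3 values
        [b_db, tb_db, gsr, hr],                            # 4 values
    ])
    assert x.shape == (71,)
    return x
```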
The ratings from the 11-choices scale, which range from 0 to 10, have been grouped into 4 clusters in order to create the 4-choices scale (Table 1):
- 0 (relaxation) – rating 0 in the 11-choices scale
- 1 (low fear) – ratings 1–3 in the 11-choices scale
- 2 (medium fear) – ratings 4–7 in the 11-choices scale
- 3 (high fear) – ratings 8–10 in the 11-choices scale
Similarly, the ratings from the 4-choices scale, which range from 0 to 3, have been grouped into 2 clusters in order to create the 2-choices scale (Table 1):
- 0 (relaxation) – ratings 0–1 in the 4-choices scale
- 1 (fear) – ratings 2–3 in the 4-choices scale
The purpose of this grouping is to improve categorization and classification in the neural networks.
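The grouping of the ratings into the coarser scales, as listed above, can be expressed as a simple mapping; the function names below are ours and shown only for illustration:

```python
def to_4_choices(sud: int) -> int:
    """Map an 11-choices rating (0-10) to the 4-choices scale."""
    if sud == 0:
        return 0          # relaxation
    if sud <= 3:
        return 1          # low fear
    if sud <= 7:
        return 2          # medium fear
    return 3              # high fear

def to_2_choices(sud: int) -> int:
    """Map an 11-choices rating to the binary scale (0 = relaxation, 1 = fear)."""
    return 0 if to_4_choices(sud) <= 1 else 1
```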
The networks were designed using the scikit-learn library [22] in a Python script and the capabilities of the TensorFlow framework [23]. The shallow network has one hidden layer with 150 neurons, while the deep one has 3 hidden layers with 150 neurons each. For the hidden layers, we used the "relu" activation function, while for the output layer we used the "sigmoid" function for the 2-choices scale and the "softmax" activation function for the 4-choices and 11-choices scales.
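A hedged Keras (TensorFlow) sketch approximating the described topologies is given below. Note that it is an approximation of the text, not the authors' exact script: for the softmax case, standard practice uses one output unit per class rather than a single output neuron, and the optimizer and loss choices are our assumptions:

```python
import tensorflow as tf

def build_network(n_hidden_layers: int, n_classes: int) -> tf.keras.Model:
    """Shallow (1 hidden layer) or deep (3 hidden layers) MLP with
    150 ReLU units per hidden layer, as described in the text."""
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(71,)))
    for _ in range(n_hidden_layers):
        model.add(tf.keras.layers.Dense(150, activation="relu"))
    if n_classes == 2:
        model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    else:
        # Softmax output: one unit per class (4 or 11 choices).
        model.add(tf.keras.layers.Dense(n_classes, activation="softmax"))
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
    return model

shallow_binary = build_network(n_hidden_layers=1, n_classes=2)
deep_11_choices = build_network(n_hidden_layers=3, n_classes=11)
```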
We had a total of 63 trials for each user. Training and validation were done using a 10-fold cross-validation procedure: the data were shuffled randomly and split into 10 groups, and then each group was held out in turn as the test group, while the remaining 9 groups were used for training and fitting the model. Both the training and test sets were also scaled in order to standardize their features. We trained and tested the data in both a user-dependent and a user-independent modality. For the user-dependent modality, the model was trained and cross-validated on each user's data. In the user-independent modality, a model was computed using the data from 3 subjects and tested on the data of the 4th subject. This modality is less user-specific and allows evaluating the performance of the model in a more general case.
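The user-dependent evaluation described above can be sketched with scikit-learn as follows; for brevity, the sketch uses scikit-learn's MLPClassifier as a stand-in for the networks above, and the placeholder data and parameter values are our assumptions:

```python
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# X: 63 trials x 71 features for one user; y: fear labels on the chosen scale (placeholder data).
X, y = np.random.randn(63, 71), np.random.randint(0, 2, size=63)

# Scaling is fit inside each fold so the held-out fold stays unseen.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(150,), activation="relu",
                                    max_iter=2000, random_state=0))
scores = cross_val_score(model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
print(f"user-dependent 10-fold accuracy: {scores.mean():.3f}")
```

For the user-independent modality, the same pipeline would instead be fit on the pooled trials of 3 subjects and scored on the 4th subject's trials (leave-one-subject-out).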
4 The Experiment
We performed an experiment in which 4 adult patients (1 male, 3 females, aged 21–49, mean age 30.75) were gradually exposed to different heights in both the real world and a virtual environment, while biophysical data (EEG, GSR and HR) were recorded, together with the Subjective Unit of Distress (SUD) value, representing the current level of fear the patient perceived, on a scale from 0 to 10. All subjects gave their consent for participating in the experiment and personally signed a consent form. The experiment was approved by the ethics committee of the UEFISCDI project 1/2018, the UPB CRC Research Grant 2017 and the University POLITEHNICA of Bucharest, Faculty of Automatic Control and Computers.
The participants also filled in a demographic and a Visual Height Intolerance questionnaire [24]. The questionnaires can be found on the project’s website. According to the results of the latter questionnaire, they have been divided into 3 groups: high level of acrophobia – 1 user (User 1), medium level – 2 users (User 2 and User 3) and low level – 1 user (User 4).
The experiment consists of four stages: baseline, pre-treatment, treatment and post-treatment, performed over a period of 6 consecutive days. In the baseline phase of the experiment, we recorded physiological data in a resting position at the ground floor. In the pre-treatment stage (Diagnose), each patient was exposed in vivo to different heights (the first, fourth and sixth floors of a building), at various distances from the railing (4 m, 2 m and 0 m), for approximately 20 s in each position. The physiological and EEG data were recorded during these 20 s of exposure. The order of the in-vivo exposure was fixed. At this step, each subject performed 3 × 3 = 9 trials. After each trial, the subjects reported the perceived level of fear – the SUD. In the treatment stage, the users were asked to play the Virtual Reality game 3 times, on 3 consecutive days, totaling 3 × 5 × 3 = 45 trials. In the VR game, the physiological and EEG data were recorded throughout each entire trial, i.e. from the moment the user moves towards a coin to the moment he collects it. During this time, he can look around, walk at his own pace and interact with the objects in the environment.
In the post-treatment phase of the experiment (Monitoring), we repeated the in-vivo exposure to the first, fourth and sixth floors, at various distances from the railing (4 m, 2 m and 0 m), for approximately 20 s in each position (9 trials). As in the pre-treatment phase, the physiological and EEG data were recorded during these 20 s of exposure. The reported fear levels were recorded, together with the corresponding biophysical data. Thus, for each subject, we obtained a total of 9 + 45 + 9 = 63 trials.
5 Results and Validation of the Experiment
As fear is subjective and its perception varies from one patient to another, fear classification is itself highly individualized; therefore, we present in Table 2 the classification accuracy for each user, for both network types, in the user-dependent modality.
Table 3 presents the classification accuracy for each user, for both network types, in the user-independent modality.
Fear classification accuracy is comparable for both network types, in both the user-dependent and the user-independent modalities. It is higher for the 2-choices scale, with values of over 65% (on average 72.86% (Shallow NN) and 76.51% (Deep NN) for the user-dependent modality, and 75.30% (Shallow NN) and 78.30% (Deep NN) for the user-independent modality), above the "by-chance" threshold of 50%. For the 4-choices scale, the results are higher than the chance threshold of 25% (on average 38.39% (both Shallow NN and Deep NN) for the user-dependent modality, and 38.21% (Shallow NN) and 38.81% (Deep NN) for the user-independent modality). Similarly, for the 11-choices scale, we recorded an average of 25.3% (both Shallow NN and Deep NN) for the user-dependent modality and 25.46% (Shallow NN) and 26.56% (Deep NN) for the user-independent modality. Table 4 presents the classification report for User 3, the user who obtained the lowest classification accuracy for the 2-choices scale in the user-dependent modality. This user has 42 ratings of 0 and 21 ratings of 1. Table 5 presents the confusion matrix.
Table 6 presents the classification report for User 3, for the 4-choices scale, in the user-dependent modality. This user has 18 ratings of 0, 23 ratings of 1 and 22 ratings of 2. Table 7 presents the confusion matrix.
Table 8 presents the distribution of responses for the 2-choices scale. User 1 suffers from a more severe form of acrophobia, User 2 and User 3 from a moderate form of acrophobia, while User 4 has low acrophobia. Table 9 and 10 present the distribution of responses for the 4-choices scale and for the 11-choices scale respectively.
Thus, we conclude that the total of 63 biophysical recordings obtained during the experiment for each user was enough for a good classification of the fear emotion in the binary range (0 – lack of fear, 1 – fear). The classification accuracy in the user-dependent modality is similar to that in the user-independent modality. In order to obtain a stronger computational model, we need to train on a larger dataset. In addition, the approach of training on the data collected from several users provides good generalization results. These networks can be successfully used for automatically adapting the game levels (as part of the Diagnose & Treatment & Monitoring holon) during real-time gameplay in our future research.
Another objective that we pursued in our research was to observe how the virtual training sessions affected in-vivo exposure to heights in the post-treatment phase of the experiment. We considered as fear indicators a negative value of the PFaD parameter – corresponding to alpha lateralization (greater activation of the right prefrontal lobe, reflected in a lower alpha power there) – a large value of bDB (an increased beta power compared to the baseline tests), and high GSR and HR values.
For each user, we ran an ANOVA (ANalysis Of Variance) test to identify significant differences between the post-treatment and the pre-treatment phases. We considered a high HR and an increase in GSR as indicators of fear. When someone is frightened, the heart rate increases in order to pump more blood into the vessels and to generate the "fight or flight" response of the sympathetic nervous system. Similarly, the sweat glands produce more sweat in stressful conditions, a fact that has been observed in numerous studies [25].
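Such a pre- vs. post-treatment comparison can, for example, be reproduced per parameter with a one-way ANOVA in SciPy; the values below are placeholders, not measurements from the experiment:

```python
from scipy.stats import f_oneway

# GSR samples (microsiemens) for one user during the 9 pre- and 9 post-treatment trials
# (placeholder values shown for illustration only).
gsr_pre = [5.1, 6.3, 7.0, 6.8, 7.4, 8.1, 8.9, 9.2, 9.5]
gsr_post = [4.8, 5.2, 5.9, 5.7, 6.1, 6.6, 7.0, 7.3, 7.2]

stat, p_value = f_oneway(gsr_pre, gsr_post)
if p_value < 0.05:
    print(f"significant change in GSR (F = {stat:.2f}, p = {p_value:.3f})")
else:
    print(f"no significant change in GSR (F = {stat:.2f}, p = {p_value:.3f})")
```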
There are improvements for almost all the parameters, although not all of them are statistically significant at alpha = 0.05. Table 11 presents the average values for the PFaD, bDB, GSR, HR and SUD parameters in the pre- and post-treatment phases. The statistically significant improvements are specified in the corresponding cells.
For the bDB parameter we recorded the highest improvement, statistically significant for all the users in the post-treatment phase. GSR came second, with a significant reduction of its mean value for 3 users, followed by PFaD and HR. The level of fear also decreased after the VR-based treatment sessions – by 25% for User 1 (who suffered from the highest level of acrophobia according to the initial questionnaire) and for User 2 (medium level of acrophobia). For Users 3 and 4 there is only a small decrease (2% and 4.5%), but their initial responses were already low (from 1.66 to 1.62 and from 1.11 to 1.06, respectively). More information is available on the project's website.
6 Conclusions and Future Directions of Research
The purpose of the current research was to investigate the feasibility of a holonic-based architecture for a system aimed at diagnosing, treating and monitoring multiple phobias. We developed parts of the proposed holons – the Sensor devices, Patients data recorder and Diagnose & Treatment & Monitoring holons – and performed a series of tests with 4 acrophobic users in both the real-world and the virtual environment. The physiological data we collected, together with the Subjective Unit of Distress values, were fed as training data to two neural networks – a shallow and a deep one – in order to generate a model that estimates the perceived level of fear based on the individual's biophysical signals. The classification algorithms offered good results, especially for the 2-choices scale (a binary scale where 0 is associated with relaxation and 1 with fear). These fear level estimation models will be used in our future research for a real-time automatic adaptation of the game levels based on the patient's physiological data. Thus, whenever the user is relaxed (level of fear equal to 0), the game level increases (the player is taken to a higher floor), and when he finds himself stressed or anxious (level of fear equal to 1), the game level decreases (for instance, he is taken to a lower floor). In order to increase the classification accuracy in the case of the 4-choices and 11-choices scales, we plan to perform further tests, collect more data and increase the number of users participating in the experiment, so that we have a more robust and stable training database. In addition, we will try other network topologies, vary the number of neurons and the activation functions, and benefit from the powerful resources of machine and deep learning techniques.
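The planned binary adaptation rule can be summarized by a small, hypothetical control step; the function name, the floor bounds and the one-floor step size are our simplifying assumptions:

```python
def next_floor(current_floor: int, predicted_fear: int,
               min_floor: int = 0, max_floor: int = 6) -> int:
    """Binary-scale adaptation: relaxed players (0) are taken one floor up,
    anxious players (1) one floor down, within the building's limits."""
    if predicted_fear == 0:
        return min(current_floor + 1, max_floor)
    return max(current_floor - 1, min_floor)

# Example: a relaxed player on the 4th floor is moved to a higher floor.
print(next_floor(current_floor=4, predicted_fear=0))  # -> 5
```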
Another research direction we pursued was to study the effect of VR-based training on reducing the level of fear. Even though the patients played the game only 3 times, they still showed improvements in the post-treatment phase, and for half of the parameters the results showed high statistical significance. In the future, we will vary the game levels, add more scenes, increase the interaction complexity and automatically increase or decrease the game's exposure intensity based on the recorded biophysical signals.
To sum up, we conclude that the holonic-based architecture is feasible for designing a phobia diagnosis and treatment system. The holons we designed for the current prototype worked, synchronized and communicated appropriately, so we can extend their use to any type of phobia treatment, not just acrophobia. Once implemented, they can be employed for diagnosing, treating and monitoring agoraphobia, social phobia, claustrophobia or multiple phobias. We have successfully addressed the two research directions pursued – designing a machine learning model for fear level estimation based on biophysical data and improving the patients' acrophobic condition through immersive and interactive VR gameplay.
References
American Psychiatric Association: Diagnostic and Statistical Manual of Mental Disorders, 5th edn. American Psychiatric Publishing, Arlington (2013)
Nation Wide Phobias Statistics. https://blog.nationwide.com/common-phobias-statistics/. Accessed 12 Nov 2019
Phobias Statistics. http://www.fearof.net/phobia-statistics-and-surprising-facts-about-our-biggest-fears/. Accessed 12 Nov 2019
Koestler, A.: The Ghost in The Machine. Arkana, New York (1967)
Garcia-Herreros, E., Christensen, J., Prado, J.M., Tamura, S.: IMS - holonic manufacturing systems: System components of autonomous modules and their distributed control. Technical report, HMS Consortium (1994)
Christensen, J.H.: Holonic manufacturing systems: initial architecture and standard directions. In: Proceedings of the First European Conference on Holonic Manufacturing Systems, Hannover (1994)
Benaskeur, A., Irandoust, H.: Defence R&D Canada – Valcartier Technical report (2008)
Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture for holonic manufacturing systems: PROSA. Comput. Ind. 37(3), 255–274 (1998)
Valckenaers, P.: ARTI reference architecture – PROSA revisited. In: Borangiu, T., Trentesaux, D., Thomas, A., Cavalieri, S. (eds.) SOHOMA 2018. SCI, vol. 803, pp. 1–19. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-03003-2_1
Barbosa, J.: Self-organized and evolvable holonic architecture for manufacturing control. Chemical and Process Engineering. Université de Valenciennes et du Hainaut-Cambresis, (2015). https://tel.archives-ouvertes.fr/tel-01137643v2/document
Botti, V., Giret, A.: ANEMONA: A Multi-Agent Methodology for Holonic Manufacturing Systems, 1st edn. Springer, London (2008). https://doi.org/10.1007/978-1-84800-310-1_2
Giret, A., Botti, V.: Holons and agents. J. Intell. Manuf. 15, 645–659 (2004). https://doi.org/10.1023/B:JIMS.0000037714.56201.a3
Leitão, P., Restivo, F.: ADACOR: a holonic architecture for agile and adaptive manufacturing control. Comput. Ind. 57(2), 121–130 (2006)
Ulieru, M., Geras, A.: Emergent holarchies for e-health applications: a case in glaucoma diagnosis. In: Proceedings of IECON 2002 – 28th Annual Conference of the IEEE Industrial Electronics Society, Seville, Spain, 5–8 November, pp. 2957–2962 (2002)
Unland, R.: A holonic multi-agent system for robust, flexible, and reliable medical diagnosis. In: Meersman, R., Tari, Z. (eds.) OTM 2003. LNCS, vol. 2889, pp. 1017–1030. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-39962-9_97
Ulieru, M.: Internet-enabled soft computing holarchies for e-health applications - Soft computing enhancing the Internet and the Internet enhancing soft computing-. In: Nikravesh, M., Azvine, B., Yager, R., Zadeh, L.A. (eds.) Enhancing the Power of the Internet. Studies in Fuzziness and Soft Computing, vol. 139, pp. 131–165. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-45218-8_6
Akbari, Z., Unland, R.: A holonic multi-agent system approach to differential diagnosis. In: Berndt, J.O., Petta, P., Unland, R. (eds.) MATES 2017. LNCS (LNAI), vol. 10413, pp. 272–290. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-64798-2_17
Moise, G., Moise, P.G., Moise, P.S.: Towards holons-based architecture for medical systems. In: Proceedings of 2018 ACM/IEEE International Workshop on Software Engineering in Healthcare Systems SEHS 2018, Gothenburg, Sweden, pp. 26–30 (2018)
Acticap Xpress Bundle. https://www.brainproducts.com/productdetails.php?id=66. Accessed 12 Nov 2019
Shimmer Sensing. http://www.shimmersensing.com/. Accessed 12 Nov 2019
Lab Stream layer. https://github.com/sccn/labstreaminglayer. Accessed 12 Nov 2019
Scikit Learn Python Library. http://scikit-learn.org. Accessed 12 Nov 2019
Tensor Flow Library. https://www.tensorflow.org/. Accessed 12 Nov 2019
Huppert, D., Grill, E., Brandt, T.: A new questionnaire for estimating the severity of visual height intolerance and acrophobia by a metric interval scale. Front. Neurol. 8 (2017). https://doi.org/10.3389/fneur.2017.00211
Kometer, H., Luedtke, S., Stanuch, K., Walczuk, S., Wettstein, J.: The effects virtual reality has on physiological responses as compared to two-dimensional video. University of Wisconsin School of Medicine and Public Health, Department of Physiology (2010)
Acknowledgements
The work has been funded by the Operational Programme Human Capital of the Ministry of European Funds through the Financial Agreement 51675/09.07.2019, SMIS code 125125, UEFISCDI project 1/2018 and UPB CRC Research Grant 2017.