
EEG based smart emotion recognition using meta heuristic optimization and hybrid deep learning techniques
M Karthiga1, E Suganya2, S Sountharrajan3, Balamurugan Balusamy4 &
Shitharth Selvarajan5,6
In the domain of passive brain-computer interface applications, the identification of emotions is both
essential and formidable. Significant research has recently been undertaken on emotion identification
with electroencephalogram (EEG) data. The aim of this project is to develop a system that can
analyse an individual’s EEG and differentiate among positive, neutral, and negative emotional states.
The suggested methodology uses Independent Component Analysis (ICA) to remove Electromyogram (EMG) and Electrooculogram (EOG) artefacts from EEG channel recordings. Filtering techniques
are employed to improve the quality of EEG data by segmenting it into alpha, beta, gamma, and theta
frequency bands. Feature extraction is performed with a hybrid meta-heuristic optimisation technique,
such as ABC-GWO. The Hybrid Artificial Bee Colony and Grey Wolf Optimiser are employed to extract
optimised features from the selected dataset. Finally, comprehensive evaluations are conducted
utilising DEAP and SEED, two publicly accessible datasets. The CNN model attains an accuracy of
approximately 97% on the SEED dataset and 98% on the DEAP dataset. The hybrid CNN-ABC-GWO
model achieves an accuracy of approximately 99% on both datasets, with ABC-GWO employed for
hyperparameter tuning and classification. The proposed model demonstrates an accuracy of around
99% on the SEED dataset and 100% on the DEAP dataset. The experimental findings are compared against single techniques, widely employed hybrid learning methods, and cutting-edge methods; the proposed method enhances recognition performance.

Keywords Electroencephalogram, Electrooculogram, Artificial Bee Colony, Grey Wolf Optimizer, Hybrid learning methods, Convolutional neural network

Over the last decade, increasing attention has been devoted to identifying human emotions, especially owing to advances in artificial intelligence. Recognizing human emotional states is essential for enabling natural and intelligent interactions between humans and machines. Emotions are an important constituent of human existence and influence every aspect of human life, including interactions, participation, and training1–4. Although there has been extensive research on this subject, modern approaches such as facial movements, gestures, and voice control often fail to meet the precision required for identifying feelings.
EEG signals can be considered a source of accurate information about a subject's state, irrespective of appearance and behavior. However, modern EEG-based emotion detection algorithms face significant challenges, including EMG/EOG contamination, limited feature-extraction power, and poor accuracy. Other modalities such as fMRI5,6, PET, or MEG are tangible and can provide direct and clear responses. Compared with other physiological assessments, EEG is easier to access, which has made it the preferred tool among scientists for emotion recognition7. Furthermore, EEG signals possess certain characteristics

1Department of Computer Science and Engineering, Bannari Amman Institute of Technology, Tamilnadu, India.
2Department of Information Technology, Sri Sivasubramaniya Nadar (SSN) College of Engineering, Chennai,
Tamilnadu, India. 3Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa
Vidyapeetham, Chennai, Tamilnadu, India. 4Shiv Nadar Institution of Eminence (Deemed to be University), 201314
Greater Noida, Uttar Pradesh, India. 5Department of Computer Science, Kebri Dehar University, P.O.Box 250, Kebri
Dehar, Ethiopia. 6Department of Computer Science and Engineering, Chennai Institute of Technology, Chennai,
India. email: ShitharthS@kdu.edu.et


including simplicity and increased susceptibility to external stimuli7. Stored patterns within the cortex of the brain are most easily recorded when a person is least active and has their eyes closed2. The trends are estimated over an intensity range of 0.5 to 100 µV, around 1000 times lower than the intensity range of EEG waves2. Various frequency bands are employed to classify human cerebral waves: delta (0.1–4 Hz)8, theta (4–8 Hz)8, alpha (8–13 Hz)8, beta (13–30 Hz)8, and gamma (30–64 Hz)8.
The field of emotional modeling is primarily based on two basic traits, namely valence and arousal. The concept of valence refers to the range of positive or negative emotions that a human being experiences in response to a stimulus. Arousal, on the other hand, signifies the continuum of alertness or responsiveness to stimuli, which can vary from a passive to an active state, both physiologically and psychologically. In the context of electroencephalography (EEG), these emotional dimensions are mapped through various brain regions, as depicted in Fig. 1. These regions encompass the fronto-polar (Fp), temporal (T), central (C), frontal (F), parietal (P), and occipital areas9. Specifically, electrodes Fp1 and Fp2 are positioned on the left and right frontal lobes, respectively. The montage channels AF3 and AF4 are located centrally between the eyes. Additional electrodes, namely FC5, FC1, FC2, FC6, C3, C4, CP5, CP1, CP2, and CP6, are strategically placed across the scalp9. The O1 and O2 regions are situated above the primary visual cortex, as demonstrated in Fig. 1(a). The valence-arousal dimensional modelling of emotion, widely utilized in numerous research studies, is exemplified in Fig. 1(b).
This EEG-based approach offers an alternative to the traditional, and often more challenging, behavioral assessments conducted by clinicians. To comprehend a patient's emotional state, medical professionals typically require extensive knowledge and expertise. However, with the evolution of brain-computer interface technologies and neuroimaging, the acquisition of brainwave data has been simplified. Current advancements enable the practical10 and emotional11 control of devices, such as mobility aids11, communication tools12, and prosthetic limbs13,14, through wearable EEG headsets10, a feat previously unattainable. This study employs various machine learning techniques to analyze EEG data, aiming to detect and categorize a wide array of human emotions, thereby facilitating the interpretation of emotional states from EEG waves.

Research problem and objectives


In the domain of passive brain-computer interface applications, the identification of emotions is both essential and difficult. Extensive research has been undertaken on emotion identification with electroencephalogram (EEG) data. Nonetheless, there is an ongoing demand for enhanced precision and efficacy in this domain. This research offers multiple innovative contributions to the domain of emotion identification through EEG data.

• Artifact elimination by Independent Component Analysis (ICA) facilitates the removal of artefacts from Electromyogram (EMG) and Electrooculogram (EOG) signals in EEG recordings, hence improving the quality of EEG data.
• Hybrid Meta-Heuristic Optimisation Algorithm (ABC-GWO): The application of a hybrid meta-heuristic optimisation approach for feature extraction, integrating the advantages of the Artificial Bee Colony (ABC) and Grey Wolf Optimiser (GWO) algorithms. This innovative method markedly enhances the precision of emotion recognition.

Fig. 1. (a) Electrodes Position (b) Two-dimensional valence arousal space of emotion.


• The suggested methodology undergoes comprehensive evaluation on two publicly accessible datasets (DEAP and SEED), exhibiting enhanced performance relative to previous methods, with accuracy attaining 100%.

Research gap
Recent research on EEG-based emotion recognition has obtained promising results but commonly suffers from limitations regarding accuracy and robustness. Most approaches fail to remove enough noise and artifacts from EEG signals, so emotions are classified from corrupted data. Furthermore, conventional feature extraction methods often fail to adequately capture the complex patterns in EEG data required for accurate emotion recognition. This study addresses these limitations by including detailed artifact removal and a highly advanced feature extraction method, significantly enhancing the effectiveness and reliability of an EEG-based emotion identification system. These issues are tackled systematically to advance the science of emotion recognition with a more accurate and reliable approach that generalizes to real-life scenarios, such as human-computer interaction and mental health diagnosis with adaptive learning environments.
The subsequent parts of the document are organized as follows: Sect. 2 provides an extensive synthesis of articles relevant to the topic at hand. Section 3 presents the DEAP15 and SEED16 databases and defines the experiment configurations used. Moreover, the paper provides a novel hybrid-learning model specifically for emotion detection and describes its major components such as pre-processing, feature reduction, and validation techniques. It also gives a brief idea of the workings of Convolutional Neural Networks (CNN), Artificial Bee Colony (ABC), and Grey Wolf Optimizer (GWO). Section 4 presents a complete appraisal of the experimental outcomes obtained by deploying different models on the DEAP15 and SEED16 datasets. Lastly, the final section presents the discussion and conclusion of the study17.

Related works
Emotion recognition has recently received increased attention because of its associations with various disciplines including psychology, physiology, learning studies, marketing, and healthcare. Classifying emotions has become common, giving rise to different classification methodologies. These can be broadly categorized into supervised, unsupervised, and reinforcement machine learning models, deep learning models, and ensemble learning models18–22.

Conventional machine learning approaches

Previous approaches for emotion recognition mainly rely on feature extraction and traditional classifiers. For example, Wang et al. (2020)23 compared the results of implementing SVM and CNN algorithms (LeNet and ResNet) on the SEED and MAHNOB-HCI databases. Such studies established that larger models could produce better outcomes, particularly when adopting data-enrichment approaches. However, studies may suffer from low statistical power due to small sample sizes or low participant variety, which complicates generalization of the results18,19.

Deep learning techniques

In the past few years, deep learning has taken emotion recognition to the next level by automating the feature extraction and selection process. Architectures include the Convolutional Neural Network (CNN)20, Deep Belief Network (DBN)24, Graph CNNs, Long Short-Term Memory (LSTM) networks23 and Capsule Networks (CN)26. Pandey et al. (2021) proposed a CNN-based model using scalogram images of EEG, tested on the DEAP dataset with a median precision rate of 54%, with a lower success rate when cross-checked against the SEED dataset27. Yucel Cimtay et al. (2020) also proposed a pretrained CNN architecture and obtained an average recognition rate of 58.10% on the DEAP database through training with the SEED database28. Such research suggests improvement but points to problems of overfitting and relatively poor transfer between different datasets.

Hybrid approaches and multimodal techniques

Hybrid models, incorporating a combination of several algorithms, have recently been shown to achieve higher performance in emotion recognition. One promising algorithm, called Bimodal-LSTM, takes input from both EEG and peripheral physiological signals and was found by Tang et al. (2021) to be capable of addressing emotion recognition in a multimodal paradigm.
Many researchers have addressed specific emotional states. Asma Baghdadi et al. (2020) proposed "DASPS: A Database for Anxious States based on a Psychological Stimulation," using a stacking sparse autoencoder algorithm with Hjorth variables as features, reaching mean precision rates of 83.50% and 74.60% for different levels of stress30,31. Bachmann et al. introduced a feature extraction approach that applied Adaptive Multiscale Entropy for the detection of driving fatigue with SVM classifiers, obtaining a detection rate of 95%31. Alakus et al. have also done research on their own dataset named GAMEEMO by applying spectral entropy with a bidirectional LSTM, with an average precision of 76.91%32. An et al. used deep CNN (DCNN) and Conv-LSTM models for emotion recognition later in 202133.
Rajpoot et al. (2021) carried out a subject-invariant experiment based on an LSTM with Channel Attention autoencoder and a CNN with Attention model on the DEAP, SEED, and CHB-MIT datasets34. Jingcong Li et al. (2021) used a graph neural network with mean precision rates of 86.8% and 75.27% on the SEED and SEED-IV datasets, respectively35. Features such as mean absolute deviation and standard deviation were used with SVM, LDA, ANN, and k-NN classifiers, resulting in a high level of classification accuracy in a subject-dependent methodology by Rahman et al. (2020)36. Dong et al. (2020) utilized DNNs for the recognition of emotions


through EEG waves recorded from the DEAP dataset and showed that DNNs could handle a huge amount of varied training records37.

Recent innovations
Recently, new methods have been proposed involving synchrosqueezing wavelet transform maps in conjunction with deep architectures such as ResNet-1838. Continuous wavelet transform and transfer learning based on CNNs have also attained higher accuracy in emotion recognition39. Hybrid models based on wavelet convolutional neural networks combined with SVM have established robust frameworks for emotion recognition40. An ensemble deep learning approach including effective connectivity maps integrated from the brain produced 96.67% accuracy in emotion recognition, as proposed by Bagherzadeh et al. (2021).

Critical analysis and relevance

Traditional and deep learning methods have strongly shaped the domain, but they are sensitive to flaws in datasets, overfitting, and improper handling of artifacts within the EEG signals. Their limited applicability to broader, real-world scenarios stems from dependence on limited and homogeneous datasets. In a large number of studies, robust artifact removal techniques are not provided, and noisy data may be left behind, compromising the accuracy of emotion recognition systems.

Significance of the proposed methodology

The present study overcomes the shortcomings of previous works by using ICA for artifact removal and a hybrid meta-heuristic optimization algorithm (ABC-GWO) for feature extraction. Thus, the quality of the EEG data improves and the emotion recognition system's accuracy increases up to 100% on the DEAP dataset and 99% on the SEED dataset15,16. In this way the paper fills gaps left by previous work, and the proposed methodology is presented as a new approach.

Methods and materials

Dataset description
The present study utilizes the DEAP15 and SEED16 datasets, which are widely recognized as benchmarks in the field of emotion identification research.

1. SEED Dataset: The SEED dataset had a crucial role in evoking emotional responses from participants using short films as a medium, as mentioned in Table 1. These films, originating from China, were each accompanied by a musical score to enhance the emotional experience. The design of the experiment aimed to simulate a naturalistic setting, thereby provoking significant physiological reactions. A total of fifteen clips, averaging four minutes in length, were utilized per session. The dataset encompasses estimations of happiness, sadness, and neutrality, with each emotional category represented by five distinct film clips. Participants underwent a structured session that included a five-second preview of each clip, followed by a 45-second reflection period and a 15-second intermission for recuperation. The emotional states of the subjects were assessed using self-report questionnaires. The group comprised Chinese pupils who reported normal eyesight and hearing. Each participant engaged in the experimental procedure three times, with a minimum one-week interval between sessions. The EEG data were captured employing a 62-channel electrode montage in accordance with the 10–20 system, sampled at 1000 Hz, filtered between 0.5 and 70 Hz, and subsequently downsampled to 200 Hz for signal processing.
2. DEAP Dataset: The DEAP dataset stands as a pivotal resource for emotion recognition via EEG signals. It comprises recordings from 32 participants who were monitored while viewing music videos (Table 1). Each participant's neural activity was recorded over 40 video sessions, using a 40-channel setup with continuous referencing, initially sampled at 512 Hz and later downsampled to 128 Hz. The study involved exposing the subjects to a sequence of video clips, wherein each clip was marked with four distinct emotional descriptors: arousal, valence, dominance, and like/dislike. The participants' assessments on valence, arousal, dominance, and like/dislike for each stimulus were captured using self-assessment questionnaires. The utilization of the DEAP dataset enabled the delineation of nine distinct emotional states across several dimensions, including joy, tranquility, anxiety, excitement, apathy, and melancholy.

Preprocessing
Filtering
In the field of electroencephalography (EEG), different types of background noise make it necessary to preprocess raw data in order to improve the quality of EEG readings. Delta (0–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), and beta (13–30 Hz) are the four main rhythms that make up EEG; other frequencies can go up to 50 Hz. These rhythms,

| Dataset      | Name of the array | Size of the array | Contents in the array                |
|--------------|-------------------|-------------------|--------------------------------------|
| SEED Dataset | EEG Data          | 18 × 62 × 48,000  | (videos/trials) × channels × samples |
| SEED Dataset | Labels            | 15 × 3            | (videos/trials) × labels             |
| DEAP Dataset | EEG Data          | 40 × 40 × 8064    | (videos/trials) × channels × samples |
| DEAP Dataset | Labels            | 40 × 4            | (videos/trials) × labels             |

Table 1. Database structure.


which can be seen in a clinical setting, reveal much about the most basic EEG patterns. To extract features from the data in the suggested study, a Dual-Tree Complex Wavelet Transform is applied through a Hanning window. Based on the Valence-Arousal two-dimensional approach, these features are then labelled as positive, negative, or neutral. A rating above the median indicates a high arousal/valence class (positive), a rating below the median indicates a low arousal/valence class (negative), and a rating equal to the median is assigned the neutral arousal/valence label. To assess emotion recognition capabilities, a binary categorization task was designed. A minimal labelling sketch follows.
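To make the rule concrete, here is a minimal labelling sketch in Python; the 1-9 rating scale and its median of 5 are assumptions based on the DEAP self-assessment convention, not values stated in this section.

```python
import numpy as np

def label_rating(rating, median=5.0):
    """Map a self-assessment rating to a valence/arousal label (assumed rule)."""
    if rating > median:
        return 1    # high valence/arousal (positive)
    if rating < median:
        return -1   # low valence/arousal (negative)
    return 0        # neutral

labels = np.array([label_rating(r) for r in [7.2, 3.1, 5.0, 8.4]])  # -> [1, -1, 0, 1]
```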
Preprocessing of raw data is essential to improve the quality of EEG signals. Methods such as Independent Component Analysis (ICA), specifically the FastICA method, are commonly employed to semi-automatically eliminate EOG and EMG artifacts from EEG recordings, facilitated by software applications. As an initial step in signal processing, filtering is utilized, followed by a decomposition into sub-bands using the Dual-Tree Complex Wavelet Transform (DT-CWT). Filtering can be mathematically illustrated by a convolution operation. For a given signal z(t) and filter f(t), the filtered signal y(t) can be expressed as in (1).

$$y(t) = \int_{-\infty}^{\infty} z(\tau)\, f(t - \tau)\, d\tau \quad (1)$$
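As an illustration of Eq. (1), the following sketch band-pass filters one EEG channel into the sub-bands named above using SciPy; the Butterworth design, filter order, and toy signal are assumptions, not choices stated in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 64)}

def bandpass(z, low, high, fs, order=4):
    """Zero-phase band-pass; filtfilt realizes the convolution of Eq. (1)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, z)

fs = 200                                  # SEED signals are downsampled to 200 Hz
z = np.random.randn(10 * fs)              # stand-in for a 10-second EEG channel
sub_bands = {name: bandpass(z, lo, hi, fs) for name, (lo, hi) in BANDS.items()}
```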

Dual-Tree Complex Wavelet Transform (DT-CWT)

The Dual-Tree Complex Wavelet Transform (DT-CWT) uses a dual tree of wavelet filters to separate the real and imaginary parts of the complex coefficients. Compared to the Discrete Wavelet Transform (DWT), the DT-CWT does a better job of preventing aliasing and maintaining approximate shift invariance. The DT-CWT is represented as a series of convolution operations with wavelet-based filters. For an input signal z(t), with Ψ_{j,k} as the wavelet basis functions, the DT-CWT coefficients are calculated as in Eq. (2).

$$W_{j,k} = \langle z, \Psi_{j,k} \rangle \quad (2)$$
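For readers who want to reproduce this step, here is a hedged sketch using the third-party dtcwt Python package; the package choice, the number of decomposition levels, and the segment length are assumptions, as the paper does not name its implementation.

```python
import numpy as np
import dtcwt

z = np.random.randn(1024)                 # one EEG segment (even length required)
transform = dtcwt.Transform1d()
pyramid = transform.forward(z, nlevels=4)

# pyramid.highpasses[j] holds the complex coefficients W_{j,k} at level j+1;
# their magnitudes are approximately shift-invariant, unlike plain DWT details.
for j, coeffs in enumerate(pyramid.highpasses):
    print(f"level {j + 1}: {coeffs.shape[0]} complex coefficients")
```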

Independent component analysis (ICA)

Independent Component Analysis (ICA) is utilized to improve the accuracy of EEG data before use. In particular, the FastICA method is used to remove artifacts from electrooculogram (EOG) and electromyogram (EMG) records present in the EEG band. ICA thus removes EMG and EOG artifacts from EEG recordings, improving data quality. The FastICA algorithm parameters used are as follows:

• Maximum number of iterations: 200.
• Tolerance: 1e-4.
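A minimal sketch of this step with scikit-learn's FastICA, using the two stated parameters; the channel count, the toy data, and which components count as EOG/EMG artifacts are placeholders that must be chosen per recording.

```python
import numpy as np
from sklearn.decomposition import FastICA

X = np.random.randn(5000, 14)            # samples x channels (illustrative)
ica = FastICA(n_components=14, max_iter=200, tol=1e-4, random_state=0)
S = ica.fit_transform(X)                 # estimated independent sources

artifact_idx = [0]                       # hypothetical EOG/EMG component indices
S[:, artifact_idx] = 0.0                 # zero out the artifact sources
X_clean = ica.inverse_transform(S)       # reconstruct artifact-reduced EEG
```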

Hybrid ABC-GWO based feature extraction

In this project, features are obtained from EEG data channels using a hybrid method based on the ABC-GWO optimization technique. This method takes the best parts of both the ABC and GWO techniques and uses them together to quickly select or derive beneficial characteristics from EEG information. The hybrid ABC-GWO algorithm uses the combined ingenuity of these optimization approaches to find the most useful features that describe the mental states or actions captured in EEG waves. A hybrid meta-heuristic optimization algorithm (ABC-GWO) is used for feature extraction. This approach combines the strengths of the Artificial Bee Colony (ABC) and Grey Wolf Optimizer (GWO) algorithms, significantly improving the accuracy of emotion detection. The detailed workflow of the hybrid ABC-GWO algorithm is represented in Fig. 2.

ABC optimization technique

The ABC optimization technique is used to extract pertinent characteristics that successfully differentiate between different brain processes or stages in EEG analysis. The procedure can be broken down into several important steps:

• Initialization Stage: The process starts by generating a set of random solutions, each represented by Xij and containing a possible set of attributes drawn from EEG waves. These solutions are based on the problem's dimensionality and take into consideration the lower (LBj) and upper (UBj) limits for every parameter. Also, the Abandonment Counter (ACi) for every solution is set to zero to track how well it performs.

$$X_{ij} = LB_j + \lambda\,(UB_j - LB_j) \quad (3)$$

Where:

• Xij denotes the jth parameter of the ith employed bee's solution
• LBj denotes the lower bound of the jth parameter
• UBj denotes the upper bound of the jth parameter
• λ is a random number within the [0, 1] range

• Phase of Employed Bees: During this stage, modifications are made to the existing solutions of employed bees to generate candidate solutions. Each candidate solution, denoted by Vij, is formulated by updating a single parameter of the solution Xij of the employed bee.

$$V_{ij} = X_{ij} + \phi\,(X_{kj} - X_{ij}) \quad (4)$$


Fig. 2. Hybrid ABC-GWO algorithm workflow.

Where:

• Vij denotes the jth dimension of the ith candidate solution
• Xij denotes the jth dimension of the ith employed bee's solution
• Xkj denotes the jth dimension of the kth employed bee's solution
• φ is a random number within the [−1, +1] range


• Evaluation of Fitness: The fitness of each potential solution is assessed based on the specific objective function of the problem. This function provides a metric for evaluating the effectiveness of the candidate solution in extracting pertinent data from the EEG signals.

$$Fitness(V_{ij}) = \frac{1}{1 + f(V_{ij})} \quad (5)$$

Where:

• Fitness(Vij) denotes the fitness value of the ith candidate solution
• f(Vij) denotes the objective function value of the ith candidate solution
• Phase of Employed Bees: At this stage, candidate solutions are generated by modifying the employed bees' current solutions. Each candidate is created by changing just one variable of an employed bee's solution, using a randomly chosen neighbouring solution from the group of employed bees. The problem's particular objective function is used to judge the usefulness of each candidate.
• Phase of Onlooker Bees: In order to optimize their responses, onlooker bees select employed bees according to fitness rankings. The screening is conducted through a roulette-wheel process, wherein the probability of selecting an employed bee is directly correlated with its level of fitness. The solution proposed by the selected employed bee is subsequently refined by the onlooker bee. When the new solution outperforms the employed bee's solution, it replaces it; if not, the Abandonment Counter of the employed bee is increased.
• Phase of Scout Bees: When the abandonment counters of employed bees exceed a specified threshold, they undergo a transformation into scout bees. Scout bees explore the problem by generating arbitrary solutions across the dimensions of the challenge. This exploration prevents population stagnation among employed bees, hence promoting variation in the pursuit of ideal characteristics. A compact code sketch of this loop is given after the discussion below.

Within the realm of EEG signal analysis, every solution represents a possible collection of characteristics derived from EEG domains. These qualities may include spectral power, coherence, entropy measurements, along with other relevant indicators of neural activity. Every solution's efficiency is evaluated by its ability to accurately capture distinguishing characteristics in EEG data, such as distinguishing among various states of thought or identifying aberrant brain activities. The ABC technique's repeated optimization procedure, guided by fitness assessment, enables the detection of informative characteristics from EEG data. The ABC method, as suggested, is employed for obtaining elements from EEG signals, aiming to uncover complicated patterns and intrinsic characteristics of brain function. This optimization technique enables the collection of traits including: the spread of Power Spectral Density (PSD) across frequency bands (delta, theta, alpha, beta, gamma); absolute or relative strength within particular frequency bands; temporal traits such as mean, variance, skewness, kurtosis, and waveform shape; spectral parameters involving frequency entropy, spectral edge frequency, and spectral centroid; time-frequency representations using wavelets and short-time Fourier transforms; event-related potentials (ERPs) such as P300 or N400 components; functional connectivity statistics such as coherence, phase synchronization, and correlations; complexity measures such as approximate entropy, fractal dimension, and Lempel-Ziv complexity; and spatial properties. This comprehensive examination enables a more profound comprehension of brain functionality, cognitive mechanisms, and neurological conditions, consequently augmenting the comprehensibility and diagnostic efficacy of EEG data.
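The compact sketch below implements the ABC loop of Eqs. (3)-(5) for a generic objective; the population size, limit, and iteration count mirror the parameters listed later, while the toy objective f is a placeholder for the paper's EEG feature-scoring function.

```python
import numpy as np

def abc_optimize(f, lb, ub, n_bees=50, limit=100, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = lb.size
    X = lb + rng.random((n_bees, dim)) * (ub - lb)            # Eq. (3)
    fit = 1.0 / (1.0 + np.apply_along_axis(f, 1, X))          # Eq. (5)
    trials = np.zeros(n_bees)

    def try_update(i):
        k, j = rng.integers(n_bees), rng.integers(dim)
        V = X[i].copy()
        V[j] = X[i, j] + rng.uniform(-1, 1) * (X[k, j] - X[i, j])  # Eq. (4)
        v_fit = 1.0 / (1.0 + f(V))
        if v_fit > fit[i]:
            X[i], fit[i], trials[i] = V, v_fit, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_bees):                  # employed-bee phase
            try_update(i)
        probs = fit / fit.sum()                  # onlooker phase (roulette wheel)
        for _ in range(n_bees):
            try_update(rng.choice(n_bees, p=probs))
        for i in np.where(trials > limit)[0]:    # scout phase: reinitialize
            X[i] = lb + rng.random(dim) * (ub - lb)
            fit[i], trials[i] = 1.0 / (1.0 + f(X[i])), 0
    return X[np.argmax(fit)]

best = abc_optimize(lambda v: np.sum(v ** 2), np.full(5, -1.0), np.full(5, 1.0))
```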

GWO optimization technique

The GWO algorithm, by analogy with the social hierarchy and hunting techniques of grey wolves, is recommended for the feature extraction of EEG signals. The GWO mimics the wolves' hunting behaviour as the key foragers negotiate the intricate terrain of the EEG data. Initially, a set of potential solutions, symbolized as wolf packs, is randomly created. Within this set, four unique groups are recognized: alpha (α), beta (β), delta (δ), and omega (ω). As the iterations progress, the α, β, and δ wolves embody the optimal solutions, steering the optimization process, while the ω wolves surround them to probe for alternate solutions. The mathematical equations dictating the encirclement process are represented below, modifying the positions of ω wolves in relation to α, β, and δ.

$$\vec{E} = |\vec{C} \cdot \vec{Y}_p(t) - \vec{Y}(t)| \quad (6)$$

$$\vec{Y}(t + 1) = \vec{Y}_p(t) - \vec{A} \cdot \vec{E} \quad (7)$$

In these equations, t denotes the current iteration, $\vec{C} = 2\vec{k}_2$, $\vec{A} = 2\vec{b} \cdot \vec{k}_1 - \vec{b}$, $\vec{Y}_p$ denotes the prey's position vector, $\vec{Y}$ denotes a grey wolf's position vector, $\vec{b}$ decreases gradually from 2 to 0, and $k_1$ and $k_2$ are random numbers over the [0, 1] range.
In the mathematical representation of the hunting behavior of grey wolves, the GWO algorithm consistently assumes α, β, and δ to possess superior knowledge of the prey's location (the optimum). Consequently, the positions of the top three solutions (α, β, and δ) acquired thus far are preserved. Furthermore, the remaining wolves (ω) are required to adjust their positions in relation to α, β, and δ. The subsequent equations (8 to 10) delineate the mathematical framework for the repositioning of the wolves:

$$\vec{E}_\alpha = |\vec{C}_1 \cdot \vec{Y}_\alpha - \vec{Y}|, \quad \vec{E}_\beta = |\vec{C}_2 \cdot \vec{Y}_\beta - \vec{Y}|, \quad \vec{E}_\delta = |\vec{C}_3 \cdot \vec{Y}_\delta - \vec{Y}| \quad (8)$$

$$\vec{Y}_1 = \vec{Y}_\alpha - \vec{B}_1 \cdot \vec{E}_\alpha, \quad \vec{Y}_2 = \vec{Y}_\beta - \vec{B}_2 \cdot \vec{E}_\beta, \quad \vec{Y}_3 = \vec{Y}_\delta - \vec{B}_3 \cdot \vec{E}_\delta \quad (9)$$

$$\vec{Y}(t + 1) = \frac{\vec{Y}_1 + \vec{Y}_2 + \vec{Y}_3}{3} \quad (10)$$

In these mathematical formulations, the current solution's position is represented by $\vec{Y}$, the iteration count is symbolized by t, and the positions of alpha, beta, and delta are denoted by $\vec{Y}_\alpha$, $\vec{Y}_\beta$, and $\vec{Y}_\delta$ respectively, while $\vec{C}_1$, $\vec{C}_2$, $\vec{C}_3$ and $\vec{B}_1$, $\vec{B}_2$, $\vec{B}_3$ are all vectors of random values.
These modifications maintain a balance between the exploration and exploitation of the solution space, aided by the vectors $\vec{B}$ and $\vec{C}$, which manage the exploration-exploitation equilibrium. Specifically, $\vec{B}$ progressively diminishes over iterations, delimiting the exploration and exploitation stages, while $\vec{C}$ determines the extent of exploration or exploitation. By amalgamating these mechanisms, the GWO algorithm effectively traverses the EEG data landscape, extracting features such as power spectral density (PSD) across frequency bands, time-related attributes, spectral characteristics, event-related potentials (ERPs), functional connectivity metrics, complexity measures, and spatial features. This methodology amplifies the comprehension of brain activity patterns, cognitive processes, and neurological disorders, augmenting the interpretability and diagnostic precision of EEG data analysis.
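A compact GWO sketch implementing Eqs. (6)-(10), under the same caveats as the ABC sketch above: the bounds and the minimized toy objective are placeholders, and 30 wolves with 200 iterations follow the parameter list given later.

```python
import numpy as np

def gwo_optimize(f, lb, ub, n_wolves=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = lb.size
    Y = lb + rng.random((n_wolves, dim)) * (ub - lb)
    for t in range(iters):
        scores = np.apply_along_axis(f, 1, Y)
        alpha, beta, delta = Y[np.argsort(scores)[:3]]   # three best (minimization)
        b = 2 - 2 * t / iters                            # b decays linearly 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2 * b * rng.random(dim) - b          # A = 2b.k1 - b
                C = 2 * rng.random(dim)                  # C = 2.k2
                E = np.abs(C * leader - Y[i])            # Eq. (8)
                new_pos += leader - A * E                # Eq. (9)
            Y[i] = np.clip(new_pos / 3, lb, ub)          # Eq. (10)
    scores = np.apply_along_axis(f, 1, Y)
    return Y[np.argmin(scores)]

best = gwo_optimize(lambda v: np.sum(v ** 2), np.full(5, -1.0), np.full(5, 1.0))
```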

Proposed hybrid GWO-ABC algorithm

The Hybrid Artificial Bee Colony-Grey Wolf Optimizer is a unique algorithm made specifically for feature extraction from electroencephalogram signals. The ABC-GWO algorithm is considered well suited to this problem because it combines the adaptive search processes derived from the ABC and GWO algorithms. When applied to EEG feature extraction, ABC-GWO behaves in a varied way, which allows it to operate efficiently in the extensive search space. ABC-GWO imitates the foraging behavior of bees and the social ranking system of grey wolves to guide optimization with natural-intelligence-based mechanisms. It uses a systematic technique to determine the desired feature set. This is how it works:

1. Phase of Initialization: The algorithm is kicked off by generating a population of possible solutions, each of which corresponds to a feature set extracted from EEG signals. Each feasible solution is a feature vector, a transformation of the EEG data whose dimensions are the numerous aspects of brain activity.
2. ABC Phase: The algorithm employs the employed and onlooker bees to explore and exploit possible feature sets. The bees take these values obtained from the EEG data and continually adjust them in an attempt to further improve their performance. The employed bees exploit promising feature values and solutions with these values, while onlookers further exploit the solutions based on their performance.
3. GWO Phase: The grey wolf optimization phase complements the ABC phase by mimicking grey wolf hunting actions to fine-tune the feature sets. Grey wolves modify the values of the characteristic vectors based on the best solutions found so far. Alpha, beta, and delta wolves steer the optimization process, while omega wolves surround and refine the solutions to enhance their quality.
4. Phase of Iterative Refinement: The method continuously improves the feature sets obtained from the EEG input during the iterative optimization phase. The system adjusts settings and chooses characteristics according to their efficacy, gradually approaching an optimal collection of characteristics.
5. Evaluation and Selection Phase: During each iteration, the algorithm evaluates the effectiveness of the feature sets based on predetermined metrics. Features that are most proficient in distinguishing between different brain states or activities are kept, while features with lower informational value are eliminated.
6. Convergence and Output Phase: The algorithm continues until it satisfies the convergence conditions, which could be a set limit of iterations or achieving a desired level of performance.

The resulting feature set from the ABC-GWO algorithm efficiently captures significant patterns and dynamics within the EEG signals. The parameter settings utilized for the proposed ABC-GWO are listed below:

• Artificial Bee Colony (ABC) Parameters:

– Number of employed bees: 50.
– Number of onlooker bees: 50.
– Limit for scout bees: 100.


– Maximum number of iterations: 200.

• Grey Wolf Optimizer (GWO) Parameters:

– Number of wolves (search agents): 30.
– Maximum number of iterations: 200.
– Coefficients a, A, C: dynamically updated, with a decreasing linearly from 2 to 0, while A and C are random vectors in the range [0, 2].

"rough the integration of ABC and GWO algorithms, the hybrid method e#ectively detects optimal
characteristics within EEG data, ultimately enabling improved examination and understanding of brain function.
By implementing this methodical approach, the full range of potential solutions is thoroughly investigated and
simultaneously, advantageous features are utilized, resulting in a signi$cant enhancement in the e#ectiveness
of EEG-based research and its applications in neuroscience and related $elds. "e systematic %ow of feature
extraction is detailed in the below algorithm:
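The original algorithm listing did not survive extraction, so what follows is a hedged sketch of the six-phase hybrid loop, chaining the ABC-style perturbation and the GWO-style pack movement; the population size, iteration budget, and minimized objective are assumptions consistent with the parameter lists above.

```python
import numpy as np

def hybrid_abc_gwo(f, lb, ub, pop=50, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = lb.size
    X = lb + rng.random((pop, dim)) * (ub - lb)          # 1. initialization
    for t in range(iters):                               # 4. iterative refinement
        for i in range(pop):                             # 2. ABC phase
            k, j = rng.integers(pop), rng.integers(dim)
            V = X[i].copy()
            V[j] += rng.uniform(-1, 1) * (X[k, j] - X[i, j])
            if f(V) < f(X[i]):                           # 5. greedy evaluation/selection
                X[i] = V
        scores = np.apply_along_axis(f, 1, X)            # 3. GWO phase
        alpha, beta, delta = X[np.argsort(scores)[:3]]
        b = 2 - 2 * t / iters
        for i in range(pop):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2 * b * rng.random(dim) - b
                C = 2 * rng.random(dim)
                new += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(new / 3, lb, ub)
    scores = np.apply_along_axis(f, 1, X)                # 6. convergence/output
    return X[np.argmin(scores)]
```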

Proposed methodology
The proposed research focuses on utilizing deep learning methods to detect emotions by analyzing electroencephalogram (EEG) signals. Various well-known deep learning models are implemented to classify data gathered from a sample of individuals. Furthermore, models combining Convolutional Neural Networks (CNN) with Long Short-Term Memory (LSTM) and Artificial Bee Colony-Grey Wolf Optimization (ABC-GWO) are developed with the aim of enhancing overall effectiveness. The proposed methodology was evaluated on two publicly available datasets (DEAP and SEED). The results demonstrate superior performance compared to existing methods, with an accuracy reaching up to 100%. This significant improvement can be attributed to the novel integration of ICA for artifact removal and the hybrid ABC-GWO optimization algorithm for feature extraction.

Convolutional neural network (CNN)

Traditional neural networks utilize matrix multiplication to link input and output, implying that every output unit interacts with every input unit, forming a fully connected network. As the number of neurons increases, the number of evaluations also increases. The incorporation of convolutional layers is a distinctive feature of CNNs. These layers substitute the standard matrix multiplication operation with a convolution kernel that convolves the input. Consider a convolutional layer connection with a convolution kernel of size three: for a convolutional layer l, each neuron connects only with three neurons from layer l−1, sharing the same set of three weight values, a concept known as parameter sharing. The convolution kernel, significantly smaller than the input size, facilitates sparse interactions. Characteristics such as sparse interaction and parameter sharing reduce the number of CNN parameters and significantly decrease the computational complexity of the neural network. Additionally, CNNs often incorporate a pooling layer, also known as a subsampling layer, to further reduce computational requirements. This layer uses the overall features of an adjacent area of an input location to represent the area's characteristics. For instance, max pooling uses the maximum value in the adjacent area as its output, while average pooling uses the average value in the adjacent area as its output. A pooled layer connection with

a max pooling of stride two significantly reduces the number of units after pooling, thereby increasing computational efficiency.
The architecture of the CNN consists of nine layers: four convolutional layers, two subsampling layers, two fully connected layers, and one Softmax layer. Convolutional layers CL1, CL2, CL3, CL4 perform convolution operations on the output of the previous layer using the current convolution kernel. The convolution kernel sizes for CL1 and CL2 are 5, while for CL3 and CL4 they are 3.
The output of layer l's kth neuron in a convolutional layer is calculated using the equation:

$$y_k^l = f\Big(\sum_{i \in N_k} y_i^{l-1} * z_k^l + c_k\Big) \quad (11)$$

In this equation, $N_k$ represents the effective range of the convolution kernel, $c_k$ represents the bias of layer l's kth neuron, and f(·) represents the Rectified Linear Unit (ReLU) activation function.
Subsampling layers SS1 and SS2 serve to minimize the input size of the subsequent layer, compress the dimension of the EEG data, reduce the number of computations, and further extract useful features. They use the max-pooling function, which keeps the maximum value in the adjacent region. The output of the subsampling layer l's kth neuron is evaluated as follows:

$$y_k^l = \text{subsample}\big(y_{k,\text{clust}}^{l-1}\big) \quad (12)$$

In this equation, subsample(·) represents the subsampling operation, and $y_{k,\text{clust}}^{l-1}$ represents the kth output cluster of layer l−1.
FC1, FC2, and FC3 are the fully connected layers. These layers are used to further increase the number of nonlinear operations. The output of the fully connected layer l's kth neuron is evaluated using the equation:

$$y_k^l = f\Big(\sum_{i=1}^{N} y_i^{l-1} * z_k^l + c_k\Big) \quad (13)$$

In this equation, $y_k^l$ represents the output of layer l's kth neuron, $c_k$ represents the bias of layer l's kth neuron, $z_k^l$ represents the weight vector between layer l's kth neuron and layer l−1's ith neuron, and N represents the total number of neurons in layer l−1. The utilized CNN architecture comprises four convolutional layers, two subsampling layers, and two fully connected layers with a Softmax output layer. The training parameters for the CNN are as follows:

• Learning rate: 0.001.
• Batch size: 64.
• Number of epochs: 50.
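A hedged Keras sketch of the nine-layer architecture described above (four convolutional layers with kernel sizes 5, 5, 3, 3, two max-pooling layers, two fully connected layers, and a softmax output), compiled with the stated training parameters; the filter counts, dense-layer widths, and input length are assumptions not given in the text.

```python
import tensorflow as tf

n_samples, n_classes = 8064, 3      # DEAP-like segment length; 3 emotion classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_samples, 1)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),    # CL1, kernel size 5
    tf.keras.layers.Conv1D(32, 5, activation="relu"),    # CL2, kernel size 5
    tf.keras.layers.MaxPooling1D(2),                     # SS1
    tf.keras.layers.Conv1D(64, 3, activation="relu"),    # CL3, kernel size 3
    tf.keras.layers.Conv1D(64, 3, activation="relu"),    # CL4, kernel size 3
    tf.keras.layers.MaxPooling1D(2),                     # SS2
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),       # FC1
    tf.keras.layers.Dense(64, activation="relu"),        # FC2
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, batch_size=64, epochs=50)  # stated parameters
```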

LSTM
Long Short-Term Memory (LSTM) networks, a specific type of recurrent neural network, are typically utilized to process sequential data with significant time intervals between individual data points. The architecture of the LSTM is illustrated in Fig. 3. Unlike other models that take a single-variable neuron input, an LSTM unit receives a four-variable input from a single input entry and three gates. The dimensions of the neural network can be determined through model training, with the state of each neuron governed by the input value and a corresponding weight. The problem of gradient disappearance in backpropagation can be addressed in an LSTM network by establishing three gates. Figure 4 presents the structure and the summary parameters of the LSTM.

Fig. 3. LSTM Architectural Design.


Recommended CNN-LSTM model for EEG-based emotion recognition

The CNN-LSTM model, a potent blend of Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs), is specifically tailored for the classification of EEG signals in the realm of human emotion recognition. This model excels in identifying spatially localized features within EEG signals while simultaneously accounting for the long-term dependencies inherent in sequence data. In the sphere of human emotion detection, where EEG signals serve as a reflection of neural activity linked to varying emotional states, the CNN-LSTM model delivers unmatched performance. In the proposed research, the CNN aspect of the model functions as a feature extractor, adeptly identifying spatial patterns embedded within EEG signals. These spatial features are subsequently input into the LSTM component, which leverages its capacity to model temporal dependencies to discern subtle interconnections between EEG data points over time. This amalgamation empowers the model to effectively distinguish between diverse emotional states based on patterns detected in EEG signals.
In particular, the LSTM layers of the model are designed with 64 and 32 units, respectively. This configuration enables them to identify both higher-level and lower-level temporal features within EEG data. Figures 5 and 6 represent the proposed workflow of the CNN-LSTM model, wherein LSTM units function as a link between the CNN layer's output and subsequent layers, which typically consist of a fully connected dense layer and a softmax classification layer. In order to tackle the issue of vanishing gradients during training, the SWISH activation function, as represented in Eq. (14), has been used in the LSTM layers.

$$f(x) = x \cdot \sigma(\beta x) \quad (14)$$

Here 'x' represents the input and σ(βx) is the scaled sigmoid function. When 'x' is positive, σ(βx) tends to '1', making f(x) grow linearly with 'x'; when 'x' is negative, σ(βx) tends to '0', making f(x) approach '0'.
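Eq. (14) translates directly into code; the sketch below defines the SWISH activation and attaches it to LSTM layers of the stated widths (β = 1 is an assumption; Keras's built-in swish uses the same default).

```python
import tensorflow as tf

def swish(x, beta=1.0):
    return x * tf.sigmoid(beta * x)      # f(x) = x . sigma(beta x), Eq. (14)

lstm_64 = tf.keras.layers.LSTM(64, activation=swish, return_sequences=True)
lstm_32 = tf.keras.layers.LSTM(32, activation=swish)
```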
This guarantees that the model can efficiently capture the temporal dynamics of the data. Additionally, to combat overfitting and improve the model's ability to generalize, dropout layers have been incorporated into the model's design. During the training phase, some neurons are deactivated randomly by these layers, which prevents the model from relying too heavily on a particular set of features. This enhances its capacity to make accurate predictions on new data. The classification output layer of the CNN-LSTM model consists of a fully connected layer followed by a softmax function, as in Eq. (15).

$$P(z = k \mid x) = \frac{e^{a_k}}{\sum_{j} e^{a_j}} \quad (15)$$

This setup allows the model to generate probability-based predictions for each emotional state, providing information on the likelihood of a particular emotional state based on input EEG signals. The combination of CNNs and LSTMs in the hybrid network model is a powerful method for detecting human emotions through EEG classification. This unique architecture effectively utilizes the individual strengths of both CNNs and LSTMs, providing a strong foundation for extracting significant features from EEG data and precisely categorizing emotions. As a result, this model has the potential to drive progress in affective computing and related disciplines.

Fig. 4. LSTM framework and Summary.


Fig. 5. Proposed CNN-LSTM Workflow.

Recommended RNN model for EEG-based emotion recognition

Our research involves implementing a Recurrent Neural Network (RNN) architecture, which comprises a layer with 128 units, a flattening layer, and a dense layer that uses a softmax activation function. After feature learning is conducted through the RNN layers, a dense layer is utilized to classify emotions from unprocessed EEG signals. For this investigation, we employ a fundamental RNN, the simplified variant of the RNN available in Keras. The optimization of the model is carried out using the Adam optimizer, and the loss function employed is sparse categorical cross-entropy. The application of our developed RNN model for emotion detection using EEG data is demonstrated in Fig. 7. The RNN layer can be mathematically represented as in Eq. (16):

$$hid_t = RNN(y_t, hid_{t-1}) \quad (16)$$

where $hid_t$ is the hidden state at time step 't', $y_t$ is the input at time 't', and $hid_{t-1}$ is the hidden state from the last time step. The output of the RNN layer is passed through a dense layer with softmax activation function as in Eq. (17).
$$P(y = k \mid x) = \frac{e^{z_k}}{\sum_{j} e^{z_j}} \quad (17)$$

where $P(y = k \mid x)$ is the probability of class 'k' given the input 'x', and $z_k$ is the output for class 'k'.
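A minimal Keras rendering of this RNN (SimpleRNN with 128 units, flattening, softmax dense layer, Adam, sparse categorical cross-entropy); the input shape and class count are assumptions.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8064, 1)),
    tf.keras.layers.SimpleRNN(128, return_sequences=True),  # hid_t = RNN(y_t, hid_{t-1})
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3, activation="softmax"),          # Eq. (17)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```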

Proposed CNN-ABC-GWO model for EEG-based emotion recognition

In light of the increasing complexity associated with training CNNs, primarily due to the intricate task of hyperparameter selection, we propose a unique approach termed CNN-ABC-GWO. This model amalgamates hybrid metaheuristics, specifically the Grey Wolf Optimizer (GWO) and Artificial Bee Colony (ABC) algorithms, to streamline the optimization process and augment the performance of the CNN through hyperparameter tuning. The CNN-ABC-GWO model, by harnessing the combined capabilities of GWO and ABC, aims to alleviate issues related to suboptimal hyperparameter configurations, thereby enhancing the overall efficiency and effectiveness of CNN training. The operation of the CNN-ABC-GWO model is delineated through a multi-step process as follows, represented in Fig. 8:


Fig. 6. Proposed framework of CNN + LSTM.

1. Initialization Phase: The algorithm commences by randomly initializing a set of CNN hyperparameters, including the number of convolution layers, filters per convolution layer, filter size, number of hidden layers, units per hidden layer, number of epochs, and learning rate. This step forms the basis for subsequent optimization using GWO and ABC.
2. GWO Phase: After initialization, the model proceeds to the GWO phase, where the standard GWO algorithm is utilized to update parameters and search-agent positions. This phase, governed by equations (6) to (10), facilitates exploration and exploitation of the solution space to effectively optimize the CNN hyperparameters.
3. ABC Phase: Following the GWO phase, the algorithm transitions to the ABC phase, where employed and onlooker bees collaborate to share information among candidate solutions. Equation (4) guides the modification of old solutions, enhancing exploration through the selection of neighboring solutions for information exchange. This phase contributes to improved hyperparameter tuning for the CNN by enabling better exploration and exploitation of the solution space.
4. Iterative Optimization: The GWO and ABC phases are repeated for a specific number of function evaluations, facilitating thorough examination of the solution space and enhancement of the CNN hyperparameters.


Fig. 7. RNN Framework and Summary.

Through this iterative approach, the model can adaptively modify hyperparameters using input from the optimization algorithms, ultimately improving the efficiency and performance of CNN training.
5. Output and Evaluation: After going through several rounds of optimization, the CNN-ABC-GWO model produces the optimal solution as its final result. This is made possible by harnessing the global search abilities of ABC and GWO, allowing for a well-balanced combination of exploration and exploitation. The model also effectively addresses the issue of diversity and prevents premature convergence. Furthermore, it aids in the discovery of the best hyperparameter settings, leading to improved performance and faster training of the CNN.

Thus, the proposed CNN-ABC-GWO model provides a comprehensive framework for hyperparameter tuning in CNN training. By integrating the GWO and ABC optimization techniques, the model effectively addresses challenges associated with hyperparameter selection, leading to improved CNN performance and efficiency in various applications.
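To make the tuning loop concrete, here is a hedged sketch that encodes the listed hyperparameters as a bounded vector and hands them to the hybrid optimizer sketched earlier; build_and_evaluate is a hypothetical helper (train the CNN briefly, return validation loss), and the bounds are illustrative, not the paper's settings.

```python
import numpy as np

# [n_conv_layers, filters, kernel_size, n_hidden, units, epochs, log10(lr)]
lb = np.array([2, 16, 3, 1, 32, 10, -4.0])
ub = np.array([4, 64, 5, 3, 256, 50, -2.0])

def objective(h):
    n_conv, filters, kernel, n_hidden, units, epochs, log_lr = h
    # Hypothetical helper: build the CNN with these hyperparameters, train
    # briefly, and return the validation loss (lower is better).
    return build_and_evaluate(int(n_conv), int(filters), int(kernel),
                              int(n_hidden), int(units), int(epochs),
                              10.0 ** log_lr)

# best_hparams = hybrid_abc_gwo(objective, lb, ub, pop=30, iters=20)
```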

Experimental setup
In this study, we adopt a cross-database strategy, using DEAP and SEED as reference databases, to demonstrate that the model does not exhibit bias towards any specific dataset. We employ hybrid learning networks for the detection of emotions from EEG data. The model is trained and validated across multiple databases to enhance its generalizability and adopt a completely subject-neutral approach. Beginning with the DEAP and SEED electrodes, we select the 14 most commonly used EEG electrodes. Emotions can be classified as either positive or negative based on their valence. The DEAP database is accessed according to its valence and arousal levels. The entries in the SEED database can be sorted into positive, negative, and neutral categories.
For a straightforward two-class classification, we map the positive class from the SEED database to the higher-valence class in the DEAP database, and the negative SEED class to the lower-valence DEAP class.

Performance measures
Different performance measurements, such as accuracy (Ax), precision (Px), sensitivity (Sv), specificity (Sf), F1-score, and kappa coefficient (K), have been used to confirm the suggested method's outcome.
True positive (TP), true negative (TN), false positive (FP), and false negative (FN) confusion-matrix parameters are used to express these measurements.
$$Acc = \frac{TP + TN}{TP + TN + FP + FN} \times 100 \quad (18)$$

$$Precision = \frac{TP}{TP + FP} \times 100 \quad (19)$$

$$Sensitivity = \frac{TP}{TP + FN} \quad (20)$$


Fig. 8. Proposed CNN-ABC-GWO workflow.


Fig. 9. CNN – (a) Model loss and (b) Model Accuracy.

Fig. 10. CNN + LSTM – (a) Model loss and (b) Model Accuracy.

$$Specificity = \frac{TN}{FP + TN} \quad (21)$$

$$F1\text{-}score = 2 \times \frac{Sensitivity \times Precision}{Sensitivity + Precision} \quad (22)$$
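Eqs. (18)-(22) can be checked with a few lines of Python; the labels below are toy values, not the paper's predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy    = (tp + tn) / (tp + tn + fp + fn)                 # Eq. (18), x100 for %
precision   = tp / (tp + fp)                                  # Eq. (19), x100 for %
sensitivity = tp / (tp + fn)                                  # Eq. (20)
specificity = tn / (fp + tn)                                  # Eq. (21)
f1 = 2 * precision * sensitivity / (precision + sensitivity)  # Eq. (22)
```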

Figures 9, 10, 11 and 12 depict the model loss and model accuracy of the CNN, CNN + LSTM, RNN, and proposed CNN-ABC-GWO models, respectively. In Fig. 9, the CNN Model Loss graph, Fig. 9(a), displays the Training Loss (green) and Validation Loss (red) curves. Both curves fluctuate but generally show a declining trend as the number of epochs increases. The Training Loss starts at a higher point but decreases rapidly before stabilizing, while the Validation Loss starts lower but shows more fluctuations throughout. The CNN Model Accuracy graph, Fig. 9(b), depicts the Training Accuracy (green) and Validation Accuracy (red) curves. Both curves ascend with the number of epochs but plateau towards higher epoch values. The upward trend of the Training Accuracy remains consistent, while the Validation Accuracy experiences minor changes but still follows a similar pattern. These charts illustrate the model's learning journey, with the number of epochs on the x-axis and loss and accuracy on the y-axes of Fig. 9(a) and 9(b), respectively. The fluctuations and patterns in these lines offer valuable information about the model's effectiveness and progress as it continues to learn.


Fig. 11. RNN – (a) Model loss and (b) Model Accuracy.

Fig. 12. CNN + ABC + GWO – (a) Model loss and (b) Model Accuracy.

The loss and accuracy trends of the CNN + LSTM model are depicted in Fig. 10, consisting of two graphs. The loss graph exhibits a substantial decrease in both training and validation loss as the model progresses through 40 epochs, indicating successful learning. This decline is indicative of the model's capability to perform well on new data. The convergence of both training and validation accuracy in Fig. 10 suggests minimal overfitting. A higher accuracy plateau signifies superior model performance.
In Fig. 11, two graphs showcase the loss and accuracy of the RNN model over 60 epochs. The loss graph illustrates the training and validation loss, with both lines exhibiting fluctuations but overall decreasing as epochs progress. Spikes in the graph indicate moments where loss increased before resuming its downward trend, suggesting the model is effectively learning from the training data and enhancing its predictive ability. Similarly, the accuracy plot in Fig. 11(b) depicts the training and validation accuracy over 60 epochs, with both lines displaying a consistent upward trend as the number of epochs increases. The training accuracy approaches 1.0, indicating exceptional performance on the training data. Although the validation accuracy also improves, it levels off at around 0.8, indicating strong but not flawless performance on the validation data. These findings suggest that the model is effectively learning from the training data and is becoming more adept at making accurate predictions.


However, if we notice that the training accuracy continues to rise while the validation accuracy plateaus or
decreases a!er a certain point, it could be a sign of over"tting. #is means that the model has become overly
specialized to the training data and may not generalize well to new or unseen data.
Figure 12 provides two plots of the CNN + ABC + GWO model's loss and accuracy over 50 epochs. The loss plot displays both training and validation loss. Both metrics decrease sharply at first, but after around 10 epochs the validation loss starts increasing while the training loss continues to fall, a sign of overfitting. Simultaneously, the accuracy plot in Fig. 12(b) shows training accuracy reaching close to a perfect score while validation accuracy plateaus, again indicating some overfitting; nevertheless, the validation metrics remain high throughout. These observations collectively affirm that our approach, coupling a CNN with the Artificial Bee Colony (ABC) optimization algorithm enhanced by the Grey Wolf Optimizer (GWO), not only accelerates learning but also amplifies predictive precision, marking a significant stride beyond conventional architectures such as CNN and RNN, and hybrids such as CNN + LSTM. Figure 13 shows the ROC curve of the proposed CNN + ABC + GWO model, which achieves an accuracy of about 100% on the DEAP dataset.
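The paper does not specify how the ROC curve in Fig. 13 was generated; a common one-vs-rest construction for the three emotion classes is sketched below with scikit-learn, where y_true and y_score are placeholders for the test labels and the model's softmax outputs.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import auc, roc_curve
from sklearn.preprocessing import label_binarize

# y_true: integer labels (0 = negative, 1 = neutral, 2 = positive)
# y_score: array of shape (n_samples, 3) with softmax probabilities
y_bin = label_binarize(y_true, classes=[0, 1, 2])
for i, name in enumerate(['negative', 'neutral', 'positive']):
    fpr, tpr, _ = roc_curve(y_bin[:, i], y_score[:, i])
    plt.plot(fpr, tpr, label=f'{name} (AUC = {auc(fpr, tpr):.2f})')
plt.plot([0, 1], [0, 1], linestyle='--', color='grey')  # chance line
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend()
plt.show()
```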
In the context of our research, we computed key metrics, namely precision, recall, F1-score, and accuracy, on the SEED and DEAP datasets for each model. These results are presented in Table 2 and Fig. 14. The CNN model achieved an accuracy of approximately 97% on the SEED dataset and 98% on the DEAP dataset. Integrating the CNN with the Artificial Bee Colony (ABC) optimization algorithm enhanced by the Grey Wolf Optimizer (GWO), forming the hybrid CNN + ABC + GWO model, raised the accuracy to around 99% on both datasets. The RNN model yielded an accuracy of about 92% on the SEED dataset and 94% on the DEAP dataset. Our proposed model demonstrated superior performance, with an accuracy of nearly 99% on the SEED dataset and a perfect score of 100% on the DEAP dataset. These findings underscore the efficacy of the proposed model compared with traditional models such as CNN and RNN, and even the hybrid CNN + ABC + GWO model, particularly in terms of accuracy. This research contributes to ongoing efforts in deep learning to optimize model performance across datasets, and the results open new avenues for further exploration and refinement of the proposed model.
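Per-class precision, recall and F1-scores of the kind reported in Table 2 can be obtained in a single call; a sketch assuming integer test labels y_test and a trained model with softmax outputs:

```python
from sklearn.metrics import accuracy_score, classification_report

# Convert softmax probabilities to predicted class indices
y_pred = model.predict(x_test).argmax(axis=1)

print(classification_report(
    y_test, y_pred,
    target_names=['negative', 'neutral', 'positive']))
print('Accuracy:', accuracy_score(y_test, y_pred))
```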
Figure 15 presents the confusion matrices for four prominent machine learning models: CNN, CNN + LSTM, RNN, and CNN + ABC + GWO. These matrices provide a comprehensive and comparative view of the models' performance. The CNN model (Fig. 15a) excels in classifying neutral and positive sentiments but struggles to accurately identify negative sentiments. The CNN + LSTM model (Fig. 15b) shows improvement in identifying negative sentiments but still falls short of the other models. The RNN model (Fig. 15c) displays a balanced performance across all three sentiment categories.

Fig. 13. ROC Curve for the proposed CNN + ABC + GWO.


Model                Class      Precision   Recall   F1-score   Accuracy (SEED)   Accuracy (DEAP)
CNN                  Negative   1.00        0.99     0.99       97%               98%
                     Positive   1.00        0.97     0.99
                     Neutral    0.96        1.00     0.98
CNN + LSTM           Negative   1.00        0.95     0.97       98%               98%
                     Positive   1.00        0.94     0.97
                     Neutral    0.89        1.00     0.94
RNN                  Negative   0.92        0.92     0.93       92%               94%
                     Positive   0.92        0.91     0.92
                     Neutral    0.91        0.93     0.92
Proposed model       Negative   1.00        0.99     1.00       99%               100%
(CNN + ABC + GWO)    Positive   1.00        1.00     1.00
                     Neutral    0.99        1.00     1.00

Table 2. Performance measures of the proposed model.

Fig. 14. Accuracy comparison of the Proposed Models.

Finally, the CNN + ABC + GWO model (Fig. 15d) outperforms all other models in accurately classifying all three sentiment categories. These results are depicted through confusion matrices, which are essential tools for evaluating the effectiveness of classification models. The CNN + LSTM combination in Fig. 15b classifies negative and positive sentiments accurately but struggles with neutral sentiments. In contrast, the RNN model in Fig. 15c performs consistently across the negative, neutral, and positive categories. The CNN + ABC + GWO model in Fig. 15d excels at identifying positive sentiments but could benefit from further refinement in classifying negative and neutral sentiments.
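Confusion matrices such as those in Fig. 15 can be produced from the same test predictions; a sketch using scikit-learn's display helper (available in scikit-learn 1.0 and later):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

# Rows are true classes, columns are predicted classes.
ConfusionMatrixDisplay.from_predictions(
    y_test, y_pred,
    display_labels=['negative', 'neutral', 'positive'])
plt.show()
```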

Statistical analysis
"e proposed methodology was assessed utilising the DEAP and SEED datasets. "e $ndings indicate substantial
enhancements in accuracy, with the hybrid ABC-GWO algorithm attaining up to 100% accuracy. Further
statistical studies were conducted to enhance the comprehension of these results. "e accuracy rates for the
CNN-ABC-GWO model were computed as shown in Table!3, accompanied with their 95% con$dence intervals.
"e con$dence intervals delineate a range in which the actual accuracy rate is expected to reside, serving as an
indicator of the precision of the estimated accuracy.
"e CNN-ABC-GWO model was able to have an accuracy of 100% in the DEAP dataset and an accuracy rate
of 99% in the SEED dataset. To con$rm the same, it calculated 95% con$dence intervals. "e con$dence range
that was found to be connected with the DEAP dataset was an accuracy rate of 100% at the range of 98.5–100%.


Fig. 15. Confusion Matrix (a) CNN (b) CNN + LSTM (c) RNN (d) CNN + ABC + GWO.

Dataset   Accuracy   95% Confidence Interval
DEAP      100%       98.5–100%
SEED      99%        97.2–99.8%

Table 3. Proposed method accuracy rates with confidence intervals.

The SEED dataset, in turn, showed an accuracy rate of 99% with a confidence interval of 97.2–99.8%.
One-sample t-tests were undertaken to test the statistical significance of the accuracy rates against a baseline accuracy of 90%. For the DEAP dataset, the t-test yields a t-value of 4.36 with a p-value below 0.01, meaning the achieved 100% accuracy differs highly significantly from the baseline. For the SEED dataset, the t-test yields a t-value of 3.95, also with a p-value below 0.01, indicating that the 99% accuracy is statistically significant relative to the baseline.
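The paper reports only the resulting t-values and interval bounds; a sketch of how such statistics are typically computed follows, assuming per-fold accuracy scores from cross-validation (the fold_acc values below are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold accuracies from cross-validation
fold_acc = np.array([0.99, 1.00, 0.98, 1.00, 0.99])

mean = fold_acc.mean()
sem = stats.sem(fold_acc)  # standard error of the mean
# 95% confidence interval for the mean accuracy (t distribution)
ci_low, ci_high = stats.t.interval(0.95, df=len(fold_acc) - 1,
                                   loc=mean, scale=sem)

# One-sample t-test against the 90% baseline accuracy
t_stat, p_value = stats.ttest_1samp(fold_acc, popmean=0.90)
print(f'mean = {mean:.3f}, 95% CI = ({ci_low:.3f}, {ci_high:.3f})')
print(f't = {t_stat:.2f}, p = {p_value:.4f}')
```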

Discussion
"is paper presents a novel approach to recognise emotion from the EEG signal using ICA for artefact removal
and ABC-GWO for feature selection. Based on the results obtained using two publicly available datasets (DEAP
and SEED), the improvements in terms of accuracy compared to existing approaches are signi!cant. "e bene!ts,
issues, and limitations of this approach are discussed.
One benefit is the enhanced data quality obtained by applying ICA for artefact elimination. This method removes Electromyogram (EMG) and Electrooculogram (EOG) noise from the EEG channels, yielding high-quality data for analysis. The proposed approach combines the ABC and GWO algorithms into a hybrid meta-heuristic optimisation technique, ABC-GWO, permitting higher accuracy in feature extraction and, ultimately, emotion recognition. Strong evidence from tests on the DEAP and SEED datasets indicates that the suggested methodology is robust and generalises well, achieving accuracy of up to 100%. Furthermore, the integration of ICA and the new ABC-GWO


optimisation algorithm represents a qualitative leap forward that has not been explored in prior literature on
EEG-based emotion detection.
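As an illustration of the artefact-removal stage, ICA-based EOG cleaning can be sketched with the MNE-Python library. This is a generic recipe rather than the authors' exact pipeline, and it assumes `raw` is an mne.io.Raw EEG recording that includes an EOG channel; EMG components are usually flagged by visual inspection or similar heuristics.

```python
from mne.preprocessing import ICA

# ICA decomposes the multichannel EEG into independent components;
# high-pass filtering first improves the decomposition.
raw.filter(l_freq=1.0, h_freq=None)
ica = ICA(n_components=20, random_state=42)
ica.fit(raw)

# Flag components that correlate with the EOG channel, then
# reconstruct the signal without them.
eog_idx, eog_scores = ica.find_bads_eog(raw)
ica.exclude = eog_idx
raw_clean = ica.apply(raw.copy())
```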
However, the strategy has its own drawbacks. Combining several refined techniques such as ICA, ABC, and GWO increases system complexity and may demand greater computational capability in deployment. The preprocessing steps and hybrid optimisation algorithms take longer to complete and may therefore be unsuitable for real-time applications. The effectiveness of the proposed approach also depends strongly on the quality and characteristics of the EEG datasets used, and differences across datasets could affect the generality of the results.
The accuracy gains achieved in this work are highly relevant to the broader field of emotion recognition. Pairing ICA for artefact elimination with the hybrid ABC-GWO optimisation algorithm defines a novel method for any domain requiring efficient signal processing and feature extraction. The proposed method is largely scalable and versatile, suggesting it can improve the performance of emotion detection systems across numerous domains, such as human-computer interaction, mental health assessment, and smart-learning applications.
Related studies have reported varying levels of accuracy in EEG-based emotion recognition. Pandey et al. (2021) achieved median precision rates of about 54% on DEAP and SEED data using CNN models trained on EEG scalogram images27. Cimtay et al. (2020) achieved an average accuracy of 58.10% using a pre-trained CNN architecture evaluated on the DEAP dataset after training on the SEED data collection28. The suggested hybrid CNN-ABC-GWO model outperforms these techniques, achieving 100% accuracy on the DEAP dataset and 99% on the SEED dataset.
However, some possible biases and limitations of the study should be considered when interpreting the results. One limitation is the reliance on the specific characteristics of the DEAP and SEED datasets. While these databases are well established, they may not representatively sample the variety of EEG signals observed across populations. Given their limited size, larger and more diverse datasets are required to confirm the scalability and effectiveness of the proposed approach. The combination of advanced techniques in the proposed methodology also increases the computational resources and algorithmic expertise required to implement it; such complexity may limit the applicability of the strategy in real-life use. Furthermore, while the work assumes the ability to recognise general emotions regardless of subject, individual variations in topographical EEG patterns may introduce further complications. Subsequent study may investigate tailored models to tackle this issue.
This work opens several avenues for further research. First, the methodology should be tested on larger and more heterogeneous datasets to determine whether it generalises well and scales up. Second, further study of personalised models would reduce some of the variability in individual EEG signals and thus enhance the dependability of emotion identification systems. Third, effort should be targeted at improving the computational efficiency of the methodology to enable real-time applications. Finally, investigating how to integrate this approach with other modalities, such as facial expressions and vocal analysis, may yield an even more holistic emotion recognition system.

Ablation study
In this work, four machine learning models for detecting emotions from EEG data are compared: the baseline CNN, CNN + LSTM, RNN, and the proposed CNN + ABC + GWO model, so that the influence of each added component on performance can be observed. Each model was assessed on accuracy, precision, recall, and F1-score, backed by qualitative evidence from the confusion matrix analysis. The baseline CNN model proved promising, with 97% accuracy on the SEED dataset and 98% on the DEAP dataset. It handled neutral and positive sentiments well but performed poorly on negative sentiments, as evident from the confusion matrix analysis (Table 5). The CNN + LSTM model outperformed the baseline, achieving 98% accuracy on both the SEED and DEAP datasets; it performed best on negative and positive sentiment while having difficulty identifying neutral sentiment (Table 5). The RNN model offered balanced performance across all sentiment categories, achieving 92% accuracy on the SEED dataset and 94% on the DEAP dataset; despite not achieving the highest accuracy, it was consistent across all categories (Table 5). The advanced CNN + ABC + GWO model outshone all others, achieving an impressive 99% accuracy on the SEED dataset and a perfect score of 100% on the DEAP dataset. It demonstrated exceptional ability in classifying positive sentiments and a significant improvement over the baseline CNN, although the confusion matrix analysis (Table 5) indicates room for improvement in classifying negative and neutral sentiments.
The comparative analysis underscores the effectiveness of integrating the ABC and GWO optimization techniques in enhancing the performance of machine learning models for EEG-based emotion detection (Table 4). While the baseline CNN model demonstrated robust performance, the advanced CNN + ABC + GWO model showed significant improvement in accuracy and overall performance; a sketch of the optimiser's core update follows.
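The paper gives no pseudo-code for the hybrid optimiser. For orientation, the canonical GWO position update that such a hybrid would interleave with ABC's employed- and onlooker-bee phases looks roughly as follows; this is a simplified sketch, where `fitness` stands for whatever validation loss the candidate solution vectors achieve:

```python
import numpy as np

def gwo_step(wolves, fitness, a):
    """One Grey Wolf Optimiser iteration: every candidate moves toward
    the three best solutions (alpha, beta, delta). The coefficient `a`
    decays linearly from 2 to 0 over the run, shifting the search from
    exploration to exploitation."""
    order = np.argsort(fitness)              # lower fitness = better
    alpha, beta, delta = wolves[order[:3]]
    new_wolves = np.empty_like(wolves)
    for i, w in enumerate(wolves):
        positions = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(2, w.size)
            A = 2 * a * r1 - a               # controls step direction/size
            C = 2 * r2
            positions.append(leader - A * np.abs(C * leader - w))
        new_wolves[i] = np.mean(positions, axis=0)   # X = (X1+X2+X3)/3
    return new_wolves
```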
The confusion matrix analysis provides insight into the strengths and weaknesses of each model in classifying negative, neutral, and positive sentiments. While all models demonstrate proficiency in certain sentiment categories, the advanced model exhibits superior performance, particularly on positive sentiments. Future research could focus on further refining the advanced model and exploring additional optimization techniques to enhance classification accuracy across all sentiment categories.


Model                              Accuracy (%)           Precision              Recall                 F1-score
Baseline CNN                       97 (SEED), 98 (DEAP)   98 (SEED), 97 (DEAP)   99 (SEED), 98 (DEAP)   97 (SEED), 98 (DEAP)
CNN + LSTM                         98 (SEED), 98 (DEAP)   98 (SEED), 98 (DEAP)   95 (SEED), 94 (DEAP)   97 (SEED), 97 (DEAP)
RNN                                92 (SEED), 94 (DEAP)   92 (SEED), 91 (DEAP)   92 (SEED), 93 (DEAP)   93 (SEED), 92 (DEAP)
Advanced model (CNN + ABC + GWO)   99 (SEED), 100 (DEAP)  99 (SEED), 100 (DEAP)  99 (SEED), 100 (DEAP)  100 (SEED), 100 (DEAP)

Table 4. Performance metrics of emotion detection models.

Model            Negative   Neutral   Positive
Baseline CNN     Good       Fair      Good
CNN + LSTM       Good       Fair      Good
RNN              Good       Good      Good
Advanced model   Good       Fair      Good

Table 5. Confusion matrix analysis.

Conclusion and future enhancements


This paper presents an innovative method for emotion recognition utilising electroencephalogram (EEG) data, incorporating Independent Component Analysis (ICA) for artifact elimination and a hybrid meta-heuristic optimisation algorithm (ABC-GWO) for feature extraction. The principal findings indicate that ICA proficiently eliminates noise from Electromyogram (EMG) and Electrooculogram (EOG) artifacts, thereby enhancing data quality for analysis. Furthermore, the hybrid Artificial Bee Colony–Grey Wolf Optimisation (ABC-GWO) algorithm augments feature extraction, culminating in markedly improved emotion recognition accuracy of up to 100% on the DEAP dataset and 99% on the SEED dataset. The methodology's resilience and generalisability are demonstrated by its performance on different datasets, indicating its potential for diverse applications in emotion recognition. The elevated accuracy rates represent significant progress in the field, with the methodology relevant to various areas requiring precise signal processing and feature extraction, including human-computer interaction, mental health monitoring, and adaptive learning environments. The study recognises the need for validation on larger and more diverse datasets, exploration of personalised models to accommodate individual EEG signal variations, optimisation for real-time applications, and integration with other modalities such as facial expressions and voice analysis to develop a comprehensive emotion recognition system. The proposed methodology thus exhibits considerable improvements in emotion recognition accuracy, and by addressing the identified biases and limitations while pursuing the outlined research avenues, its relevance and influence can be further extended within the wider scope of emotion recognition research.

Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on request.

Received: 19 May 2024; Accepted: 19 November 2024

References
1. Santiago, H. C., Ren, T. I. & Cavalcanti, G. D. Facial expression recognition based on motion estimation, 2016 International Joint
Conference on Neural Networks (IJCNN) 1617–1624. (2016).
2. Zhang, Z. et al. Multiscale adaptive local directional texture pattern for facial expression recognition. KSII Trans. Internet Inf. Syst.
11 (2017).
3. Sadoughi, N. & Busso, C. Speech-driven animation with meaningful behaviors. Speech Commun. 110, 90–100 (2019).
4. Malatesta, L., Asteriadis, S., Caridakis, G., Vasalou, A. & Karpouzis, K. Associating gesture expressivity with affective representations. Eng. Appl. Artif. Intell. 51, 124–135 (2016).
5. Yoo, J., Kwon, J. & Choe, Y. Predictable internal brain dynamics in EEG and its relation to conscious states. Front. Neurorob. 8, 18
(2014).
6. Soares, J. M. et al. A hitchhiker’s guide to functional magnetic resonance imaging. Front. Neurosci. 10, 515 (2016).
7. Pusarla, N., Singh, A. & Tripathi, S. Learning DenseNet features from EEG based spectrograms for subject independent emotion
recognition. Biomed. Signal Process. Control. 74, 103485 (2022).
8. Konar, A. & Chakraborty, A. Emotion Recognition: A Pattern Analysis Approach; John Wiley & Sons: Hoboken, NJ, USA (2015).
9. Joshi, V. M. & Ghongade, R. B. Optimal number of electrode selection for EEG based emotion recognition using linear formulation of differential entropy. Biomed. Pharmacol. J. 13, 645–653 (2020).
10. Balducci, F., Grana, C. & Cucchiara, R. Affective level design for a role-playing videogame evaluated by a brain-computer interface and machine learning methods. Visual Comput. 33 (4), 413–427. https://doi.org/10.1007/s00371-016-1320-2 (2017).
11. Su, Z., Xu, X., Jiawei, D. & Lu, W. Intelligent wheelchair control system based on BCI and the image display of EEG. Proceedings of 2016 IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference, IMCEC 2016; Xi'an, China. pp. 1350–1354. (2016).


12. Campbell, A. et al. NeuroPhone: brain-mobile phone interface using a wireless EEG headset. Proceedings of the 2nd ACM
SIGCOMM Workshop on Networking, Systems, and Applications on Mobile Handhelds, MobiHeld ’10, Co-located with
SIGCOMM 2010; January 2010; New Delhi, India.
13. Bright, D., Nair, A., Salvekar, D. & Bhisikar, S. EEG-based brain controlled prosthetic arm. Proceedings of the Conference on
Advances in Signal Processing, CASP. ; June 2016; Pune, India. pp. 479–483. (2016).
14. Demirel, C., Kandemir, H. & Kose, H. Controlling a robot with extraocular muscles using EEG device. Proceedings of the 26th
IEEE Signal Processing and Communications Applications Conference, SIU. ; May 2018; Izmir, Turkey. (2018).
15. Kaggle. https://www.kaggle.com/datasets/samnikolas/eeg-dataset, Accessed on 02.12.2023.
16. Zheng, W. L. & Lu, B. L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural
networks. IEEE Trans. Auton. Ment. Dev. 7 (3), 162–175 (2015).
17. Mahmoud, A. et al. A CNN Approach for emotion Recognition via EEG. Symmetry 15 (10), 1822 (2023).
18. Wang, X. W., Nie, D. & Lu, B. L. EEG-based emotion recognition using frequency domain features and support vector machines,
in: International Conference on Neural Information Processing, Springer, pp. 734–743. (2011).
19. Özerdem, M. S. & Polat, H. Emotion recognition based on EEG features in movie clips with channel selection. Brain Inf. 4 (4), 241–252 (2017).
20. Huang, D., Guan, C., Ang, K. K., Zhang, H. & Pan, Y. Asymmetric spatial pattern for EEG-based emotion detection, in: The 2012 International Joint Conference on Neural Networks (IJCNN), IEEE, pp. 1–7. (2012).
21. Jiang, W., Liu, G., Zhao, X. & Fu, Y. Cross-subject emotion recognition with a decision tree classifier based on sequential backward selection, in: 2019 11th International Conference on Intelligent Human-Machine Systems and Cybernetics, vol. 1, pp. 309–313. (2019).
22. Yohanes, R. E. J., Ser, W. & Huang, G. B. Discrete wavelet transform coefficients for emotion recognition from EEG signals, in: 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 2251–2254. (2012).
23. Du, X. et al. An efficient LSTM network for emotion recognition from multichannel EEG signals. IEEE Trans. Affect. Comput. 1–12 (2020).
24. Zheng, W. L., Zhu, J. Y., Peng, Y. & Lu, B. L. EEG-based emotion classification using deep belief networks, in: 2014 IEEE International Conference on Multimedia and Expo (ICME), IEEE, pp. 1–6. (2014).
25. Song, T., Zheng, W., Song, P. & Cui, Z. EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Trans. Affect. Comput. 11 (3), 532–541 (2018).
26. Liu, Y. et al. Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network. Comput. Biol. Med. 123, 103927 (2020).
27. Pandey, P. & Seeja, K. Subject independent emotion recognition system for people with facial deformity: an EEG based approach.
J. Ambient Intell. Hum. Comput. 12, 2311–2320 (2021).
28. Cimtay, Y. & Ekmekcioglu, E. Investigating the use of pretrained convolutional neural network on cross-subject and cross-dataset
EEG emotion recognition. Sensors 20, 2034 (2020).
29. Tang, H., Liu, W., Zheng, W. L. & Lu, B. L. Multimodal emotion recognition using deep neural networks, in: International
Conference on Neural Information Processing, Springer, pp. 811–819. (2017).
30. Baghdadi, A. et al. DASPS: a database for anxious states based on psychological stimulation. arXiv preprint arXiv:1901.02942 (2019).
31. Bachmann, M., Lass, J. & Hinrikus, H. Single channel EEG analysis for detection of depression. Biomed. Signal. Process. Control.
31, 391–397 (2017).
32. Alakus, T. & Turkoglu, I. Emotion recognition with deep learning using GAMEEMO data set. Electron. Lett. 56, 1364–1367 (2020).
33. An, Y., Xu, N. & Qu, Z. Leveraging spatial-temporal convolutional features for EEG-based emotion recognition. Biomed. Signal. Process. Control. 69, 102743 (2021).
34. Rajpoot, A. S. & Panicker, M. R. Subject Independent Emotion Recognition using EEG signals employing attention driven neural
networks. arXiv Preprint arXiv: 2106.03461, 2021.
35. Li, J., Li, S., Pan, J. & Wang, F. Cross-subject EEG emotion recognition with self-organized graph neural network. Front. Neurosci. 15, 689 (2021).
36. Rahman, M. A., Hossain, M. F., Hossain, M. & Ahmmed, R. Employing PCA and t-statistical approach for feature extraction and
classi"cation of emotion from multichannel EEG signal. Egypt. Informat J. 21, 23–35 (2020).
37. Dong, H. et al. Mixed neural network approach for temporal sleep stage classification. IEEE Trans. Neural Syst. Rehabil Eng. 26, 324–333 (2017).
38. Bagherzadeh, S. et al. A subject-independent portable emotion recognition system using synchrosqueezing wavelet transform
maps of EEG signals and ResNet-18. Biomed. Signal Process. Control. 90, 105875 (2024).
39. Bagherzadeh, S. et al. Emotion recognition using continuous wavelet transform and ensemble of convolutional neural networks
through transfer learning from electroencephalogram signal. Front. Biomedical Technol. (2022).
40. Bagherzadeh, S. et al. A hybrid EEG-based emotion recognition approach using wavelet convolutional neural networks and
support vector machine. Basic. Clin. Neurosci. 14 (1), 87 (2023).
41. Bagherzadeh, S. et al. Developing an EEG-based emotion recognition using ensemble deep learning methods and fusion of brain effective connectivity maps. IEEE Access. (2024).

Author contributions
Data curation: KM and ES; Writing original draft: KM and ES; Supervision: ShS and AoK; Project administration: ShS and AoK; Conceptualization: SoS and BB; Methodology: SoS and BB; Verification: ShS and AoK; Validation: ShS and AoK; Visualization: SoS and BB; Resources: SoS and BB; Review & Editing: ShS and AoK. All authors reviewed the manuscript.

Declarations

Competing interests
The authors declare no competing interests.

Additional information
Correspondence and requests for materials should be addressed to S.S.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

© The Author(s) 2024
