
Unsupervised Learning in Reservoir Computing for EEG-based Emotion Recognition

Rahma Fourati, Student Member, IEEE, Boudour Ammar, Senior Member, IEEE, Javier Sanchez-Medina, Senior Member, IEEE, and Adel M. Alimi, Senior Member, IEEE

• R. Fourati, B. Ammar and A. M. Alimi are with the Research Groups in Intelligent Machines, Department of Electrical and Computer Engineering, National Engineering School of Sfax, University of Sfax, Sfax 3038, Tunisia. E-mail: {rahma.fourati, boudour.ammar, adel.alimi}@ieee.org
• J. Sanchez-Medina is with the Innovation Center for the Information Society, University of Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Spain. E-mail: javier.sanchez.medina@ieee.org

Abstract—In real-world applications such as emotion recognition from recorded brain activity, data are captured from electrodes over time, forming a multidimensional time series. In this paper, the Echo State Network (ESN), a recurrent neural network with great success in time series prediction and classification, is optimized with different neural plasticity rules for the classification of emotions from electroencephalogram (EEG) time series. These neural plasticity rules are a form of unsupervised learning adapted to the reservoir, i.e. the hidden layer of the ESN. More specifically, Oja's rule, the BCM rule and the Gaussian intrinsic plasticity rule are investigated in the context of EEG-based emotion recognition. The study also includes a comparison of offline and online training of the ESN. Testing on the well-known affective benchmark, the DEAP dataset, which contains EEG signals from 32 subjects, we find that pretraining the ESN with Gaussian intrinsic plasticity enhances the classification accuracy and outperforms an ESN pretrained with synaptic plasticity. Four classification problems of increasing complexity are considered, up to the more challenging inter-subject emotion discrimination. Our proposed method achieves higher performance than state-of-the-art methods.

Index Terms—Emotion recognition, Electroencephalogram, Echo state network, synaptic plasticity, intrinsic plasticity.

1 INTRODUCTION

Affective Computing [1] is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science [2]. In the 1990s, Picard pointed out in her book "Affective Computing" [3] that imbuing machines with the ability to detect, recognize, and process human emotion is necessary to further enhance human-machine interaction. Toward that more reliable interaction, Picard defined three major applications of affective computing: (i) systems that detect and recognize emotions, (ii) systems that express emotions (e.g., avatars, agents), and (iii) systems that feel emotions. Emotion Recognition (ER) in particular has drawn the most attention from scientists.

In the early stage of affective computing, the proposed works focused on the recognition and synthesis of facial expressions and the synthesis of voice inflection. Since then, a variety of physiological measurements have become available that yield clues to one's hidden affective state. Affective wearables open up new health and medical research opportunities and applications: medical studies could move from measuring controlled situations in the laboratory to measuring more realistic situations in life.

The electroencephalogram (EEG) is a direct measurement of brain activity. EEG attracts interest thanks to its excellent temporal resolution, i.e. it is recorded at millisecond scale. Note that humans cannot consciously control their brain activity. In addition, emotion recognition from other modalities, which are external manifestations, can lead to inaccurate inferences, particularly when the subjects included in an experiment control their emotions and suppress or alter them to exhibit false emotions. This issue does not arise with the EEG modality: emotion recognition based on EEG classifies the inner human emotions. The difficulty of EEG-based emotion recognition comes down to the variability of the recorded EEG signals from one individual to another in response to the same stimulus, which makes inter-subject emotion discrimination a challenging task.

Different authors in psychology have defined affect or emotion differently. Ekman [4] distinguished six basic emotions: happiness, sadness, surprise, disgust, anger, and fear. Russell [5], in contrast, defined two dimensions of emotion. Although the precise names vary, the two most common categories for the dimensions are arousal (calm/excited) and valence (negative/positive). Mehrabian [6], [7] showed that a third dimension is required to discriminate between anger and anxiety. This third dimension tends to be called dominance; it ranges from submissive (without control) to dominant (in control, empowered). Another interesting representation of emotion is due to Sun et al. [8], who adapted the valence-arousal space so that emotions can be handled by computers, introducing a typical fuzzy subspace.

One way to design an emotion recognition system is to use neural networks. There are two types of neural networks: feedforward and recurrent.
FeedForward Neural Networks (FFNNs) are characterized by their activation being fed forward from input to output through "hidden layers", as in the "Multi-Layer Perceptron" (MLP), the "Radial Basis Function Network" (RBFN) [9], etc. However, natural neural networks are usually connected in more complex structures, with recurrent synaptic connections. Recurrent Neural Networks (RNNs) try to emulate that recurrence, which makes them better suited to modeling dynamical systems.

ESNs are a class of RNN. They are three-layered networks with a high level of recurrence, whose hidden layer is called the reservoir. ESNs fit a number of applications well, such as time series prediction [10], [11], [12], [13] and robot control [14], usually showing high performance. As the EEG is a temporal signal, ESNs are a natural match for emotion recognition. In our proposed methodology, we use the reservoir to encode the spatio-temporal information of the EEG signals. It is well known that the major drawback of the ESN is its random initialization, which affects both the convergence state and the performance rate. In the current work, we circumvent this problem by adding an unsupervised learning stage for the random reservoir before the learning of the output layer. Here, we investigate reservoir adaptation with two different techniques: synaptic plasticity and intrinsic plasticity (IP). While the former adapts the weights of the reservoir synapses, the latter adapts the intrinsic excitability of each neuron in the reservoir to the emotion recognition task.

In this paper, we propose an enhanced ESN for EEG-based emotion recognition adopting the two-dimensional model of emotions. This work is distinguished by omitting the feature extraction step and feeding the ESN directly with the raw EEG as input. To assess this choice, we have also extracted power band features in the time-frequency domain in order to compare the influence of the input type on classification performance. The main contributions of this work are (1) an empirical study of reservoir behaviour improved with an unsupervised learning step in an EEG-based emotion recognition task, and (2) an extensive validation of the proposed methodology on four classification problems of increasing complexity: Low/High Valence discrimination, Low/High Arousal discrimination, Stress/Calm discrimination, and discrimination of 8 emotional states.

The remainder of this paper is composed of four sections. Section 2 first describes the existing affective benchmarks and then overviews state-of-the-art methods for EEG-based emotion recognition. Section 3 begins with the ESN model, then introduces the different plasticity rules and details the proposed approach. Section 4 presents the experimental results and discussion. Section 5 summarizes the paper and outlines our future work.

2 LITERATURE REVIEW OF EEG-BASED EMOTION RECOGNITION

Several works on EEG-based emotion recognition have been proposed, covering both signal processing and machine learning methodologies. In this section, we present the DEAP dataset and the recent works validated on it.

2.1 Existing Affective Benchmarks

At first blush, there are several works on EEG-based emotion recognition. Unfortunately, each proposed approach was validated on a specific EEG experiment, i.e. a private EEG dataset. To the best of our knowledge, there are five publicly accessible affective benchmarks: MAHNOB-HCI [15], DEAP [16], SEED [17], DREAMER [18] and HR-EEG4EMO [19]. The DEAP dataset is the most widely used. The elicitation protocol of the DEAP dataset used 40 emotional videos. It was conducted on 32 participants (16 male and 16 female) wearing a Biosemi ActiveTwo acquisition system with 32 EEG channels and 8 peripheral sensors. For each trial, participant ratings were expressed in the three-dimensional space of arousal, valence and dominance. Consequently, several classification problems can be derived, such as Low/High Valence (LVHV), Low/High Arousal (LAHA) and Low/High Dominance (LDHD). The authors of [20] defined two criteria for the discrimination of anxiety levels: the stress label is assigned if Valence <= 3 and Arousal >= 5, and the calm label if 4 <= Valence <= 6 and Arousal < 4. Decomposing the valence-arousal space into 4 quadrants leads to 4 emotions, namely LALV, LAHV, HALV and HAHV [21]. Combining the three dimensions leads to 8 emotional states [22], as shown in Table 1.

TABLE 1
Emotional states projection in the VAD space

VAD levels | Emotional state
HVLALD | Protected
HVLAHD | Satisfied
HVHALD | Surprised
HVHAHD | Happy
LVLALD | Sad
LVLAHD | Unconcerned
LVHALD | Frightened
LVHAHD | Angry
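As a concrete illustration of the labeling schemes described above, the following sketch maps DEAP self-assessment ratings (each on a 1-9 scale) to the class labels used throughout this paper. The stress/calm thresholds follow [20] and the 8-state mapping follows Table 1; splitting low/high at the midpoint of the rating scale (5) is our assumption for illustration, since the exact binary cut-off is not restated here.

```python
# Sketch: mapping DEAP self-assessment ratings (1-9 scales) to class labels.
# The binary threshold of 5 is an assumption; stress/calm criteria follow [20].

def binary_level(rating, threshold=5.0):
    """Return 'L' or 'H' for a 1-9 rating (threshold is assumed)."""
    return 'H' if rating >= threshold else 'L'

def quadrant_label(valence, arousal):
    """Four quadrants of the valence-arousal space: LALV, LAHV, HALV, HAHV [21]."""
    return binary_level(arousal) + 'A' + binary_level(valence) + 'V'

def stress_calm_label(valence, arousal):
    """Stress/calm criteria as defined in [20]; None if neither applies."""
    if valence <= 3 and arousal >= 5:
        return 'stress'
    if 4 <= valence <= 6 and arousal < 4:
        return 'calm'
    return None

def vad_label(valence, arousal, dominance):
    """One of the 8 emotional states of Table 1, e.g. HVHAHD -> Happy."""
    key = (binary_level(valence) + 'V' + binary_level(arousal) + 'A'
           + binary_level(dominance) + 'D')
    names = {'HVLALD': 'Protected', 'HVLAHD': 'Satisfied',
             'HVHALD': 'Surprised', 'HVHAHD': 'Happy',
             'LVLALD': 'Sad', 'LVLAHD': 'Unconcerned',
             'LVHALD': 'Frightened', 'LVHAHD': 'Angry'}
    return names[key]

print(quadrant_label(valence=7, arousal=8))          # HAHV
print(stress_calm_label(valence=2, arousal=6))       # stress
print(vad_label(valence=7, arousal=8, dominance=8))  # Happy
```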
2.2 Related Work on the DEAP Dataset

In the literature, most existing methods begin with signal decomposition and feature extraction steps. As EEG signals carry both time and frequency information, feature extraction methods differ accordingly and belong to three kinds: time-domain, frequency-domain and time-frequency-domain. Several works address the classification of 2 valence or arousal levels, as in [24], [25] and [26]. There is a lack of work on classifying more than 2 emotions, mainly because of the poor results achieved so far.

A very important aspect of the classifier is whether the task is user dependent or independent. Drawing training and test data from the same subject and tuning the classifier to that specific subject makes the task subject dependent, which improves the results. For instance, in [24] the classification of statistical features extracted from the DEAP dataset achieves 82.76% and 82.77% for 2 valence and arousal levels, respectively, with k-Nearest Neighbors (k-NN). But here the lack of generalization is the cost to pay. Hence, proposing a model trained and tested with data from independent users leads to easy application to a new subject, i.e. no requirement to design a new model for each new subject.

TABLE 2
Previous works on the DEAP dataset (multi-valued cells list the channel configurations and the corresponding performances in the same order)

Study | Year | Subject-independent | Input | #Channels | Classifier | Affective states | Performance (%)
[23] | 2019 | Yes | DWT coefficients | 1 | MLP | Happy/Sad | 58.50
[24] | 2018 | No | Statistical features | 32 | k-NN | LAHA; LVHV | 82.77; 82.76
[25] | 2018 | Yes | Entropy and energy features of 4 s | 10; 14; 18; 32 | k-NN | LAHA | 89.81±0.46; 92.24±0.33; 93.69±0.30; 95.69±0.21
[25] | 2018 | Yes | Entropy and energy features of 4 s | 10; 14; 18; 32 | k-NN | LVHV | 89.54±0.81; 92.28±0.62; 93.72±0.48; 95.70±0.62
[20] | 2017 | Yes | Entropy features | 32 | SVM | Stress and Calm | 81.31
[26] | 2017 | No | IMF features of 5 s | 8; 32 | SVM | LAHA | 71.99; 72.10
[26] | 2017 | No | IMF features of 5 s | 8; 32 | SVM | LVHV | 69.10; 70.41
[27] | 2017 | Yes | Raw EEG | 32 | ESN | LAHA; LVHV; 8 emotions | 68.28; 71.03; 68.79
[21] | 2017 | Yes | Differential entropy of 1 s | 32 | GELM | LALV, HALV, LAHV and HAHV | 69.67
[28] | 2017 | No | Raw EEG segment | 32 | LSTM | LAHA; LVHV | 85.65; 85.45
[29] | 2015 | No | PSD of 1 s | 32 | SVM | LAHA; LVHV | 64.30; 58.20
[29] | 2015 | No | DBN output from 1 s raw EEG | 32 | SVM | LAHA; LVHV | 64.20; 58.40
[30] | 2014 | Yes | Raw EEG | 32 | HMM | LAHA; LVHV | 55.00±4.5; 58.75±3.8
[31] | 2014 | Yes | Bandpower features | 10; 32 | SVM | LAHA | 64.90; 62.90
[31] | 2014 | Yes | PSD features | 10; 32 | SVM | LAHA | 63.00; 63.40
[31] | 2014 | Yes | Bandpower features | 10; 32 | SVM | LVHV | 64.90; 62.30
[31] | 2014 | Yes | PSD features | 10; 32 | SVM | LVHV | 56.40; 60.00
[22] | 2013 | Yes | Fractal dimension features | 16; 32 | SVM | 8 emotions | 65.63; 69.53

To achieve this merit, several works focus on the feature extraction step, aiming to find the most relevant features for EEG-based ER. Power spectral density (PSD) features computed with the Short-Time Fourier Transform, as in [29] and [31], are considered the baseline and the most widely used. Fractal Dimension (FD) [22] and entropy and energy features [25] have shown acceptable results, as reported in Table 2. Other works tested features from decomposition techniques such as the Discrete Wavelet Transform (DWT) [23] or Intrinsic Mode Functions (IMF) [26]. Most of the work on EEG-based ER has struggled to find informative features in EEG data.

These findings have reshaped the understanding of EEG signals and inspired subsequent works to analyze them directly instead of performing a feature extraction step. For example, [30] classified the preprocessed EEG signals using a Hidden Markov Model (HMM). Likewise, feature learning was performed by feeding the raw channel data to a Deep Belief Network (DBN) [29]; the new representation obtained from the DBN is then fed to a Support Vector Machine (SVM). A comparison with PSD features showed that the raw EEG data achieved better results. The task there is subject-dependent and the length of the trial is 1 second.

Note that EEG signals are acquired from a number of channels, so investigating the impact of specific channels on ER performance is very important. [23] finds that one channel, F4, is sufficient for an EEG-ER task. Also, the authors in [31] showed that bandpower features from 10 channels yield better results than using all 32 channels for classifying 2 levels of valence and arousal, respectively. In the current work, the ESN takes the preprocessed raw EEG as input, and the classification is also performed by the readout layer.

The use of ESNs for processing EEG data is not new at all, but the works based on ESNs are really few: in fact, there are only two, [32] and [33], and in both the input consists of event-related potential features. Bozhkov et al. [32] pretrained the reservoir with IP in order to obtain optimal values of the IP parameters, i.e. the gain and the bias. The next step is to compute the steady states of the reservoir. Bozhkov et al. [32] showed that the discrimination of the steady states is more efficient thanks to the adequate representation of the input data obtained with IP. The best performance, 76.9%, was found using Linear Discriminant Analysis (LDA). Bozhkov et al. also integrated a feature selection step by using 2D, 3D and 4D projections of the steady states. The input of the ESN is a feature vector with 252 attributes. The experiment was conducted on 26 female subjects viewing positive and negative emotional pictures from IAPS while wearing an EEG cap with 21 channels. The best recognition rate, up to 98.1%, was obtained using 4D projections with an SVM. In a follow-up work, Bozhkov et al. [33] demonstrated that the data representation obtained with reservoir computing outperforms the autoencoder; the highest classification accuracy achieved with a 2-layer autoencoder is 81%.

Recently, [27] proposed an ESN pretrained with IP and fed with preprocessed EEG data. The aim of that work was to show the effectiveness of the IP rule on the reservoir layer when it performs the feature extraction step. The classification of arousal levels, valence levels and 8 emotional states resulted in 68.28%, 71.03% and 68.79%, respectively.

Thus, the promising results of ESNs in ER encourage us to use the ESN further for feature extraction and classification of emotions from EEG signals. We differ from the state-of-the-art methods [32] and [33]: we use the ESN as an architecture for both the representation of the input data and its classification, whereas the projection and classification ranking in [32] and [33] are more complex and hence computationally expensive.

While our work focuses only on the EEG modality, we highlight a recently published work using an ESN pretrained with IP that takes both EEG and physiological signals as input [34]. After calculating the asymmetry index from the frontal lobe channels, some signals were discarded from the recognition process based on its values. Next, the Wavelet Packet Transform (WPT) is applied to extract 4 sub-bands. After that, k-means is used to cluster the generated WPT coefficients of each window in each channel. That work used the ESN as a feature selection step, i.e. to reduce the size of the first feature vector; the output reservoir states are then fed to an SVM with an RBF kernel. The authors of [34] tested their methodology on the DEAP dataset labeled with the 4 quadrants LALV, HALV, LAHV and HAHV, yielding a recognition rate of 78.2%. Compared to this, our feature extraction step is simpler and benefits from all the information in the raw EEG data.

Our objective goes beyond one form of plasticity: we further investigate a second form, as well as two techniques for ESN training, the offline and online modes. Even though the authors of [35] studied the effect of synaptic plasticity with these two modes, their study was evaluated on small time series and the IP form was ignored. Briefly, we distinguish our work by illuminating the important roles of both the reservoir adaptation and the training mode on a more challenging task, i.e. EEG-ER.

3 METHODOLOGY

In our methodology, we consider the preprocessed EEG signals. To justify this choice, we also perform a feature extraction step and feed power band features to the ESN to recognize emotions. Our methodology consists of three steps: feature extraction, reservoir pretraining with one kind of plasticity rule, and readout layer training for the classification.

3.1 Feature Extraction

EEG signals are not stationary. To handle this, it was shown that wavelet decomposition is more convenient than the Fourier transform [36]. In our study, we apply the DWT to decompose each EEG channel signal into different bands. The DWT coefficients are defined as follows:

$C_{x(t)}(l,n) = \int_{-\infty}^{\infty} x(t)\,\Psi_{l,n}(t)\,dt$   (1)

$\Psi_{l,n}(t) = 2^{-(l+1)}\,\Psi\big(2^{-(l+1)}(t - 2^{-1}n)\big)$   (2)

The EEG signal x(t) is correlated with a wavelet function $\Psi_{l,n}(t)$. The variables l and n are the scale and translation variables of the wavelet function, respectively. A dyadic scale is used, as in (2), to choose the scale and translation variables. Note that the original signal can be reconstructed if orthogonality is ensured [37]. The scale variable provides analysis in the frequency domain: a compressed version of the wavelet function corresponds to the high-frequency components of the original signal, while a stretched version corresponds to the low-frequency components. The translation variable provides analysis in the time domain. The DWT decomposition yields high-frequency coefficients, the "details", and coefficients forming a coarse approximation of the original signal in the time domain, the "approximation". After decomposition, power features are extracted from all bands. Note that power band features are the most popular features in the context of EEG-based emotion recognition; the definition of the EEG frequency bands differs slightly between studies. Finally, a feature vector is formed by concatenating all power features from all channels of the same EEG trial. Thus, the size of the feature vector is equal to the number of bands multiplied by the number of channels. Fig. 1 illustrates the feature extraction process adopted in our methodology.

Fig. 1. Power bands feature extraction process.
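The feature extraction pipeline of this subsection can be sketched with PyWavelets as below. The db5 wavelet, the 5 decomposition levels, the 128 Hz sampling rate and the band-to-level mapping follow Sections 3.1 and 4.1 (Table 3); defining band power as the mean squared coefficient of each sub-band is our assumption, as the paper does not spell out the exact power estimator.

```python
# Sketch of the band-power feature extraction: a 5-level db5 decomposition
# at 128 Hz; only the gamma, beta, alpha and theta sub-bands (D2-D5) are
# kept, giving 4 bands x 32 channels = 128 features per trial.
import numpy as np
import pywt

def band_powers(channel, wavelet='db5', level=5):
    """Return [gamma, beta, alpha, theta] powers of one EEG channel."""
    coeffs = pywt.wavedec(channel, wavelet, level=level)
    # coeffs = [A5, D5, D4, D3, D2, D1]; A5 (delta, filtered out in DEAP)
    # and D1 (noise) are skipped.
    d5, d4, d3, d2 = coeffs[1], coeffs[2], coeffs[3], coeffs[4]
    return [np.mean(d2 ** 2),   # gamma, 32-64 Hz
            np.mean(d3 ** 2),   # beta,  16-32 Hz
            np.mean(d4 ** 2),   # alpha,  8-16 Hz
            np.mean(d5 ** 2)]   # theta,  4-8 Hz

def trial_features(trial):
    """Concatenate band powers over all channels of one trial.

    trial: array of shape (n_channels, n_samples), e.g. (32, 8064) for DEAP.
    """
    return np.concatenate([band_powers(ch) for ch in trial])

trial = np.random.randn(32, 8064)   # placeholder standing in for a DEAP trial
print(trial_features(trial).shape)  # (128,)
```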

Fig. 2. Unsupervised learning of the reservoir layer using (a) a synaptic plasticity rule with offline mode and (b) the intrinsic plasticity rule with online mode.

3.2 ESN Model

ESNs are a type of recurrent neural network proposed by Jaeger [38]. An ESN is composed of an input layer, a reservoir and an output layer, often called the readout layer, as depicted in Fig. 2a. The recurrence in ESNs is handled through the recurrent connections between hidden units and a possible feedback connection from the output layer to the reservoir. A direct connection from the input to the output layer can be added.

The basic structure proposed for solving computational intelligence problems was the FFNN. However, for problems of a dynamic and temporal nature, FFNNs are not able to handle complex temporal machine learning tasks. As a solution, recurrent connections are added to the structure, giving birth to RNNs, which handle several problems such as the SVM learning process [39]. RNNs used to be trained by extending the canonical backpropagation algorithm to Back-Propagation-Through-Time (BPTT) [40]. One of the major limitations of BPTT is its high computational cost and its slow convergence. Hence, Jaeger [38] proposed the reservoir computing approach to simplify the learning process.

The reservoir is composed of sparsely connected neurons with cyclic connections. The weights from the input to the reservoir and the inner weights of the reservoir are randomly initialized and remain unchanged during the training phase. The simplicity of the ESN comes from the fact that only the weights of the readout layer need to be trained, which reduces to a linear regression. Consider a topology of I input neurons, R internal neurons and O output neurons. The first step in training an ESN is to collect the matrix of activation states of each neuron in the reservoir. The activation equation at each time step is expressed in (3):

$x(t) = f_{res}\big(W^{in}u(t) + W^{res}x(t-1)\big)$   (3)

where $W^{in}$ and $W^{res}$ are the weights of the input and reservoir layers, respectively, and $f_{res}$ is the non-linear activation function, usually a sigmoid. The second step consists of calculating the output weights. Here, we distinguish two modes: offline and online. In the offline mode, the linear regression is computed as in (4):

$W^{out} = Y_{target}\,X^{T}\big(XX^{T}\big)^{-1}$   (4)

Training in the online mode minimizes the error between the target output and the produced output, with the training samples presented sequentially. Originally proposed for the MLP model, the delta rule is a stochastic gradient descent method used to update one layer of an MLP neural network [41]. It is expressed as follows:

$\Delta W = \eta\,\big(y_{desired}(t) - y(t)\big)\,x(t)$   (5)

$W^{out} = W^{out} + \Delta W$   (6)

where $\eta$ is the learning rate and t is the time step of the learning iterations, t = 1, 2, ..., T; x(t) is the vector of neuron firing activation states at time step t. This mechanism computes the incremental adaptation of the readout weights. The output of the ESN is then generated according to (7), which amounts to linear regression if $f_{out}$ is a linear function:

$y(t) = f_{out}\big(W^{out}\,x(t)\big)$   (7)

For a specific input sample, the output neuron with the highest activation score wins, while the other neurons in the same layer are inhibited; this mechanism is called winner-take-all. Assuming that a trial is composed of N channels, the ESN is fed sequentially with these N signals one by one, so the ESN output is a channel label. The N successive channel labels are combined through majority voting, and the class label receiving the most votes is taken as the trial label.
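A minimal sketch of the ESN of equations (3)-(7) is given below, assuming tanh reservoir neurons, a linear readout, and the spectral radius of 0.85 recommended later in Section 4.4; the input scaling, random seed and learning rate are illustrative assumptions, not the authors' exact configuration. The hybrid mode introduced in Section 4.2 corresponds to calling fit_offline first and then refining with fit_online.

```python
import numpy as np

rng = np.random.default_rng(0)

class ESN:
    def __init__(self, n_in, n_res, n_out, spectral_radius=0.85):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Scale the random reservoir so its spectral radius stays below
        # unity, as required for the Echo State Property (Section 4.4).
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W_res = W
        self.W_out = np.zeros((n_out, n_res))

    def collect_states(self, u):
        """Eq. (3): drive the reservoir with an input sequence u of shape (T, n_in)."""
        x = np.zeros(self.W_res.shape[0])
        states = []
        for u_t in u:
            x = np.tanh(self.W_in @ u_t + self.W_res @ x)
            states.append(x)
        return np.array(states).T                      # X, shape (n_res, T)

    def fit_offline(self, X, Y):
        """Eq. (4): closed-form least-squares readout; Y has shape (n_out, T)."""
        self.W_out = Y @ X.T @ np.linalg.pinv(X @ X.T)

    def fit_online(self, X, Y, eta=1e-3):
        """Eqs. (5)-(6): delta-rule updates, one time step at a time."""
        for t in range(X.shape[1]):
            y = self.W_out @ X[:, t]                   # eq. (7) with linear f_out
            self.W_out += eta * np.outer(Y[:, t] - y, X[:, t])

    def channel_label(self, u):
        """Winner-take-all output for one channel signal u of shape (T, 1)."""
        y = self.W_out @ self.collect_states(u)
        return np.bincount(y.argmax(axis=0)).argmax()

def trial_label(esn, trial):
    """Majority vote over the N per-channel labels of one trial (Section 3.2)."""
    votes = [esn.channel_label(ch[:, None]) for ch in trial]
    return np.bincount(votes).argmax()
```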

3.3 Plasticity Rules for Unsupervised Learning of the Reservoir Layer

The generation of a fixed random reservoir makes RNN training fast. Meanwhile, studies in neuroscience have reported that the modification of connection strengths endows neural networks with a powerful learning ability. This mechanism, called synaptic plasticity, is illustrated in Fig. 2a. Here we detail the most widely used rules, namely Oja's rule and the BCM rule. Recent findings showed that a neuron is also able to change its intrinsic excitability to fit the distribution of its input, as shown in Fig. 2b; we therefore also explain the Gaussian intrinsic plasticity mechanism.

3.3.1 Oja's Rule

The Oja learning rule, proposed by Erkki Oja [42], is a model of how neurons in the brain or in artificial neural networks alter the strength of their connections, i.e. learn, over time. Oja's rule is an extension of the Hebbian learning rule [43] proposed in Hebb's book "The Organization of Behavior". With the classical Hebbian update scheme, the weights may grow very large when the number of iterations is large.

Oja's rule is based on normalized weights, which are typically normalized to unit length. This simple change results in a different, more general and stable weight update scheme compared to classical Hebbian learning, and it can be extended to non-linear neurons as well. It has been mathematically proven that when the input data is centered at the origin and Oja's update rule converges, the neuron learns the first principal component of the training data; hence Oja's rule is important in feature-based recognition systems [42].

The Oja learning rule can be described as in (8):

$\Delta W_{kj}(t) = \xi\,y_k(t)\,\big[x_j(t) - y_k(t)\,W_{kj}(t)\big]$   (8)

where $\Delta W_{kj}$ is the change of the synaptic weight between the postsynaptic neuron $y_k$ and the presynaptic neuron $x_j$ at time t, and $\xi$ is the learning rate. Note that Eq. (8) is a modified version of the Hebbian rule in which a forgetting factor is added to limit the growth of the synaptic weight and avoid the saturation of $W_{kj}$.

3.3.2 BCM Rule

The BCM rule [44], named after Bienenstock, Cooper and Munro, follows the Hebbian learning principle, with a sliding threshold as a stabilizing function to control the synaptic alteration. The main idea is that the sign of the weight modification depends on whether the postsynaptic response is above or below a threshold: responses above the threshold strengthen the active synapses, while responses below it weaken them. The threshold varies as a nonlinear function of the average output of the postsynaptic neuron, which is the central concept giving the BCM model its stability properties.

The BCM rule has several variants; here the one suggested in [44] is adopted, as given by (9) and (10):

$\Delta W_{kj}(t) = y_k\,(y_k - \theta_M)\,x_j / \theta_M$   (9)

$\theta_M = E\big[y_k^2\big] = \sum p_k\,y_k^2$   (10)

where $\theta_M$ is the modification threshold of the postsynaptic neuron $y_k$, $p_k$ is the probability of choosing the input vector from the dataset, $E[\cdot]$ is the temporal average, and $\Delta W_{kj}$ is the adjustment of the synaptic weight between the postsynaptic neuron $y_k$ and the presynaptic neuron $x_j$ at time t.

3.3.3 Intrinsic Plasticity Rule

From a biological point of view, Triesch [45] argued that the biological neuron does not only adapt its synapses; rather, it adapts its intrinsic excitability. While traditional learning algorithms update the weights of the connections between neurons, the IP rule updates the activation function of the neuron itself. In particular, Triesch derived the IP rule for the Fermi activation function and an exponential desired output distribution. In the same manner, Schrauwen et al. [46] extended IP to the hyperbolic tangent activation function with a Gaussian desired distribution.

The IP rule is local: it is applied to each single neuron to maximize the information about its input. From information theory, an entropy measure allows us to express this information maximization. Equation (11) measures the distance between the actual probability density of the neuron's output and the targeted probability density using the Kullback-Leibler divergence:

$D_{KL}(p(x), p_d(x)) = \int p(x)\,\log\Big(\frac{p(x)}{p_d(x)}\Big)\,dx$   (11)

The Kullback-Leibler divergence can be developed into (12) for a Gaussian distribution with mean $\mu$ and standard deviation $\sigma$:

$D_{KL}(p(x), p_d(x)) = -H(x) + \frac{1}{2\sigma^2}\,E\big[(x-\mu)^2\big] + \log\frac{1}{\sigma\sqrt{2\pi}}$   (12)

A balance is achieved between the maximization of the actual entropy H and the minimization of the expected moment term E. The updates of the gain a and bias b are computed using (13) and (14):

$\Delta a = \frac{\eta}{a} + \Delta b\,\big(W^{in}u + W^{res}x\big)$   (13)

$\Delta b = -\eta\Big(-\frac{\mu}{\sigma^2} + \frac{x}{\sigma^2}\big(2\sigma^2 + 1 - x^2 + \mu x\big)\Big)$   (14)

Each neuron in the reservoir layer is then activated using (15):

$x(t) = f_{res}\big(\mathrm{diag}(a)\,\big[W^{in}u(t) + W^{res}x(t-1)\big] + b\big)$   (15)

Schrauwen et al. [46] showed that updating the IP parameters for a Gaussian distribution with the hyperbolic tangent activation function is analogous to the update for an exponential distribution with the Fermi activation function. Hence, there is no dependence between the chosen non-linearity and the targeted distribution.

A very interesting property of the unsupervised IP rule is that it makes reservoir computing more robust: the internal dynamics can autonomously tune themselves to the optimal dynamic regime for emotion recognition, independently of the randomly generated weights or the scaling of the input. For the purpose of enhancing the reservoir, we follow previous works that used the IP rule [45], [46], [47], [48].
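The three adaptation rules of this section can be written as single-step updates as follows. Variable names mirror equations (8)-(15); the learning rates and the Gaussian target parameters (mu, sigma) are illustrative assumptions, and the explicit learning rate in the BCM step is our addition, as equation (9) leaves it implicit.

```python
import numpy as np

def oja_update(W, x_pre, y_post, xi=1e-4):
    """Eq. (8): Hebbian term plus a forgetting factor, applied per synapse."""
    return W + xi * (np.outer(y_post, x_pre) - (y_post ** 2)[:, None] * W)

def bcm_update(W, x_pre, y_post, theta_m, eta=1e-4):
    """Eq. (9): sliding-threshold BCM step. theta_m is the running average
    of y_post**2 from eq. (10), maintained by the caller."""
    gain = y_post * (y_post - theta_m) / theta_m
    return W + eta * np.outer(gain, x_pre)

def ip_update(a, b, net, x, mu=0.0, sigma=0.2, eta=5e-4):
    """Eqs. (13)-(14): gain/bias updates for tanh neurons with a Gaussian
    target; net = W_in @ u + W_res @ x_prev, x = current neuron outputs."""
    db = -eta * (-mu / sigma**2
                 + (x / sigma**2) * (2 * sigma**2 + 1 - x**2 + mu * x))
    da = eta / a + db * net
    return a + da, b + db

def ip_state(a, b, W_in, W_res, u, x_prev):
    """Eq. (15): reservoir activation with per-neuron gain a and bias b."""
    return np.tanh(a * (W_in @ u + W_res @ x_prev) + b)
```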

4 EXPERIMENTAL RESULTS AND DISCUSSION

In this section, we present and discuss our results under different configurations. To validate our method, we tested it on the DEAP benchmark so that our results can be compared with the current state-of-the-art methods. The experimental settings are detailed, the emotion recognition results are interpreted and, finally, a sensitivity analysis of the ESN with respect to its hyperparameters is presented.

4.1 Experimental Settings

For the feature extraction step, we chose the Daubechies function db5 with 5 levels for the wavelet decomposition. We chose 5 levels because the sampling rate of the recorded EEG signals in the DEAP dataset is 128 Hz. Moreover, a band-pass filter of [4-45 Hz] was applied when preprocessing the DEAP dataset, so the delta band was not considered in our work. Power features are generated from 4 bands over the 32 channels, resulting in a feature vector of size 128. Table 3 shows the frequency bands and the corresponding wavelet levels. To train and test our model, we used 80% of the data as the training partition and the remaining 20% as the test partition.

TABLE 3
Wavelet decomposition of an EEG channel signal

Bandwidth (Hz) | Frequency band | Decomposition level
64-128 | Noise | D1
32-64 | Gamma | D2
16-32 | Beta | D3
8-16 | Alpha | D4
4-8 | Theta | D5
1-4 | Delta | A5

4.2 Emotion Recognition Results

In this subsection, we present the results of the 4 classification problems on the DEAP dataset; they are summarized in Tables 4-7.

TABLE 4
Arousal discrimination results

System | Input type | Offline | Online | Hybrid
ESN-Oja rule | Feature | 59.77% | 49.22% | 60.01%
ESN-Oja rule | Signal | 54.30% | 58.98% | 60.29%
ESN-BCM rule | Feature | 61.72% | 54.14% | 62.17%
ESN-BCM rule | Signal | 56.25% | 50.00% | 59.34%
ESN-IP rule | Feature | 61.21% | 59.11% | 62.39%
ESN-IP rule | Signal | 68.28% [27] | 62.98% | 69.23%
SVM with PSD features [31] | Feature | 63.40% | - | -
HMM [30] | Signal | 55.00±4.5% | - | -

TABLE 5
Valence discrimination results

System | Input type | Offline | Online | Hybrid
ESN-Oja rule | Feature | 59.77% | 52.73% | 60.81%
ESN-Oja rule | Signal | 61.26% | 54.92% | 62.13%
ESN-BCM rule | Feature | 57.42% | 46.88% | 58.26%
ESN-BCM rule | Signal | 56.25% | 41.67% | 59.31%
ESN-IP rule | Feature | 53.52% | 55.86% | 57.94%
ESN-IP rule | Signal | 71.03% [27] | 66.23% | 71.25%
SVM with bandpower features [31] | Feature | 62.30% | - | -
HMM [30] | Signal | 58.75±3.8% | - | -

TABLE 6
Emotional states discrimination results

System | Input type | Offline | Online | Hybrid
ESN-Oja rule | Feature | 35.49% | 42.23% | 48.29%
ESN-Oja rule | Signal | 54.29% | 58.12% | 59.29%
ESN-BCM rule | Feature | 32.42% | 33.98% | 41.58%
ESN-BCM rule | Signal | 56.69% | 59.81% | 60.23%
ESN-IP rule | Feature | 38.22% | 44.65% | 49.58%
ESN-IP rule | Signal | 68.79% [27] | 69.25% | 69.95%
SVM with FD features [22] | Feature | 69.53% | - | -

TABLE 7
Stress/Calm discrimination results

System | Input type | Offline | Online | Hybrid
ESN-Oja rule | Feature | 65.45% | 47.27% | 60.36%
ESN-Oja rule | Signal | 61.82% | 65.45% | 67.27%
ESN-BCM rule | Feature | 65.45% | 50.91% | 64.63%
ESN-BCM rule | Signal | 49.09% | 54.55% | 58.29%
ESN-IP rule | Feature | 69.06% | 65.45% | 76.15%
ESN-IP rule | Signal | 41.82% | 49.09% | 61.27%
SVM with entropy features [20] | Feature | 81.31% | - | -

For arousal classification, using the ESN with offline training (linear regression) yields better results than online training with the delta rule, except for the reservoir pretrained with Oja's rule, for which online training of the output weights performs better, as shown in Table 4. The comparison is made with the work in [30], since it belongs to the same setting as the current work. We note that using features as input achieves higher results with the BCM rule. However, the best accuracy achieved is 69.23%, obtained with the ESN pretrained with IP, using the hybrid mode, and fed directly with the EEG channel signals.
Note that training the output weights first with linear regression and then with the delta rule constitutes the hybrid mode: instead of starting the online mode from random initial weights, we first train them with linear regression. As a result, the performance increases by up to 14% over the state-of-the-art method [30].

For valence classification, the reservoir pretrained with IP using hybrid training reaches the highest result, up to 71.25%. Our system again outperforms the existing work using an HMM with the signal as input [30], by up to 13%, as shown in Table 5. Furthermore, Oja's rule achieves a higher result with the signal as input (62.13%) than with features (60.81%), and the BCM rule likewise performs better with the signal (59.31%) than with the feature input (58.26%).

The discrimination of 8 emotional states is the most challenging task in our work, since the complexity of the system increases. The online mode is more efficient here than the offline mode for both signal and feature input. The hybridization allows us to achieve the best result, 69.95%. In the literature, most existing works classify arousal and valence levels; there is only one work that classifies 8 emotional states [22]. Our ESN trained with linear regression followed by the delta rule outperforms the SVM classifier with FD features. According to Table 6, the ESN is more robust when using the signal instead of features.

When discriminating stress and calm, we remark that the ESN with synaptic plasticity achieves the same accuracy with the signal as with the feature input, namely 65.45%. Combining linear regression and delta rule training enhances the performance from 61.82% to 67.27%. With the ESN with IP, we can raise the accuracy to 76.15% with bandpower features. When feeding the signal to the ESN with IP, we obtain accuracies of 41.82% and 49.09% with the offline and online modes, respectively; the hybrid mode shows its effectiveness in this case, reaching a recognition rate of 61.27%. The existing work combining an SVM classifier with entropy features achieves the best result, up to 81.31%, but we highlight that feeding the ESN with the EEG signal directly can achieve a better result than bandpower features, which is the aim of our work, as shown in Table 7.

Overall, the online mode is better than the offline mode for the classification of the 8 emotional states, but for the other problems the offline mode outperforms the online mode. We can conclude that the delta rule is more efficient than linear regression when the complexity of the architecture is high. Pretraining the reservoir with synaptic plasticity rules achieves higher results with the feature vector as input than with the raw EEG signal; the only case where synaptic plasticity reaches a good accuracy with the signal is the classification of stress/calm states using Oja's rule. In all cases, pretraining the ESN with the intrinsic plasticity rule achieves the best results, whether using features or the raw EEG signal as input. As a consequence, we recommend the use of the IP rule and the hybrid mode for the classification of EEG signals.

4.3 Impact of Plasticity Parameters

From the aforementioned results, we deduce that in three cases the IP rule achieves better results than the SP rules, i.e. arousal, valence and 8 emotional states discrimination, while for the discrimination of stress and calm states Oja's rule achieves the highest result.

In our study, we also examine a plasticity parameter, the number of iterations. Our results confirm previous findings for the IP rule: after 10 iterations, IP is stable and adding more iterations does not improve the result. In contrast, performing more iterations of Oja's rule, up to 100, increases the recognition rate, as shown in Fig. 3.

Fig. 3. Impact of Oja's parameter on the performance.
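The iteration experiments above assume a pretraining loop of the following shape: the chosen plasticity rule is applied over the training signals for a fixed number of passes before the readout is trained. This is a sketch under stated assumptions, not the authors' exact procedure; update_rule stands for any single-step synaptic update such as the Oja or BCM steps of Section 3.3 (the IP analogue updates the gains and biases instead of the reservoir weights).

```python
import numpy as np

def pretrain_reservoir(W_in, W_res, trials, update_rule, n_iterations):
    """Apply an unsupervised synaptic update for n_iterations passes.

    trials: iterable of input sequences, each of shape (T, n_in).
    update_rule(W, x_pre, y_post) -> new W, e.g. an Oja or BCM step.
    """
    for _ in range(n_iterations):
        for u in trials:
            x = np.zeros(W_res.shape[0])
            for u_t in u:
                x_new = np.tanh(W_in @ u_t + W_res @ x)
                W_res = update_rule(W_res, x, x_new)
                x = x_new
    return W_res

# Section 4.3 suggests around 10 passes suffice for the IP rule, while
# Oja's rule keeps improving with up to about 100 passes.
```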
4.4 Impact of Spectral Radius and Reservoir Size

One of the most important parameters in the ESN model is the spectral radius. We recall that Jaeger [38] recommended that the reservoir satisfy the Echo State Property (ESP), which relates asymptotic properties of the excited reservoir dynamics to the driving signal. Intuitively, the ESP states that the reservoir will asymptotically wash out any information from the initial conditions. The ESP is guaranteed for any input if the spectral radius is smaller than unity. As shown in Fig. 4, the best value of the spectral radius is 0.85 for our classification problems.

Fig. 4. Impact of spectral radius on the performance.

In order to evaluate the sensitivity of our proposed system, we also varied the reservoir size from 500 to 5000 neurons, as depicted in Fig. 5. This choice was not arbitrary; rather, we took into consideration the channel data length, which is 8064. The role of the reservoir here is to compress the input data and provide a new representation for classification purposes. The smallest size, 500, does not result in a good representation of the channel data, leading to low accuracy. For a medium size of 1500, the performance is acceptable for both the emotional states and the stress/calm discrimination, but for arousal and valence discrimination it is very low. Using 2500 as the reservoir size enhances the performance on the classification problems, which means the reservoir is able to build a new representation of the raw EEG signal that makes the new feature vector more discriminative. When increasing the reservoir size to 5000 neurons, the performance is slightly lower, mainly due to the increased complexity of the system. We can also conclude that the reservoir is stable with 2500 neurons.

Fig. 5. Impact of reservoir size on the performance.

5 CONCLUSION

This paper presented the prominent computational models of plasticity and their applicability as neural network adaptation mechanisms. The presented forms of plasticity are applied to randomly connected recurrent reservoirs to learn the structural information in EEG signals, achieve sparsity in neural connectivity and enhance the learning performance of emotion recognition. We advocate the use of intrinsic plasticity for reservoir pretraining, since it achieves the best results in comparison with synaptic plasticity for EEG signal classification. We suggest that synergistic learning of the reservoir could yield an ESN model that benefits from each paradigm. We expect that these suggestions will result in an even more robust reservoir computing approach for EEG-based emotion recognition.

ACKNOWLEDGMENTS

The research leading to these results has received funding from the Ministry of Higher Education and Scientific Research of Tunisia under grant agreement number LR11ES48.

REFERENCES

[1] S. Poria, E. Cambria, R. Bajpai, and A. Hussain, "A review of affective computing: From unimodal analysis to multimodal fusion," Information Fusion, vol. 37, pp. 98–125, 2017.
[2] J. Tao and T. Tan, "Affective computing: A review," in International Conference on Affective Computing and Intelligent Interaction. Springer, 2005, pp. 981–995.
[3] R. Picard, Affective Computing. The MIT Press, 1997. [Online]. Available: https://books.google.es/books?id=N1qqQgAACAAJ
[4] P. Ekman, "Basic emotions," in Handbook of Cognition and Emotion (T. Dalgleish and M. Power, eds.), John Wiley & Sons Ltd, 1999.
[5] J. A. Russell, "A circumplex model of affect," Journal of Personality and Social Psychology, vol. 39, no. 6, p. 1161, 1980.
[6] A. Mehrabian, "Framework for a comprehensive description and measurement of emotional states," Genetic, Social, and General Psychology Monographs, 1995.
[7] ——, "Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament," Current Psychology, vol. 14, no. 4, pp. 261–292, 1996.
[8] K. Sun, J. Yu, Y. Huang, and X. Hu, "An improved valence-arousal emotion space for video affective content representation and recognition," in IEEE International Conference on Multimedia and Expo (ICME 2009). IEEE, 2009, pp. 566–569.
[9] D. Mellouli, T. M. Hamdani, and A. M. Alimi, "Deep neural network with RBF and sparse auto-encoders for numeral recognition," in Intelligent Systems Design and Applications (ISDA), 2015 15th International Conference on. IEEE, 2015, pp. 468–472.
[10] N. Chouikhi, B. Ammar, N. Rokbani, A. M. Alimi, and A. Abraham, "A hybrid approach based on particle swarm optimization for echo state network initialization," in Systems, Man, and Cybernetics (SMC), 2015 IEEE International Conference on. IEEE, 2015, pp. 2896–2901.
[11] N. Chouikhi, R. Fdhila, B. Ammar, N. Rokbani, and A. M. Alimi, "Single- and multi-objective particle swarm optimization of reservoir structure in echo state network," in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 440–447.
[12] N. Chouikhi, B. Ammar, N. Rokbani, and A. M. Alimi, "PSO-based analysis of echo state network parameters for time series forecasting," Applied Soft Computing, vol. 55, pp. 211–225, 2017.
[13] N. Slama, W. Elloumi, and A. M. Alimi, "Distributed recurrent neural network learning via metropolis-weights consensus," in International Conference on Neural Information Processing. Springer, 2017, pp. 108–119.
[14] B. Ammar, N. Chouikhi, A. M. Alimi, F. Chérif, N. Rezzoug, and P. Gorce, "Learning to walk using a recurrent neural network with time delay," in International Conference on Artificial Neural Networks. Springer, 2013, pp. 511–518.
[15] M. Soleymani, J. Lichtenauer, T. Pun, and M. Pantic, "A multimodal database for affect recognition and implicit tagging," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 42–55, 2012.
[16] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, "DEAP: A database for emotion analysis using physiological signals," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.
[17] W.-L. Zheng and B.-L. Lu, "Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks," IEEE Transactions on Autonomous Mental Development, vol. 7, no. 3, pp. 162–175, 2015.
[18] S. Katsigiannis and N. Ramzan, "DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices," IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 1, pp. 98–107, 2018.
[19] H. Becker, J. Fleureau, P. Guillotel, F. Wendling, I. Merlet, and L. Albera, "Emotion recognition based on high-resolution EEG recordings and reconstructed brain sources," IEEE Transactions on Affective Computing, no. 1, pp. 1–1, 2017.
[20] B. García-Martínez, A. Martínez-Rodrigo, R. Zangróniz, J. M. Pastor, and R. Alcaraz, "Symbolic analysis of brain dynamics detects negative stress," Entropy, vol. 19, no. 5, p. 196, 2017.
[21] W.-L. Zheng, J.-Y. Zhu, and B.-L. Lu, "Identifying stable patterns over time for emotion recognition from EEG," IEEE Transactions on Affective Computing, 2017.
[22] Y. Liu and O. Sourina, "EEG databases for emotion recognition," in Cyberworlds (CW), 2013 International Conference on. IEEE, 2013, pp. 302–309.
[23] P. Pandey and K. Seeja, "Emotional state recognition with EEG signals using subject independent approach," in Data Science and Big Data Analytics. Springer, 2019, pp. 117–124.
[24] L. Piho and T. Tjahjadi, "A mutual information based adaptive windowing of informative EEG for emotion recognition," IEEE Transactions on Affective Computing, 2018.
[25] M. Li, H. Xu, X. Liu, and S. Lu, "Emotion recognition from multichannel EEG signals using k-nearest neighbor classification," Technology and Health Care, no. Preprint, pp. 1–11, 2018.
[26] N. Zhuang, Y. Zeng, L. Tong, C. Zhang, H. Zhang, and B. Yan, "Emotion recognition from EEG signals using multidimensional information in EMD domain," BioMed Research International, vol. 2017, 2017.
[27] R. Fourati, B. Ammar, C. Aouiti, J. Sanchez-Medina, and A. M. Alimi, "Optimized echo state network with intrinsic plasticity for EEG-based emotion recognition," in International Conference on Neural Information Processing. Springer, 2017, pp. 718–727.
[28] S. Alhagry, A. A. Fahmy, and R. A. El-Khoribi, "Emotion recognition based on EEG using LSTM recurrent neural network," Emotion, vol. 8, no. 10, 2017.
[29] X. Li, P. Zhang, D. Song, G. Yu, Y. Hou, and B. Hu, "EEG based emotion identification using unsupervised deep feature learning," 2015.
[30] C. A. Torres-Valencia, H. F. Garcia-Arias, M. A. A. Lopez, and A. A. Orozco-Gutiérrez, "Comparative analysis of physiological signals and electroencephalogram (EEG) for multimodal emotion recognition using generative models," in Image, Signal Processing and Artificial Vision (STSIVA), 2014 XIX Symposium on. IEEE, 2014, pp. 1–5.
[31] I. Wichakam and P. Vateekul, "An evaluation of feature extraction in EEG-based emotion prediction with support vector machines," in Computer Science and Software Engineering (JCSSE), 2014 11th International Joint Conference on. IEEE, 2014, pp. 106–110.
[32] L. Bozhkov, P. Koprinkova-Hristova, and P. Georgieva, "Learning to decode human emotions with echo state networks," Neural Networks, vol. 78, pp. 112–119, 2016.
[33] ——, "Reservoir computing for emotion valence discrimination from EEG signals," Neurocomputing, vol. 231, pp. 28–40, 2017.
[34] F. Ren, Y. Dong, and W. Wang, "Emotion recognition based on physiological signals using brain asymmetry index and echo state network," Neural Computing and Applications, pp. 1–11, 2018.
[35] M.-H. Yusoff, J. Chrol-Cannon, and Y. Jin, "Modeling neural plasticity in echo state networks for classification and regression," Information Sciences, vol. 364, pp. 184–196, 2016.
[36] C. B. Chaabane, D. Mellouli, T. M. Hamdani, A. M. Alimi, and A. Abraham, "Wavelet convolutional neural networks for handwritten digits recognition," in International Conference on Health Information Science. Springer, 2017, pp. 305–310.
[37] P. S. Addison, The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance. CRC Press, 2017.
[38] H. Jaeger, Tutorial on Training Recurrent Neural Networks, Covering BPPT, RTRL, EKF and the "Echo State Network" Approach. GMD-Forschungszentrum Informationstechnik Bonn, 2002, vol. 5.
[39] R. Fourati, C. Aouiti, and A. M. Alimi, "Improved recurrent neural network architecture for SVM learning," in Intelligent Systems Design and Applications (ISDA), 2015 15th International Conference on. IEEE, 2015, pp. 178–182.
[40] P. J. Werbos, "Backpropagation through time: What it does and how to do it," Proceedings of the IEEE, vol. 78, no. 10, pp. 1550–1560, 1990.
[41] J. C. Pemberton and J. J. Vidal, "When is the generalized delta rule a learning rule? A physical analogy," in IEEE International Conference on Neural Networks, vol. 1, 1988, pp. 309–315.
[42] E. Oja, "Simplified neuron model as a principal component analyzer," Journal of Mathematical Biology, vol. 15, no. 3, pp. 267–273, 1982.
[43] D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory. Wiley, 1949.
[44] G. Castellani, N. Intrator, H. Shouval, and L. Cooper, "Solutions of the BCM learning rule in a network of lateral interacting nonlinear neurons," Network: Computation in Neural Systems, vol. 10, no. 2, pp. 111–121, 1999.
[45] J. Triesch, "A gradient rule for the plasticity of a neuron's intrinsic excitability," in International Conference on Artificial Neural Networks. Springer, 2005, pp. 65–70.
[46] B. Schrauwen, M. Wardermann, D. Verstraeten, J. J. Steil, and D. Stroobandt, "Improving reservoirs using intrinsic plasticity," Neurocomputing, vol. 71, no. 7-9, pp. 1159–1171, 2008.
[47] J. J. Steil, "Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning," Neural Networks, vol. 20, no. 3, pp. 353–364, 2007.
[48] M. Wardermann and J. J. Steil, "Intrinsic plasticity for reservoir learning algorithms," in ESANN, 2007, pp. 513–518.

Rahma Fourati (IEEE Graduate Student Member '15) was born in Sfax. She has been a PhD student in Computer Systems Engineering at the National Engineering School of Sfax (ENIS) since October 2015. She received the M.S. degree in 2011 from the Faculty of Economic Sciences and Management of Sfax (FSEGS). She is currently a member of the REsearch Group in Intelligent Machines (REGIM). Her research interests include recurrent neural networks, affective computing, EEG signal analysis, support vector machines, and Simulink modeling.

Boudour Ammar (IEEE Student Member '08, Member '13, Senior Member '17) was born in Sfax, Tunisia. She graduated in computer science in 2005 and received the Master degree in automatic and industrial computing from the National School of Engineers of Sfax, University of Sfax, in 2006. She obtained a PhD degree on a recurrent neural network learning model for a biped walking simulator with the REsearch Group in Intelligent Machines (REGIM), University of Sfax, in February 2014. She is currently an assistant professor with the Department of Computer Engineering and Applied Mathematics at the National Engineering School of Sfax (ENIS). Her research interests include iBrain (artificial neural networks, machine learning, recurrent neural networks) and i-Health (autonomous robots, intelligent control, embedded systems, medical applications, EEG, ECG).
Javier Sanchez-Medina (M'13) received the M.E. degree from the Telecommunications Faculty in 2002 and the Ph.D. degree from the Computer Science Department, Universidad de Las Palmas de Gran Canaria, in 2008. He has authored over 30 international conference and 15 international journal papers. He is interested in the application of evolutionary and parallel computing techniques for intelligent transportation systems. He is also very active as a volunteer of the IEEE ITS Society, where he has served in a number of different positions. He is the Editor-in-Chief of the ITS Podcast and the ITS Newsletter, Vice President of the IEEE ITSS Spanish chapter, and was the General Chair of IEEE ITSC 2015.

Adel M. Alimi (IEEE Student Member '91, Member '96, Senior Member '00) was born in Sfax, Tunisia, in 1966. He graduated in Electrical Engineering in 1990, and obtained a Ph.D. and then an HDR, both in Electrical and Computer Engineering, in 1995 and 2000, respectively. He is now a Professor in Electrical and Computer Engineering at the University of Sfax. His research interests include applications of intelligent methods (neural networks, fuzzy logic, evolutionary algorithms) to pattern recognition, robotic systems, vision systems, and industrial processes. He focuses his research on intelligent pattern recognition, learning, analysis and intelligent control of large-scale complex systems. He is an Associate Editor and member of the editorial board of many international scientific journals (e.g. Pattern Recognition Letters, Neurocomputing, Neural Processing Letters, International Journal of Image and Graphics, Neural Computing and Applications, International Journal of Robotics and Automation, International Journal of Systems Science). He was a Guest Editor of several special issues of international journals (e.g. Fuzzy Sets and Systems, Soft Computing, Journal of Decision Systems, Integrated Computer-Aided Engineering, Systems Analysis Modeling and Simulations). He is an IEEE Senior Member and a member of IAPR, INNS and PRS.
