Article
Speech Emotion Recognition Using Convolution Neural
Networks and Multi-Head Convolutional Transformer
Rizwan Ullah 1 , Muhammad Asif 2 , Wahab Ali Shah 3 , Fakhar Anjam 2 , Ibrar Ullah 4 , Tahir Khurshaid 5, * ,
Lunchakorn Wuttisittikulkij 1, *, Shashi Shah 1 , Syed Mansoor Ali 6 and Mohammad Alibakhshikenari 7, *
useful in online tutorials, language translation, intelligent driving, and therapy sessions. In
a few situations, humans can be substituted by computer-generated characters with the
ability to act naturally and communicate convincingly by expressing human-like emotions.
Machines need to interpret the emotions carried by speech utterances. Only with such
an ability can a completely expressive dialogue based on joint human–machine trust and
understanding be accomplished.
With dominance, pleasure, and excitement, one can nearly define all emotions; how-
ever, the implementation of such a deterministic system using DL is very challenging and
complex. Therefore, in DL, statistical models and the clustering of samples are used to
qualitatively classify emotions such as sadness, happiness, and anger. For the classification
and clustering of emotions, features must be extracted from speech, usually relying on
different types of prosody, voice quality, and spectral features [37]. The prosody features
usually include the fundamental frequency (F0), intensity, and speaking rate, but they
cannot confidently discriminate between angry and happy emotions. The features asso-
ciated with voice quality are usually the most successful in determining the emotions of
the same speaker. However, these features vary from speaker to speaker, making them
difficult to use in speaker-independent settings [38]. On the other hand, spectral features are
widely used to determine emotions from speech. These features can confidently distinguish
anger from happiness. However, the magnitudes and shifts of the formant frequencies for
identical emotions change across different vowels, which increases the complexity of the
speech emotion recognition system [39]. For all the feature types, there are several standard
representations of features. Prosody features are typically represented by F0 and measure
the speaking rates [40], whereas spectral features are defined by cepstrum-based feature
representations. Mel-frequency cepstral coefficients (MFCC) or linear prediction cepstral
coefficients (LPCC) are commonly used spectral features along with formants, and other
information can also be used [41]. Finally, the voice quality features usually include the
normalized amplitude quotient, shimmer, and jitter [42].
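As a concrete illustration of the prosody features mentioned above, the following is a minimal sketch that extracts an F0 contour and a frame-energy curve with librosa and summarizes them with simple statistics; the file name, sample rate, frame settings, and F0 search range are assumptions for illustration, not values used in this paper.

```python
import librosa
import numpy as np

# Illustrative prosody feature extraction (F0 and energy); the file name,
# sample rate, frame settings, and F0 range are assumptions, not paper values.
y, sr = librosa.load("utterance.wav", sr=16000)

# Fundamental frequency (F0) contour via probabilistic YIN (NaN for unvoiced frames).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Frame-level intensity proxy: root-mean-square energy.
rms = librosa.feature.rms(y=y, frame_length=1024, hop_length=256)[0]

# Simple utterance-level prosody statistics (mean/std of voiced F0 and energy).
f0_voiced = f0[~np.isnan(f0)]
prosody = np.array([f0_voiced.mean(), f0_voiced.std(), rms.mean(), rms.std()])
print(prosody.shape)  # (4,)
```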
Feature extraction is a crucial step in many machine learning tasks, including speech
recognition, computer vision, and natural language processing. The goal of feature extrac-
tion is to transform raw data into a representation that captures the most salient information
for the task at hand. In speech recognition, features are typically extracted from the acous-
tic signal using techniques such as mel-frequency cepstral coefficients (MFCCs), which
have been widely used in the literature due to their effectiveness in capturing the spectral
envelope of a signal. Other popular techniques include perceptual linear predictive (PLP)
features, gamma tone features, and filterbank energies. In computer vision, features are
extracted from images using techniques such as SIFT, SURF, and HOG, which are effective
in capturing local visual patterns. In natural language processing, features are extracted
from text using techniques such as bag-of-words, n-grams, and word embeddings, which
capture the syntactic and semantic information in the text [43–48]. This study uses MFCCs
as input features for several reasons: (i) the MFCCs are treated as a grayscale image that is
fed simultaneously to the parallel CNNs and Transformer modules for spectral and
temporal feature extraction. (ii) MFCCs can capture the spectral envelopes of speech signals,
which is crucial in characterizing different emotional states. MFCCs are less sensitive to
variations in speaker characteristics, background noise, and channel distortions, making
them more robust for emotion recognition tasks. (iii) MFCCs are derived based on the hu-
man auditory system’s frequency resolution, which aligns well with how humans perceive
and differentiate sounds. By focusing on perceptually relevant information, MFCCs can
effectively capture the distinctive features related to emotions conveyed through speech.
(iv) MFCCs provide a compact representation of speech signals by summarizing the spec-
tral information into a smaller number of coefficients. This dimensionality reduction helps
to reduce the computational complexity and memory requirements of SER models while
still preserving the essential information needed for emotion classification. (v) By com-
puting MFCCs over short time frames and applying temporal analysis techniques such as
delta and delta–delta features, the dynamic changes in speech can be captured. Emotions
often manifest as temporal patterns in speech, and MFCCs enable the modeling of these
dynamics, enhancing the discriminative power of SER models.
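As a concrete illustration of points (i)–(v), the sketch below computes a 40-coefficient MFCC map together with its delta and delta–delta features using librosa; the 40 coefficients appear to correspond to the input size reported later in Table 4, while the file name, sample rate, and hop length are assumptions.

```python
import librosa
import numpy as np

# Illustrative MFCC extraction; file name, sample rate, and hop length are
# assumptions, while n_mfcc=40 follows the input size reported later in Table 4.
y, sr = librosa.load("utterance.wav", sr=16000)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40, hop_length=256)   # (40, frames)

# Temporal dynamics: first- and second-order differences (delta, delta-delta).
delta = librosa.feature.delta(mfcc)
delta2 = librosa.feature.delta(mfcc, order=2)

# Stack into a single feature map that can be treated as a grayscale image
# (or kept as separate channels) for a CNN/Transformer front end.
features = np.concatenate([mfcc, delta, delta2], axis=0)             # (120, frames)
print(mfcc.shape, features.shape)
```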
We have studied and examined the recent speech processing literature and observed
that speech signals follow a hybrid structure, such as temporal features and spatial features,
where both feature representations contain essential cues for SER. The majority of the
existing SER systems lack parallel neural architectures that process speech signals and acquire both spatial and temporal cues simultaneously.
The CNN-BLSTM-based SER method in [57] learns the spatial features and temporal
cues of speech symbols and increases the accuracy of the existing model. The SER extracts
spatial features and feeds them to the BLSTM in order to learn temporal cues for the
recognition of the emotional state. A DNN in [26] is used to compute the probability
distributions for various emotions given all segments. The DNN identifies emotions from
utterance-level feature representations, and, with the given features, ELM is used to classify
speech emotions. The CNN in [58] successfully detects emotions with 66.1% accuracy
when compared to the feature-based SVM. Meanwhile, the 1D-CNN in [59] reports 96.60%
classification accuracy for negative emotions. The CNN-based SER in [60] learns deep
features and employs a plain rectangular filter with a new pooling scheme to achieve more
effective emotion discrimination. A novel attention-based SER is proposed utilizing a
long attention process to link mel-spectrogram and INTERSPEECH-09 features to generate
the attention weights for a CNN. A deep CNN-based SER is constructed in [61] for the
ImageNet LSVRC-2010 challenge. The AlexNet trained with 1.2 million images and fine-
tuned with samples from the EMO-DB is used to recognize angry, sad, and happy emotions.
An end-to-end context-aware SER system in [62] classifies speech emotions using CNNs
followed by LSTM.
The difference compared to other deep learning SER frameworks lies in not using
the preselected features before network training and introducing raw input to the SER
system. The ConvLSTM-based SER in [63] adopted convolutional LSTM layers for the
state transitions so as to extract spatial cues. Four local feature learning blocks (LFLBs) are used for the extraction of
the spatiotemporal cues in the hierarchical correlational form of speech signals utilizing a
residual learning strategy. The BLSTM + CNN stacking-based SER in [64] matches the input
formats and recognizes emotions by using logistic regression. BC-LSTM relies on context-
aware utterance-level representations of features. This model captures the contextual cues
from utterances using a BLSTM layer. The SVM-DBN-based SER in [65] improves emotion
recognition via diverse feature representation. Gender-dependent and -independent results
show 80.11% accuracy. The deep-stride CNN-based SER in [66] uses raw spectrograms and
learns discriminative features from speech spectrograms. After learning the features, the
Softmax classifier is employed to classify speech emotions.
Attention mechanism-based deep learning for SER is another notable approach that
has achieved vast success; a complete review can be found in [67]. In classical DL-based
SER, all features in a given utterance receive the same attention. Nevertheless, emotions
are not consistently distributed over all localities in the speech samples. In attention-based
DL, attention is paid by the classifier to the given specific localities of the samples using
attention weights assigned to a particular locality of data. The SER system based on multi-
layer perceptron (MLP) and a dilated CNN in [68] uses channel and spatial attention to
extract cues from input tensors. Bidirectional LSTM with the weighted-pooling scheme
in [69] learns more illustrative feature representations concerning speech emotions. The
model focuses more on the main emotional aspects of an utterance, whereas it ignores other
aspects of the utterance. The self-attention and multitask learning CNN-BLSTM in [70]
improves the SER accuracy by 7.7% in comparison with the multi-channel CNN [71] when
applied to the IEMOCAP dataset. With speech spectrograms as input, gender classification
has been considered as a secondary task. The LSTM in [18] for SER demonstrates reduced
computational complexity by replacing the LSTM forget gate with an attention gate, where
attention is applied on the time and feature dimensions. The attention LSTM-based time-
delay SER in [72] extracts high-level feature representations from raw speech waveforms to
classify emotions.
The deep RNN-based SER in [73] learns emotionally related acoustic features and
aggregates them temporally into a compact representation at the utterance level. Another
deep CNN [74] is proposed for SER. In addition, a feature pooling strategy over time is
proposed, using local attention to focus on specific localities of a speech utterance that are
emotionally prominent. A self-attention mechanism utilizes a CNN via sequential learning
to generate the attention weights. Another attention-based SER is proposed that uses a
fully connected neural network (FCNN). Frame- and utterance-level features are used for
emotion classification by applying MLP and attention processes to classify emotions. A
multi-hop attention model for SER in [75] uses two BLSTM streams to extract the hidden
cues from speech utterances. The multi-hop attention model is applied for the generation
of final weights for the classification of emotions. Other important research related to SER
includes fake news and sentiment analysis, as emotions can also be found in fake news,
negative sentiments, and hate speech [76–81]. A short summary of the related literature is
given in Table 1. Accuracy is of central importance in speech emotion recognition (SER), where the primary goal is to predict the emotions in speech utterances as precisely as possible, and researchers therefore strive to improve it. Table 1, compiled from the literature discussed above, shows that models have advanced considerably in terms of accuracy, yet substantial room for further improvement remains. At the same time, the depth of the model (and hence its computational complexity) remains a crucial consideration for real-time
applications. Hence, our objective is to propose an SER model that achieves both high
accuracy and a compact size. To accomplish this, we present a novel approach distinct from
the models presented in the table, where CNNs combined with RNNs are predominantly
employed for SER. Instead, we incorporate Transformer encoders to obtain robust features
for network training, as they exhibit strong capabilities in capturing temporal features.
Figure 1. Proposed SER framework: Two parallel CNNs with Transformer encoder for feature
extraction. The extracted features are fed to the dense layer with a log Softmax classifier for emotional
state prediction.
Figure 2. Feature extraction process from input speech signals to MFCC features.
In a very deep architecture, the gradient becomes very small as the error propagates back toward the earlier layers. Therefore, to preserve the gradient, skip connections are added to the model, since it has been observed that the features learned in the earlier layers carry less of the information extracted from the input. Figure 3 presents the CNN architecture with three convolutional layers, each followed by max pooling, together with the skip connection. The two parallel CNNs have the same architectural structure as documented above.
Figure 3. The architecture of a single CNN with skip connections. The proposed model is composed
of two parallel CNNs as illustrated in the given architecture.
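To make the skip-connection design concrete, the following is a minimal PyTorch sketch of one convolutional stage with a residual path; the channel counts, the 1 × 1 projection on the skip path, and the pooling size are illustrative assumptions rather than the exact configuration of Figure 3.

```python
import torch
import torch.nn as nn

class ConvBlockWithSkip(nn.Module):
    """One conv + pooling stage with a residual (skip) connection.

    Channel counts and pooling sizes are illustrative; the 1x1 projection on the
    skip path is an assumption used to match shapes when channels/resolution change.
    """
    def __init__(self, in_ch: int, out_ch: int, pool: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(pool, stride=pool)
        # Project and pool the input so it can be added to the pooled conv output.
        self.skip = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
            nn.MaxPool2d(pool, stride=pool),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.pool(self.act(self.bn(self.conv(x))))
        return out + self.skip(x)  # skip connection preserves gradient flow

# Example: MFCC map of size (batch, 1, 40, 282) -> (batch, 16, 20, 141),
# matching the pooled output of the first stage reported later in Table 4.
x = torch.randn(2, 1, 40, 282)
print(ConvBlockWithSkip(1, 16, pool=2)(x).shape)
```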
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{n}}\right)V \qquad (1)$$
For the sequence output at time step t, the dot product is scaled by the square root of the hidden-state dimension n. Various self-attention strategies can be used. As per [82], the scaled dot-product self-attention (Q, K, V) is computed over a number of representation subspaces, with a weight matrix specific to each query, key, and value. In this manner, multi-head self-attention can compute an output term that is weighted differently depending on the subspace of the input sequence. Concatenating the outputs of all attention heads and multiplying the result by a weight matrix reduces the dimension of the encoded state to that of a single attention head. In this study, a Conv-1D layer, which operates on the encoded latent space regardless of the number of attention heads, is used within the Transformer encoder in place of the single feedforward layer. A Softmax prediction is then computed from the weighted sum of all layers in the multi-head attention architecture (shown in Figure 4).
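A compact sketch of the scaled dot-product attention in Equation (1) is given below; the tensor shapes are chosen only for illustration.

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Equation (1): softmax(Q K^T / sqrt(n)) V, with n the hidden-state dimension."""
    n = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(n)   # (..., T_q, T_k)
    weights = F.softmax(scores, dim=-1)               # attention weights per query
    return weights @ V                                # (..., T_q, d_v)

# Illustrative shapes: batch of 2 sequences, 70 time steps, hidden size 40.
Q = torch.randn(2, 70, 40)
K = torch.randn(2, 70, 40)
V = torch.randn(2, 70, 40)
print(scaled_dot_product_attention(Q, K, V).shape)    # torch.Size([2, 70, 40])
```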
Four identical stacked blocks of the Transformer encoder are used to classify various
emotions; each block is composed of one multi-head self-attention layer with a fully
connected feedforward layer. A skip connection and a normalization layer are included
subsequent to the multi-head self-attention layer. After the feedforward layer, a skip
connection is created, followed by normalization. The skip connection adds the original embeddings to the outputs of the multi-head self-attention layer. The normalization layer is similar to batch normalization; however, unlike batch normalization, it is adapted to sequential inputs and is also applied during testing. The combined embeddings
from the residual connection are subjected to the norm layer. Figure 5 depicts the design of
the Transformer encoder, replacing the single feedforward layer with the Conv-1D layer.
Figure 5. Transformer encoder architecture with input and output feature dimensions.
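The following is a minimal PyTorch sketch of one encoder block of the kind described above, with multi-head self-attention, skip connections, layer normalization, and a Conv-1D sublayer in place of the usual feedforward layer; the head count, kernel size, and the time/feature orientation assumed for the (40 × 70) input are illustrative assumptions rather than the exact settings of Figure 5.

```python
import torch
import torch.nn as nn

class ConvTransformerEncoderBlock(nn.Module):
    """Multi-head self-attention + residual/norm, with Conv-1D replacing the
    feedforward sublayer. Hyperparameters here are illustrative assumptions."""
    def __init__(self, d_model: int = 70, n_heads: int = 2, kernel_size: int = 3):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.conv = nn.Conv1d(d_model, d_model, kernel_size, padding=kernel_size // 2)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention sublayer with skip connection and normalization.
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Conv-1D sublayer (applied along the time axis) with skip connection.
        conv_out = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return self.norm2(x + conv_out)

# Illustrative input: (batch, time=40, features=70); the orientation of the
# (40 x 70) Transformer input described in the text is assumed here.
x = torch.randn(2, 40, 70)
print(ConvTransformerEncoderBlock(d_model=70)(x).shape)  # torch.Size([2, 40, 70])
```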
5. Experimentation
This section experimentally examines the proposed CTENet model for SER and demon-
strates its efficiency. We conducted extensive experiments by using the standard RAVDESS
dataset, an acted speech emotions dataset for SER. In addition, the IEMOCAP dataset
was used to examine the performance across different databases. The performance of the proposed CTENet model was compared against other state-of-the-art (SOTA) SER models reported in the recent literature. We also conducted an ablation study to confirm the contribution of multi-head attention in the CTENet model for SER. A
complete description of the speech emotion datasets, model training/testing/validation,
and emotion recognition output with discussion is given in the following sections.
5.1. Datasets
The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) [83] is an English-language scripted emotional corpus released in 2018. The RAVDESS is one of the most widely used emotional corpora and is largely used to recognize emotions in songs and
speech signals. This corpus is composed of 8 emotions recorded by 24 professionals of both
genders (12 females and 12 males) to produce scripts with changed emotions. Recently, the
speech part of the RAVDESS corpus has been frequently utilized in comparative analysis,
demonstrating the model’s generalization to many emotions. The RAVDESS speech corpus
contains 1440 audio files, which are recorded at a sampling rate of 48 kHz. Since the
RAVDESS speech corpus is small, it is prone to overfitting when used with highly parameterized DNN models such as the CTENet model. Therefore, we augmented the RAVDESS speech corpus. Since producing genuinely new samples is difficult, we added white
noise to the speech signals. The addition of white noise not only masked the effect of
random noise present in the training set but also created pseudo-new training samples,
which counterbalanced the impact of inherent noise in the speech corpus. Moreover, the
RAVDESS corpus is extremely clean and this augmentation also evaluated the predictions
of the CTENet model on noisy speech data. Note that noise addition was applied for
training data only. No noise was added to the testing data on which we made emotional
predictions. The spectrograms of the speech utterances from the RAVDESS corpus after
adding white noise are shown in Figure 6. The details of the RAVDESS corpus are illustrated in Table 2.

The Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus [84] is an English-language speech emotion corpus recorded at the University of Southern California (SAIL Lab). The corpus was recorded by 10 professional actors in five separate sessions, where each session was recorded by one male and one female actor. The corpus comprises approximately 12 h of audio–visual recordings, where each recorded utterance has an average length of 3.5 s and covers different emotions. This study considers five emotions,
namely happiness, sadness, anger, calm, and fear, from the IEMOCAP corpus. Table 3 gives
details of the speech emotions, audio file quantity, and contribution rate of each emotion.
The spectrograms of various speech emotions, including happiness, sadness, anger, and
neutrality, are plotted in Figure 7.
Figure 6. Spectrograms after adding white noise: 10 dB, 15 dB, and 25 dB.
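The additive-white-noise augmentation described above can be sketched as follows, using the SNR levels shown in Figure 6; the signal used here is a stand-in, and the exact noise-generation procedure of this study is not specified, so this is only an assumed implementation.

```python
import numpy as np

def add_white_noise(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Return a copy of `signal` with additive white Gaussian noise at the
    requested signal-to-noise ratio (in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Illustrative use: create pseudo-new training samples at the SNRs of Figure 6.
clean = np.random.randn(48000)          # stand-in for a 1 s, 48 kHz utterance
augmented = [add_white_noise(clean, snr) for snr in (10, 15, 25)]
print([a.shape for a in augmented])
```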
Table 2. Details of the emotions, audio files, and percentage contributions of the RAVDESS database
for the CTENet model.
Table 3. Details of the emotions, audio files, and percentage contributions of the IEMOCAP database
for the CTENet model.
The input to every convolutional layer is zero-padded by 1 to obtain the same tensor shape. At the end of the first convolutional layer in each parallel CNN block, the output feature map is max-pooled with a kernel of size (2 × 2) and stride 2, which reduces the MFCC map to a (20 × 141) output feature map. The non-overlapping max-pooling kernel reduces the output dimension to the input dimension divided by the kernel size. The output channel dimension
is then expanded to 16, creating an output (16 × 20 × 141) feature map. In the next two
convolutional layers of each CNN, the block has a max-pooling kernel size (4 × 4) with
stride 4. The feature maps at the end of the second and third convolutional layers are
(32 × 5 × 35) and (64 × 1 × 8), respectively. The output convolutional embedding length
for both parallel CNNs is (1 × 512). Complete details are provided in Table 4.
Table 4. CNN model architecture with input/output dimensions, filter size, and stride.
Layer | Input Dim | Padding | Padded Dim | Filter Size | Conv Output Dim | Max-Pool, Stride | Pooled Output Dim
1 | (1 × 282 × 40) | 1 | (1 × 284 × 42) | (1 × 3 × 3) | (16 × 40 × 282) | (2 × 2), 2 | (16 × 20 × 141)
2 | (16 × 20 × 141) | 1 | (16 × 22 × 143) | (16 × 3 × 3) | (32 × 20 × 141) | (4 × 4), 4 | (32 × 5 × 35)
3 | (32 × 5 × 35) | 1 | (32 × 7 × 37) | (32 × 3 × 3) | (64 × 5 × 35) | (4 × 4), 4 | (64 × 1 × 8)
Flatten (64 × 1 × 8); final convolutional embedding length (1 × 512).
The input MFCC coefficient maps to the Transformer encoder are max-pooled (1 × 4)
with stride 4 to obtain a (1 × 40 × 70) output feature map. Therefore, the input to the
Transformer embedding is (40 × 70). The final Transformer embedding length is (1 × 40).
The final embeddings from the two convolutional blocks and the Transformer block are concatenated (512 + 512 + 40) and used as input to the fully connected dense layer with 1064 nodes. The output from the final layer is a linear k-dimensional array, which is
applied to the log Softmax layer to recognize emotions. The output for RAVDESS is an
8-d array, whereas, for IEMOCAP, it is a 5-d array. The final output R is fed to the fully
connected dense layer, followed by the log Softmax layer to calculate the probabilities of
emotion class C, given as
$$X = R + \mathrm{ReLU}(R W_{R}) + b_{R} \qquad (4)$$
where $P^{(k)} \in \mathbb{R}^{C}$ and $\hat{z}^{(k)} \in \mathbb{R}^{1}$ are the probabilities of each emotion class. In training, the cross-entropy loss function is used, given as

$$\mathrm{Loss} = -\sum_{i}^{M} y_{i,C}^{k}\,\log_2\!\left(\hat{y}_{i,C}^{k}\right) \qquad (7)$$
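As a minimal sketch of the fusion head described above, the two 512-dimensional convolutional embeddings and the 40-dimensional Transformer embedding are concatenated into a 1064-dimensional vector, passed through the dense layer, and scored with a log Softmax; training then minimizes the negative log-likelihood of the true class, which corresponds to the cross-entropy loss in Equation (7). The layer sizes and class counts follow the text, whereas the intermediate activation and batch size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Concatenate CNN and Transformer embeddings, then classify with log Softmax."""
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.dense = nn.Linear(512 + 512 + 40, 1064)   # fused embedding -> 1064 nodes
        self.out = nn.Linear(1064, n_classes)          # k-dimensional output
        self.log_softmax = nn.LogSoftmax(dim=-1)

    def forward(self, cnn_a, cnn_b, trans):
        fused = torch.cat([cnn_a, cnn_b, trans], dim=-1)        # (batch, 1064)
        # ReLU between the dense and output layers is an assumption for this sketch.
        return self.log_softmax(self.out(torch.relu(self.dense(fused))))

# Illustrative batch: RAVDESS uses 8 emotion classes (IEMOCAP would use 5).
head = FusionHead(n_classes=8)
log_probs = head(torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 40))
targets = torch.randint(0, 8, (4,))
loss = nn.NLLLoss()(log_probs, targets)   # cross-entropy on log-Softmax outputs
print(log_probs.shape, float(loss))
```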
DeepNet [60] learns deep features and employs a plain rectangular filter with a new pooling
scheme to achieve more effective emotion discrimination. The other SOTA models include
GResNets [85]; SER using 1D-Dilated CNN, which is based on the multi-learning trick
(MLT) [86]; and the CNN-BLSTM-based SER method from [57].
Table 5. Speech emotion recognition in terms of accuracy (in %) for the RAVDESS and IEMOCAP corpora.
Tables 6 and 7 describe the experimental results of CTENet model prediction in terms
of overall model precision and the F1-score for the RAVDESS and IEMOCAP datasets. The
experimental results show that CTENet obtains improved F1 scores and precision in the individual speech emotion recognition tasks in most instances, particularly for the happy and calm emotions, for both the RAVDESS (8-way) and IEMOCAP (5-way) datasets. We confirmed the robustness of CTENet over the two standard datasets: it achieved 78.75% precision for RAVDESS and 74.80% precision for IEMOCAP, and F1 scores of 84.38% for RAVDESS and 82.20% for IEMOCAP. The CTENet accuracy for the two datasets was 82.31% and 79.42%, respectively. Figure 9 visualizes the complete performance of CTENet for both datasets in terms of precision, accuracy, and F1 score.
Figure 9. CTENet percentage performance: accuracy (Acc), precision (Prc), and F1 score using
RAVDESS and IEMOCAP datasets.
With the multi-head attention Transformer, the F1 score improves from 80.40% to 84.37% on the RAVDESS dataset, whereas it changes from 79.65% to 82.20% on the IEMOCAP dataset.
Table 8. CTENet prediction performance (in %) with and without multi-head attention Transformer.
The CTENet outperforms DeepNet, Deep-BLSTM, and MLT-DNet for the IEMOCAP dataset. On the other hand,
for the RAVDESS dataset, the CTENet achieves an 84.37% F1 score, which is 7.37% higher
than that of Deep-BLSTM and 21.26% higher than GResNets. Figure 10 shows the detailed
performance of CTENet over the SOTA models [87].
Figure 10. CTENet performance over SOTA for RAVDESS and IEMOCAP datasets.
In future work, we plan to extend the proposed framework with additional modalities to further increase the recognition accuracy using modality cues. In addition, we will apply
recently introduced models to achieve state-of-the-art SER results.
Author Contributions: Conceptualization and methodology, R.U. and L.W.; supervision, R.U. and
M.A. (Muhammad Asif); software, W.A.S., F.A., T.K., M.A. (Mohammad Alibakhshikenari), S.M.A.
and I.U.; writing, R.U. and S.S.; review and editing, T.K., M.A. (Mohammad Alibakhshikenari),
S.M.A., L.W. and I.U. All authors have read and agreed to the published version of the manuscript.
Funding: This research project is supported by the Second Century Fund (C2F), Chulalongkorn Uni-
versity. Mohammad Alibakhshikenari acknowledges the support from the CONEXPlus programme
funded by Universidad Carlos III de Madrid and the European Union’s Horizon 2020 research and
innovation programme under the Marie Sklodowska-Curie grant agreement No. 801538. The authors
also sincerely appreciate funding from Researchers Supporting Project number (RSPD2023R699),
King Saud University, Riyadh, Saudi Arabia.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The datasets are available at IEMOCAP: https://sail.usc.edu/iemocap/ (accessed on 21 January 2023) and RAVDESS: https://www.kaggle.com/datasets/uwrfkaggler/ravdess-emotional-speech-audio (accessed on 21 January 2023).
Acknowledgments: This research project is supported by the Second Century Fund (C2F), Chula-
longkorn University. Mohammad Alibakhshikenari acknowledges the support from the CONEX-
Plus programme funded by Universidad Carlos III de Madrid and the European Union’s Horizon
2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement
No. 801538. The authors also sincerely appreciate funding from Researchers Supporting Project
number (RSPD2023R699), King Saud University, Riyadh, Saudi Arabia.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Liu, Z.T.; Xie, Q.; Wu, M.; Cao, W.H.; Mei, Y.; Mao, J.W. Speech emotion recognition based on an improved brain emotion learning
model. Neurocomputing 2018, 309, 145–156. [CrossRef]
2. Nwe, T.L.; Foo, S.W.; De Silva, L.C. Speech emotion recognition using hidden Markov models. Speech Commun. 2003, 41, 603–623.
[CrossRef]
3. Patel, P.; Chaudhari, A.; Kale, R.; Pund, M. Emotion recognition from speech with gaussian mixture models via boosted gmm.
Int. J. Res. Sci. Eng. 2017, 3, 294–297.
4. Chen, L.; Mao, X.; Xue, Y.; Cheng, L.L. Speech emotion recognition: Features and classification models. Digit. Signal Process. 2012,
22, 1154–1160. [CrossRef]
5. Koolagudi, S.G.; Rao, K.S. Emotion recognition from speech: A review. Int. J. Speech Technol. 2012, 15, 99–117. [CrossRef]
6. Akçay, M.B.; Oğuz, K. Speech emotion recognition: Emotional models, databases, features, preprocessing methods, supporting
modalities, and classifiers. Speech Commun. 2020, 116, 56–76. [CrossRef]
7. Latif, S.; Rana, R.; Khalifa, S.; Jurdak, R.; Qadir, J.; Schuller, B.W. Survey of deep representation learning for speech emotion
recognition. IEEE Trans. Affect. Comput. 2021, 14, 1634–1654. [CrossRef]
8. Fayek, H.M.; Lech, M.; Cavedon, L. Evaluating deep learning architectures for Speech Emotion Recognition. Neural Netw. 2017,
92, 60–68. [CrossRef]
9. Tuncer, T.; Dogan, S.; Acharya, U.R. Automated accurate speech emotion recognition system using twine shuffle pattern and
iterative neighborhood component analysis techniques. Knowl.-Based Syst. 2021, 211, 106547. [CrossRef]
10. Singh, P.; Srivastava, R.; Rana, K.P.S.; Kumar, V. A multimodal hierarchical approach to speech emotion recognition from audio
and text. Knowl.-Based Syst. 2021, 229, 107316. [CrossRef]
11. Magdin, M.; Sulka, T.; Tomanová, J.; Vozár, M. Voice analysis using PRAAT software and classification of user emotional state.
Int. J. Interact. Multimed. Artif. Intell. 2019, 5, 33–42. [CrossRef]
12. Huddar, M.G.; Sannakki, S.S.; Rajpurohit, V.S. Attention-based Multi-modal Sentiment Analysis and Emotion Detection in
Conversation using RNN. Int. J. Interact. Multimed. Artif. Intell. 2021, 6, 112–121. [CrossRef]
13. Wang, K.; An, N.; Li, B.N.; Zhang, Y.; Li, L. Speech emotion recognition using Fourier parameters. IEEE Trans. Affect. Comput.
2015, 6, 69–75. [CrossRef]
14. Mao, Q.; Dong, M.; Huang, Z.; Zhan, Y. Learning salient features for speech emotion recognition using convolutional neural
networks. IEEE Trans. Multimed. 2014, 16, 2203–2213. [CrossRef]
15. Ho, N.H.; Yang, H.J.; Kim, S.H.; Lee, G. Multimodal approach of speech emotion recognition using multi-level multi-head fusion
attention-based recurrent neural network. IEEE Access 2020, 8, 61672–61686. [CrossRef]
16. Saleem, N.; Gao, J.; Khattak, M.I.; Rauf, H.T.; Kadry, S.; Shafi, M. Deepresgru: Residual gated recurrent neural network-augmented
kalman filtering for speech enhancement and recognition. Knowl.-Based Syst. 2022, 238, 107914. [CrossRef]
17. Zhao, J.; Mao, X.; Chen, L. Speech emotion recognition using deep 1D & 2D CNN LSTM networks. Biomed. Signal Process. Control
2019, 47, 312–323.
18. Xie, Y.; Liang, R.; Liang, Z.; Huang, C.; Zou, C.; Schuller, B. Speech emotion classification using attention-based LSTM. IEEE/ACM
Trans. Audio Speech Lang. Process. 2019, 27, 1675–1685. [CrossRef]
19. Wang, J.; Xue, M.; Culhane, R.; Diao, E.; Ding, J.; Tarokh, V. Speech emotion recognition with dual-sequence LSTM architecture.
In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),
Barcelona, Spain, 4–8 May 2020; pp. 6474–6478.
20. Zhao, H.; Xiao, Y.; Zhang, Z. Robust semisupervised generative adversarial networks for speech emotion recognition via
distribution smoothness. IEEE Access 2020, 8, 106889–106900. [CrossRef]
21. Shilandari, A.; Marvi, H.; Khosravi, H.; Wang, W. Speech emotion recognition using data augmentation method by cycle-
generative adversarial networks. Signal Image Video Process. 2022, 16, 1955–1962. [CrossRef]
22. Yi, L.; Mak, M.W. Improving speech emotion recognition with adversarial data augmentation network. IEEE Trans. Neural Netw.
Learn. Syst. 2020, 33, 172–184. [CrossRef]
23. Huang, C.; Gong, W.; Fu, W.; Feng, D. A research of speech emotion recognition based on deep belief network and SVM. Math.
Probl. Eng. 2014, 2014, 749604. [CrossRef]
24. Huang, Y.; Tian, K.; Wu, A.; Zhang, G. Feature fusion methods research based on deep belief networks for speech emotion
recognition under noise condition. J. Ambient. Intell. Humaniz. Comput. 2019, 14, 1787–1798. [CrossRef]
25. Schuller, B.W. Speech emotion recognition: Two decades in a nutshell, benchmarks, and ongoing trends. Commun. ACM 2018, 61, 90–99.
[CrossRef]
26. Guo, L.; Wang, L.; Dang, J.; Liu, Z.; Guan, H. Exploration of complementary features for speech emotion recognition based on
kernel extreme learning machine. IEEE Access 2019, 7, 75798–75809. [CrossRef]
27. Han, K.; Yu, D.; Tashev, I. Speech emotion recognition using deep neural network and extreme learning machine. In Proceedings
of the Interspeech, Singapore, 14–18 September 2014.
28. Tiwari, U.; Soni, M.; Chakraborty, R.; Panda, A.; Kopparapu, S.K. Multi-conditioning and data augmentation using generative
noise model for speech emotion recognition in noisy conditions. In Proceedings of the ICASSP 2020—2020 IEEE International
Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2014; pp. 7194–7198.
29. Badshah, A.M.; Ahmad, J.; Rahim, N.; Baik, S.W. Speech emotion recognition from spectrograms with deep convolutional neural
network. In Proceedings of the 2017 International Conference on Platform Technology and Service (PlatCon), Busan, Republic of
Korea, 13–15 February 2017; pp. 1–5.
30. Dong, Y.; Yang, X. Affect-salient event sequence modelling for continuous speech emotion recognition. Neurocomputing 2021, 458,
246–258. [CrossRef]
31. Chen, Q.; Huang, G. A novel dual attention-based BLSTM with hybrid features in speech emotion recognition. Eng. Appl. Artif.
Intell. 2021, 102, 104277. [CrossRef]
32. Atila, O.; Şengür, A. Attention guided 3D CNN-LSTM model for accurate speech based emotion recognition. Appl. Acoust. 2021,
182, 108260. [CrossRef]
33. Lambrecht, L.; Kreifelts, B.; Wildgruber, D. Gender differences in emotion recognition: Impact of sensory modality and emotional
category. Cogn. Emot. 2014, 28, 452–469. [CrossRef]
34. Fu, C.; Liu, C.; Ishi, C.T.; Ishiguro, H. Multi-modality emotion recognition model with GAT-based multi-head inter-modality
attention. Sensors 2020, 20, 4894. [CrossRef]
35. Liu, D.; Chen, L.; Wang, Z.; Diao, G. Speech expression multimodal emotion recognition based on deep belief network. J. Grid
Comput. 2021, 19, 22. [CrossRef]
36. Zhao, Z.; Li, Q.; Zhang, Z.; Cummins, N.; Wang, H.; Tao, J.; Schuller, B.W. Combining a parallel 2d cnn with a self-attention
dilated residual network for ctc-based discrete speech emotion recognition. Neural Netw. 2021, 141, 52–60. [CrossRef] [PubMed]
37. Gangamohan, P.; Kadiri, S.R.; Yegnanarayana, B. Analysis of emotional speech—A review. Towar. Robot. Soc. Believable Behaving
Syst. 2016, 1, 205–238.
38. Gobl, C.; Chasaide, A.N. The role of voice quality in communicating emotion, mood and attitude. Speech Commun. 2003, 40,
189–212. [CrossRef]
39. Vlasenko, B.; Philippou-Hübner, D.; Prylipko, D.; Böck, R.; Siegert, I.; Wendemuth, A. Vowels formants analysis allows
straightforward detection of high arousal emotions. In Proceedings of the 2011 IEEE International Conference on Multimedia and
Expo, Barcelona, Spain, 11–15 July 2011; pp. 1–6.
40. Lee, C.M.; Narayanan, S.S. Toward detecting emotions in spoken dialogs. IEEE Trans. Speech Audio Process. 2005, 13, 293–303.
41. Schuller, B.; Rigoll, G. Timing levels in segment-based speech emotion recognition. In Proceedings of the INTERSPEECH 2006,
Proceedings International Conference on Spoken Language Processing ICSLP, Pittsburgh, PA, USA, 17–21 September 2006.
42. Lugger, M.; Yang, B. The relevance of voice quality features in speaker independent emotion recognition. In Proceedings of the
2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07, Honolulu, HI, USA, 15–20 April
2007; Volume 4, p. IV-17.
43. Mutlag, W.K.; Ali, S.K.; Aydam, Z.M.; Taher, B.H. Feature extraction methods: A review. J. Phys. Conf. Ser. 2005, 1591, 012028.
[CrossRef]
44. Cavalcante, R.C.; Minku, L.L.; Oliveira, A.L. Fedd: Feature extraction for explicit concept drift detection in time series. In
Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016;
pp. 740–747.
45. Phinyomark, A.; Quaine, F.; Charbonnier, S.; Serviere, C.; Tarpin-Bernard, F.; Laurillau, Y. Feature extraction of the first difference
of EMG time series for EMG pattern recognition. Comput. Methods Programs Biomed. 2014, 177, 247–256. [CrossRef]
46. Schneider, T.; Helwig, N.; Schütze, A. Automatic feature extraction and selection for classification of cyclical time series data. Tech.
Mess. 2017, 84, 198–206. [CrossRef]
47. Salau, A.O.; Jain, S. Feature extraction: A survey of the types, techniques, applications. In Proceedings of the 2019 International
Conference on Signal Processing and Communication (ICSC), Noida, India, 7–9 March 2019; pp. 158–164.
48. Salau, A.O.; Olowoyo, T.D.; Akinola, S.O. Accent classification of the three major nigerian indigenous languages using 1d cnn
lstm network model. In Advances in Computational Intelligence Techniques; Springer: Singapore, 2020; pp. 1–16.
49. Zamil, A.A.A.; Hasan, S.; Baki, S.M.J.; Adam, J.M.; Zaman, I. Emotion detection from speech signals using voting mechanism on
classified frames. In Proceedings of the 2019 International Conference on Robotics, Electrical and Signal Processing Techniques
(ICREST), Dhaka, Bangladesh, 10–12 January 2019; pp. 281–285.
50. Bhavan, A.; Chauhan, P.; Shah, R.R. Bagged support vector machines for emotion recognition from speech. Knowl.-Based Syst.
2019, 184, 104886. [CrossRef]
51. Huang, Z.; Dong, M.; Mao, Q.; Zhan, Y. Speech emotion recognition using CNN. In Proceedings of the 22nd ACM International
Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 801–804.
52. Latif, S.; Rana, R.; Younis, S.; Qadir, J.; Epps, J. Transfer learning for improving speech emotion classification accuracy. arXiv 2018,
arXiv:1801.06353.
53. Xie, B.; Sidulova, M.; Park, C.H. Robust multimodal emotion recognition from conversation with transformer-based crossmodality
fusion. Sensors 2021, 21, 4913. [CrossRef] [PubMed]
54. Ahmed, M.; Islam, S.; Islam, A.K.M.; Shatabda, S. An Ensemble 1D-CNN-LSTM-GRU Model with Data Augmentation for Speech
Emotion Recognition. arXiv 2021, arXiv:2112.05666.
55. Yu, Y.; Kim, Y.J. Attention-LSTM-attention model for speech emotion recognition and analysis of IEMOCAP database. Electronics
2020, 9, 713. [CrossRef]
56. Ohi, A.Q.; Mridha, M.F.; Safir, F.B.; Hamid, M.A.; Monowar, M.M. Autoembedder: A semi-supervised DNN embedding system
for clustering. Knowl.-Based Syst. 2020, 204, 106190. [CrossRef]
57. Sajjad, M.; Kwon, S. Clustering-based speech emotion recognition by incorporating learned features and deep BiLSTM. IEEE
Access 2020, 8, 79861–79875.
58. Bertero, D.; Fung, P. A first look into a convolutional neural network for speech emotion detection. In Proceedings of the 2017
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017;
pp. 5115–5119.
59. Mekruksavanich, S.; Jitpattanakul, A.; Hnoohom, N. Negative emotion recognition using deep learning for Thai language. In
Proceedings of the 2020 Joint International Conference on Digital Arts, Media and Technology with ECTI Northern Section
Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI DAMT & NCON), Pattaya, Thailand,
11–14 March 2020; pp. 71–74.
60. Anvarjon, T.; Kwon, S. Deep-net: A lightweight CNN-based speech emotion recognition system using deep frequency features.
Sensors 2020, 20, 5212. [CrossRef]
61. Zhang, S.; Zhang, S.; Huang, T.; Gao, W. Speech emotion recognition using deep convolutional neural network and discriminant
temporal pyramid matching. IEEE Trans. Multimed. 2017, 20, 1576–1590. [CrossRef]
62. Trigeorgis, G.; Ringeval, F.; Brueckner, R.; Marchi, E.; Nicolaou, M.A.; Schuller, B.; Zafeiriou, S. Adieu features? end-to-end speech
emotion recognition using a deep convolutional recurrent network. In Proceedings of the 2016 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 5200–5204.
63. Kwon, S. CLSTM: Deep feature-based speech emotion recognition using the hierarchical ConvLSTM network. Mathematics 2020,
8, 2133.
64. Li, D.; Sun, L.; Xu, X.; Wang, Z.; Zhang, J.; Du, W. BLSTM and CNN Stacking Architecture for Speech Emotion Recognition.
Neural Process. Lett. 2021, 53, 4097–4115. [CrossRef]
65. Zhu, L.; Chen, L.; Zhao, D.; Zhou, J.; Zhang, W. Emotion recognition from Chinese speech for smart affective services using a
combination of SVM and DBN. Sensors 2017, 17, 1694. [CrossRef]
66. Kwon, S. A CNN-assisted enhanced audio signal processing for speech emotion recognition. Sensors 2019, 20, 183.
67. Lieskovská, E.; Jakubec, M.; Jarina, R.; Chmulík, M. A review on speech emotion recognition using deep learning and attention
mechanism. Electronics 2021, 10, 1163. [CrossRef]
68. Kwon, S. Att-Net: Enhanced emotion recognition system using lightweight self-attention module. Appl. Soft Comput. 2021,
102, 107101.
69. Chen, S.; Zhang, M.; Yang, X.; Zhao, Z.; Zou, T.; Sun, X. The impact of attention mechanisms on speech emotion recognition.
Sensors 2021, 21, 7530. [CrossRef] [PubMed]
70. Li, Y.; Zhao, T.; Kawahara, T. Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask
Learning. In Proceedings of the Interspeech 2019, Graz, Austria, 15–19 September 2019; pp. 2803–2807.
71. Yenigalla, P.; Kumar, A.; Tripathi, S.; Singh, C.; Kar, S.; Vepa, J. Speech Emotion Recognition Using Spectrogram Phoneme
Embedding. In Proceedings of the Interspeech 2018, Hyderabad, India, 2–6 September 2018; pp. 3688–3692.
72. Sarma, M.; Ghahremani, P.; Povey, D.; Goel, N.K.; Sarma, K.K.; Dehak, N. Emotion Identification from Raw Speech Signals Using
DNNs. In Proceedings of the Interspeech 2018, Hyderabad, India, 2–6 September 2018; pp. 3097–3101.
73. Mirsamadi, S.; Barsoum, E.; Zhang, C. Automatic speech emotion recognition using recurrent neural networks with local attention.
In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans,
LA, USA, 5–9 March 2017; pp. 2227–2231.
74. Issa, D.; Demirci, M.F.; Yazici, A. Speech emotion recognition with deep convolutional neural networks. Biomed. Signal Process.
Control 2020, 59, 101894. [CrossRef]
75. Carta, S.; Corriga, A.; Ferreira, A.; Podda, A.S.; Recupero, D.R. A multi-layer and multi-ensemble stock trader using deep learning
and deep reinforcement learning. Appl. Intell. 2021, 51, 889–905. [CrossRef]
76. Zhang, J.; Xing, L.; Tan, Z.; Wang, H.; Wang, K. Multi-head attention fusion networks for multi-modal speech emotion recognition.
Comput. Ind. Eng. 2022, 168, 108078. [CrossRef]
77. Demilie, W.B.; Salau, A.O. Detection of fake news and hate speech for Ethiopian languages: A systematic review of the approaches.
J. Big Data 2022, 9, 66. [CrossRef]
78. Bautista, J.L.; Lee, Y.K.; Shin, H.S. Speech Emotion Recognition Based on Parallel CNN-Attention Networks with Multi-Fold Data
Augmentation. Electronics 2022, 11, 3935. [CrossRef]
79. Abeje, B.T.; Salau, A.O.; Ebabu, H.A.; Ayalew, A.M. Comparative Analysis of Deep Learning Models for Aspect Level Amharic
News Sentiment Analysis. In Proceedings of the 2022 International Conference on Decision Aid Sciences and Applications
(DASA), Chiangrai, Thailand, 23–25 March 2022; pp. 1628–1633.
80. Kakuba, S.; Poulose, A.; Han, D.S. Deep Learning-Based Speech Emotion Recognition Using Multi-Level Fusion of Concurrent
Features. IEEE Access 2022, 10, 125538–125551. [CrossRef]
81. Tao, H.; Geng, L.; Shan, S.; Mai, J.; Fu, H. Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism
Fusion for Speech Emotion Recognition. Entropy 2022, 24, 1025. [CrossRef] [PubMed]
82. Kwon, S. MLT-DNet: Speech emotion recognition using 1D dilated CNN based on multi-learning trick approach. Expert Syst.
Appl. 2021, 167, 114177.
83. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Polosukhin, I. Attention is all you need. Adv. Neural
Inf. Process. Syst. 2017, 17, 1–11.
84. Livingstone, S.R.; Russo, F.A. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic,
multimodal set of facial and vocal expressions in North American English. PLoS ONE 2018, 13, e0196391. [CrossRef]
85. Busso, C.; Bulut, M.; Lee, C.C.; Kazemzadeh, A.; Mower, E.; Kim, S.; Narayanan, S.S. IEMOCAP: Interactive emotional dyadic
motion capture database. Lang. Resour. Eval. 2008, 42, 335–359. [CrossRef]
86. Zeng, Y.; Mao, H.; Peng, D.; Yi, Z. Spectrogram based multi-task audio classification. Multimed. Tools Appl. 2019, 78, 3705–3722.
[CrossRef]
87. Almadhor, A.; Irfan, R.; Gao, J.; Saleem, N.; Rauf, H.T.; Kadry, S. E2E-DASR: End-to-end deep learning-based dysarthric automatic
speech recognition. Expert Syst. Appl. 2023, 222, 119797. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.