Review
A Comprehensive Survey of Unmanned Aerial Vehicles
Detection and Classification Using Machine Learning
Approach: Challenges, Solutions, and Future Directions
Md Habibur Rahman 1,2 , Mohammad Abrar Shakil Sejan 1,2 , Md Abdul Aziz 1,2 , Rana Tabassum 1,2 ,
Jung-In Baik 1,2 and Hyoung-Kyu Song 1,2, *
Abstract: Autonomous unmanned aerial vehicles (UAVs) have several advantages in various fields,
including disaster relief, aerial photography and videography, mapping and surveying, farming, as
well as defense and public usage. However, there is a growing probability that UAVs could be misused
to breach vital locations such as airports and power plants without authorization, endangering public
safety. Because of this, it is critical to accurately and swiftly identify different types of UAVs to prevent
their misuse and prevent security issues arising from unauthorized access. In recent years, machine
learning (ML) algorithms have shown promise in automatically addressing the aforementioned
concerns and providing accurate detection and classification of UAVs across a broad range. This
technology is considered highly promising for UAV systems. In this survey, we describe the recent
use of various UAV detection and classification technologies based on ML and deep learning (DL)
algorithms. Four types of UAV detection and classification technologies based on ML are considered in
this survey: radio frequency-based UAV detection, visual data (images/video)-based UAV detection,
acoustic/sound-based UAV detection, and radar-based UAV detection. Additionally, this survey
report explores hybrid sensor- and reinforcement learning-based UAV detection and classification
using ML. Furthermore, we consider method challenges, solutions, and possible future research
directions for ML-based UAV detection. Moreover, the dataset information of UAV detection and
classification technologies is extensively explored. This investigation holds potential as a study
for current UAV detection and classification research, particularly for ML- and DL-based UAV
detection approaches.
Keywords: UAV detection and classification; machine learning-based detection; deep learning-based
detection
1. Introduction
Unmanned aerial vehicles (UAVs), sometimes referred to as drones, have garnered
significant attention in recent years. Through the use of a remote controller, UAVs can be
operated remotely, without an onboard pilot, from miles away. UAVs are utilized in combat,
surveillance, airstrikes, investigations, and various other operations [1]. In addition, UAVs
are useful instruments in various industries, and they are currently being used for a
wide range of purposes. For instance, authorities utilize UAVs in disaster prevention [2],
remote sensing [3], environmental monitoring [1], and so on. They are also employed
by companies like Amazon, UPS Inc., and others for product delivery [4]. Additionally,
UAVs play a crucial role in agriculture, aiding in crop observation [5] and the application
of pesticides and fertilizers [3]. Furthermore, emergency personnel, along with emergency
medical services and enthusiasts, utilize UAVs for tasks such as rescue operations, medical
assistance, and recreational imaging [1].
However, despite the emerging applications of UAVs, challenges regarding privacy
and safety have been raised in recent years by the use of UAV systems [6]. The intro-
duction of recreational UAVs into national airspace has sparked worries about unqualified
and unlicensed pilots entering forbidden areas and interfering with aircraft operations.
Inadequate rules when buying UAVs may be part of the problem. For instance, a national
defense aircraft was struck by a private UAV just over two years ago [7]. The use of UAVs
for unlawful monitoring and terrorist attacks is among the most worrisome issues [8]. To pre-
vent the aforementioned incidents, an anti-UAV technology that can identify, classify, and
neutralize unlicensed UAVs collecting data using various sensors is needed [9]. Recently,
for the classification and detection of UAVs, numerous studies have investigated ways to
identify UAVs utilizing a range of technological advances, such as thermal imaging, audio,
video, radio frequency (RF), and radar. Using these technologies, there are many traditional
methods to identify or detect unwanted UAVs, but most of the methods have failed to
provide an adequate prevention rate during the detection of UAVs.
In recent years, the fields of object detection [10], image segmentation [11,12], and
disease recognition [13] have undergone a dramatic transformation due to the emerging ad-
vantages of machine learning (ML) and deep learning (DL) approaches [14]. Consequently,
UAV detection [15] has gained popularity in the scientific community following the advent
of DL techniques. The emerging advantages of ML and DL for UAV detection include data
efficiency, decreased computational intensity, automatic feature learning, high-accuracy
UAV classification, and end-to-end learning capabilities. On the other hand, there are some
disadvantages of ML and DL, such as limited performance on more intricate UAV detection
tasks and DL models requiring large amounts of labeled data for training, which may be a
limitation in scenarios where obtaining labeled data is challenging or expensive. In [16],
the authors proposed a deep neural network (DNN) that classifies multirotor UAVs using
acoustic signature inputs. The primary focus of this research lies in ML-based detection
and classification methods. ML has demonstrated significant benefits in object detection
and classification across a range of domains due to its capacity to identify patterns without
the need for human intervention. Reducing the reliance on human intervention is desirable
for various reasons, including human limitations in identifying tiny or distant objects and
the potential for concentration deficits brought on by boredom or exhaustion. Instead, ML
can recognize patterns using paradigms that are entirely imperceptible to the human eye.
These include transmissions that are not detectable by human sensory systems, such as RF,
optical, and audio messages.
Given the aforementioned advances in object detection and classification using ML,
this review extensively studies UAV detection and classification in terms of challenges,
solutions, and future research directions, highlighting the advantages and limitations of
each method and describing the improvement pathway in detail. After that, a thorough
critical analysis of the state of the art is presented. In addition, an extensive review of
dataset information is provided for UAV detection and classification
technologies. In addition, reinforcement learning-based UAV detection and classification
with a detailed research direction are presented. Additionally, a review of hybrid sensor-
based UAV detection strategies provides detailed datasets and a clear research direction.
The most similar work to our proposed survey is in [17], where the authors present a report
with various literature references without in-depth analysis of each of the methods. In
our proposed survey, we include a detailed discussion regarding state-of-the-art literature,
and the important key difference from [17] is that our report provides detailed dataset
information for all the UAV detection and classification technologies using ML and DL
algorithms, which will be helpful for both advanced and beginner researchers in the UAV
detection field. In addition, RL-based UAV detection and classification with a detailed
research direction are presented in the proposed survey report.
In recent years, many surveys have been conducted on UAV systems, and most works
focus on object detection and classification using UAV-assisted images in the applications
of agricultural crop classification [18,19], vehicle detection [20,21], identification of plant
and crop diseases [22], forestry [23], crop disease detection [24], and so on. In contrast, this
survey report is focused on UAV or drone detection using five different technologies based
on ML and DL algorithms. A few survey reports have been published for UAV detection
and classification in the last few years, and the key differences between those reports and
our survey report are mentioned in Table 1.
Table 1. Contribution of this survey with other works for ML-based UAV classification and detection.
References Contribution
[17] Review on drone detection and classification using ML up to the year
2019. The study provides limited insights into the performance of original
detection approaches and references.
[25] Review on UAV-based communications using ML, focusing on resource
management, channel modeling, location, and security up to the year
2019.
[26] Review of the technical classification and implementation methods of
UAV detection and tracking in urban IoT environments, providing a
limited number of references covering up to the year 2023.
[27] Survey on DL-based UAV detection, with a focus on radar technologies
up to the year 2021.
[28] Survey on ML-based radar sensor networks for detecting and classifying
multirotor UAVs up to the year 2020.
[29] Survey on the detection of unauthorized UAVs up to the year 2022.
However, the study does not cover specific ML types and state-of-the-art
detection approaches.
[30] Review of drone detection strategies that emphasize the use of DL with
multisensor data up to the year 2019.
[31] Review of drone identification, neutralization, and detection, with less
emphasis on detection methods. The study primarily focuses on system
design from the regulatory viewpoint, excluding state-of-the-art detection
techniques and references covered up to the year 2021.
This survey Review on UAV detection and classification that provides an extensive
survey including suggested challenges, solutions, and future research di-
rections using ML (e.g., addressed technologies encompass radar, visual,
acoustic, and radio frequency sensing systems) up to the year 2023.
This study covers UAV detection by highlighting the advantages and
limitations of methods and the improvement pathway. After that, a thor-
ough critical analysis of the state of the art is presented (e.g., including
different methodologies for different technologies, performance accuracy
with different metric indexes, and machine learning model types). In
addition, an extensive review of dataset information is provided for UAV
detection and classification technologies (e.g., publicly available and own
experimental datasets with details such as classes, training, testing ratios,
and used experimental drones). In addition, reinforcement learning (RL)-
based UAV detection and classification with detailed research direction
are presented. Moreover, a review of hybrid sensor-based UAV detection
strategies provides detailed datasets and research direction.
In the remaining sections of this survey, the classification of four UAV detection techniques
with ML is described in Section 2. Figure 1 illustrates the organization of this paper in
detail. Finally, the conclusion and discussion are presented in Section 3.
Figure 2. The different categories of UAV classification and detection technologies and their corre-
sponding advantages and disadvantages.
Figure 3. The detection and classification mechanism of UAV based on RF signal analysis.
Table 2. Comparison summary of ML-based UAV classification and detection using RF technology.

Reference | Detection Target | Machine Learning Method | Performance | Model Types 1
[32] | UAV detection using RF | NN, ResNet50 | Accuracy: 95% | SL, DTL
[33] | UAV detection using RF | Extreme gradient boosting (XGBoost) | Accuracy: 99.6%, F1-score: 100% | SL
[34] | UAV detection and classification | CNN, logistic regression (LR), KNN | Accuracy: 100% for 2 classes and 98.0% for 10 classes | SL
[35] | UAV detection using RF | CNN | Accuracy: 92.5%, F1-score: 93.5% | SL
[36] | UAV detection using RF | Bayesian, SVM, MLP | Accuracy: 99%, Recall: 99.5% | SL
[37] | UAV detection using RF | ANN | Accuracy: 82% within 3 km distance | SL
[38] | UAV classification from raw RF fingerprints | Markov-based naïve Bayes detection, KNN, DA, SVM, NN | Accuracy: 95%, 96.84%, 88.15%, 58.49% with different models | SL
[39] | UAV controller detection from transmitted control RF | CNN | N/A | SL
[40] | Detection of UAV type and flight mode from raw RF signals | DNN | Accuracy: 99.7% for 2, 84.5% for 4, and 46.8% for 10 classes; F1-score: 99.5% for 2, 78.8% for 4, and 43.0% for 10 classes | SL
[41] | UAV detection using RF | End-to-end CNN model | Accuracy: 97.53%, Precision: 98.06%, Recall: 98.00%, F1-score: 98.00% | SL
[42] | UAV detection using RF | XGBoost, AdaBoost, decision tree, random forest, KNN, and MLP | Accuracy: 100%, 99.6%, and 99.3% for 2, 4, and 10 classes; F1-score: 100%, 99.6%, and 99.3% for 2, 4, and 10 classes | SL
[43] | Swarm of UAV detection using RF | PCA, ICA, UMAP, t-SNE, K-means, mean shift, and X-means | Accuracy: 99% for the VRF dataset, 100% for the XBee dataset, and 95% for the Matrice dataset | USL
[44] | UAV detection using RF | YOLO-lite, Tiny-YOLOv2, DRNN | Accuracy of YOLO-lite, Tiny-YOLOv2, and DRNN: 97%, 98%, and 99% | SL
[45] | UAV detection using RF | Residual CNN | Accuracy: 99%, F1-score: 97.3 to 99.7% | SL
[46] | UAV detection using RF | Hierarchical ML (KNN and XGBoost) | Accuracy: 99.20%, Precision: 99.11%, F1-score: 99.10% | SL
[47] | UAV detection using RF | CNN | Accuracy: 99.80%, Precision: 99.85%, Recall: 99.55%, F1-score: 99.70% | SL
[48] | UAV detection using RF | KNN | Accuracy: 98.13% | SL
[49] | UAV detection using RF | FNN, CNN | Accuracy: 92.02%, Precision: 94.33%, Recall: 94.13%, F1-score: 94.14% | SL
[50] | UAV detection using RF | Hierarchical ML (CNN, SVM) | Accuracy: 99% | SL
[51] | UAV detection using RF signatures | Autoencoder (AE), LSTM, CNN, and CNN-LSTM hybrid model | Accuracy: 88.02%, Recall: 99.01%, F1-score: 85.37% | SL
[52] | UAV detection using RF signatures | Power spectrum density (PSD) features with a DNN model | Accuracy: 89% | SL
[53] | UAV detection using RF | Multiscale 1D CNN | Accuracy: 99.89% for 2 classes, 98.56% for 4 classes, 87.67% for 10 classes; F1-score: 99.96% for 2 classes, 98.87% for 4 classes, 86.82% for 10 classes | SL
[54] | UAV detection using RF | MLP-based model (URANUS) | Accuracy: 90.0% | SL
[55] | UAV detection using RF | Hybrid (1D CNN + XGBoost classifier) | Accuracy: 100% for 2 classes, 99.82% for 4 classes, 99.51% for 10 classes | SL
[56] | UAV detection using RF | Residual network-based autoencoder (DE-FEND) | Accuracy: 100% | SSL, USL
1 SL = supervised learning, USL = unsupervised learning, DTL = deep transfer learning, SSL = semi-supervised learning.
In recent years, the use of RF-based UAV detection and classification has dramatically
increased. In the state of the art, many works have been completed using RF technology
for UAV detection and classification [42,57,63–65]. A DL approach based on RF was
proposed in [63] to detect multiple UAVs. To complete the objectives of detection and
classification, the authors suggested the use of a supervised DL model. For RF signal
preparation, they employed short-term Fourier transform (STFT). The higher efficiency of
their approach was largely due to the preparation of the data, which was first conducted
by using STFT. The authors in [64] introduced a model named RF-UAVNet, which was
designed with a convolutional network for UAV tracking systems that used RF signals
to recognize and classify the UAVs. In order to minimize the network dimensions and
operational expense, the recommended setup uses clustered convolutional layer structures.
This research took advantage of the publicly accessible dataset DroneRF [57] for RF-based
UAV detection techniques.
The authors in [34] assessed the impact of real-world Bluetooth and Wi-Fi signal inter-
ference on UAV detection and classification by employing convolutional neural network
(CNN) feature extraction and the machine learning classifiers logistic regression and k-nearest
neighbor (kNN). They used graphical representations in both the time and frequency
domains to evaluate two-class, four-class, and ten-class flying mode classification.
In a separate study, the authors in [35] proposed a drone detection system that detects
drones and recognizes various drone types. They designed a network structure using
multiple 1-dimensional layers of a sequential CNN to progressively learn the feature map
of RF signals of different sizes obtained from drones. The suggested CNN model was
trained using the DroneRF dataset, comprising three distinct drone RF signals along with
background noise.
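As a concrete illustration of this kind of sequential 1D CNN, the following PyTorch sketch maps a raw RF segment to class logits; the layer widths, kernel sizes, and four-class output (background noise plus three drone signal types, mirroring the DroneRF setup) are illustrative assumptions, not the exact architecture of [35].

```python
import torch
import torch.nn as nn

class RF1DCNN(nn.Module):
    """Minimal 1D CNN mapping a raw RF segment to class logits.

    Layer widths and kernel sizes are illustrative; num_classes=4
    assumes background noise plus three drone signal types.
    """
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # length-independent pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, num_samples) raw RF amplitude segment
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = RF1DCNN()
logits = model(torch.randn(8, 1, 10_000))  # 8 dummy RF segments
print(logits.shape)                        # torch.Size([8, 4])
```

The adaptive pooling layer makes the sketch independent of segment length, which is convenient when RF captures of different sizes are fed to the same network.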
Another investigation by the authors in [36] involved comparing three distinct classifi-
cation methods to identify the presence of airborne users in a network. These algorithms
utilized standard long-term evolution (LTE) metrics from the user equipment as input and
were evaluated using data collected from a car and a drone in flight equipped with mobile
phones. The results were analyzed, emphasizing the advantages and disadvantages of each
approach concerning various use cases and the trade-off between sensitivity and specificity.
Furthermore, in [37], the researchers explored the use of artificial neural networks
(ANNs) for feature extraction and classification from RF signals for UAV identification. This
study distinguished itself by employing the UAV communication signal as an identification
marker. Moreover, the research creatively extracted the slope, kurtosis, and skewness
of UAV signals in the frequency domain. Additionally, [38] proposed the detection and
classification of micro-UAVs using machine learning based on RF fingerprints of the signals
transmitted from the controller to the micro-UAV. During the detection phase, raw signals
were divided into frames and converted into the wavelet domain to reduce data processing
and eliminate bias from the signals. The existence of a UAV in each frame was detected
using a naïve Bayes approach based on independently constructed Markov models for
UAV and non-UAV classes.
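The frame-and-transform pipeline of [38] can be approximated as follows; this sketch uses PyWavelets subband energies as frame features and scikit-learn's Gaussian naïve Bayes as a stand-in for the paper's Markov-model-based naïve Bayes, and the frame length, wavelet choice, and labels are all illustrative.

```python
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def wavelet_features(frame: np.ndarray, wavelet: str = "db4",
                     level: int = 3) -> np.ndarray:
    """Frame -> wavelet-domain summary (energy per subband)."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def frame_signal(x: np.ndarray, frame_len: int = 1024) -> np.ndarray:
    """Cut a long raw capture into fixed-length frames."""
    n = len(x) // frame_len
    return x[: n * frame_len].reshape(n, frame_len)

rng = np.random.default_rng(0)
capture = rng.standard_normal(100 * 1024)   # stand-in for a raw RF capture
X = np.array([wavelet_features(f) for f in frame_signal(capture)])
y = rng.integers(0, 2, size=len(X))         # dummy UAV / non-UAV labels

# Gaussian NB stands in for the Markov-model-based NB detector in [38].
clf = GaussianNB().fit(X, y)
print(clf.predict(X[:5]))
```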
The authors in [39] described their efforts to locate drone controllers using RF signals.
A signal spectrum monitor was used as an RF sensor array. From the sensor’s output, a
CNN was trained to anticipate the drone controller’s bearing on the sensor. By position-
ing two or more sensors at suitable distances apart, it became possible to determine the
controllers’ positions using these bearings.
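Localizing a controller from two bearings reduces to intersecting two rays, which the following minimal sketch illustrates; the sensor positions and bearings are made-up values, not measurements from [39].

```python
import numpy as np

def triangulate(p1, b1, p2, b2):
    """Intersect two bearing rays to localize an RF emitter.

    p1, p2: sensor positions (x, y); b1, b2: bearings in radians,
    measured counterclockwise from the x-axis. Solves
    p1 + t1*d1 = p2 + t2*d2 for the crossing point.
    """
    d1 = np.array([np.cos(b1), np.sin(b1)])
    d2 = np.array([np.cos(b2), np.sin(b2)])
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + t[0] * d1

# Controller at (50, 80): one sensor at the origin, one at (100, 0).
est = triangulate((0, 0), np.arctan2(80, 50),
                  (100, 0), np.arctan2(80, -50))
print(est)   # ~[50. 80.]
```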
In [40], the authors proposed a drone detection method aimed at creating a database for
RF signals emitted by different drones operating in various flight modes. They considered
multiple flight modes in simulations and utilized the RF database to develop algorithms
that detect and identify drone intrusions. Three DNNs were employed to identify drone
locations, types, and flight modes.
For recognizing and identifying UAVs based on their RF signature, [41] suggested an
end-to-end DL model. Different from previous research, this study employed multiscale
feature extraction methods without human intervention to extract enhanced features aiding
the model in achieving strong signal generalization capabilities and reducing computing
time for decision making.
The study in [42] utilized a compressed sensing technique instead of the conventional
sampling theorem for data sampling. The researchers employed a multichannel random
demodulator to sample the signal and proposed a multistage DL-based method to detect
and classify UAVs, capitalizing on variations in communication signals between drones
and controllers under changing conditions. Additionally, the DroneRF dataset was utilized
in [42]; the UAV was first identified by the DNN, and then it was further classified by a
CNN model. Nevertheless, it was not feasible to take into account additional signals that
appeared in the 2.4 GHz range when utilizing the DroneRF dataset [65].
In [43], the authors proposed a novel method based on RF signal analysis and multiple
ML techniques for drone swarm characterization and identification. They provided an
unsupervised strategy for drone swarm characterization using RF features extracted from
the captured signals.
2.1.1. Challenges and Solutions of RF-Based UAV Detection and Classification Using ML
• RF signal variability: Diverse RF signal characteristics arise from variations in UAV
models, communication protocols, and flight dynamics. Develop robust feature
extraction methods that generalize across UAV models, protocols, and flight conditions.
Advances in DL have enabled faster and more accurate object detection using visual input, especially for visual-based
UAV detection and classification. The basic detection and classification of UAVs based
on image or video (visual data) using the ML algorithm is demonstrated in Figure 4. The
summary of related research on visual-based methods using ML for UAV detection and
classification is shown in Table 4. Furthermore, the dataset information of the current
research on visual-based methods using ML for UAV detection and classification is shown
in Table 5.
Figure 4. The detection and classification mechanism of UAV based on visual data analysis.
Table 4. Comparison summary of ML-based UAV classification and detection using visual data.

Reference | Detection Target | Machine Learning Method | Performance | Model Types 1
[73] | Loaded and unloaded UAV detection using images | YOLOv2 | Accuracy: 80.34%, mean average precision (mAP): 74.97% | SL
[74] | Small UAV detection using images | Pruned YOLOv4 | Precision: 30.7%, Recall: 72.6%, mAP: 90.5%, F1-score: 45.2% | SL, DTL
[75] | Small UAV detection using a static wide-angle camera and a lower-angle camera | Lightweight YOLOv3 | Can detect multiple UAVs | SL
[76] | Flying UAV detection using fisheye camera images | CNN, SVM, and KNN | Accuracy: 93%, 88%, and 80%; Precision: 96%, 86%, and 74%; Recall: 91%, 91%, and 94% for CNN, SVM, and KNN | SL
[77] | UAV detection using RGB images | YOLOv3 | Precision: 95.10%, Recall: 99.01%, mAP: 74% | SL
[78] | Low-altitude UAV detection | YOLOv4, YOLOv3, and SSD | Accuracy: 89.32%, 89.14%, and 79.52%; Recall: 92.48%, 89.27%, and 85.31%; mAP: 89.32%, 89.14%, and 76.84% for YOLOv4, YOLOv3, and SSD | SL
[79] | UAV detection using images | Transfer learning with YOLOv3 | Confidence rate (CR) within 60% to 100% and average CR of 88.9% | SL
[80] | UAV tracking using visual data | SSD, YOLOv3, and Faster R-CNN | mAP: 98% | SL
[81] | UAV detection using images | YOLOv4 | Precision: 0.95, Recall: 0.68, F1-score: 0.79, mAP: 74.36% | SL
[82] | UAV detection using images | Fine-tuned YOLOv2 | Precision and recall of 0.90 | SL
[83] | UAV detection using images | VGG16 with Faster R-CNN | mAP: 0.66 | SL, DTL
[84] | UAV detection using video images | YOLOv2 and Darknet19 | Precision: 88.35%, Recall: 85.44%, F1-score: 73.3% | SL, DTL
[85] | UAV detection using images | Faster R-CNN | Precision-recall: 0.93 | SL
[86] | UAV detection using images | RetinaNet, SSD, YOLOv3, FPN, Faster R-CNN, RefineDet, Grid R-CNN, and Cascade R-CNN | Grid R-CNN achieves the best accuracy (82.4%) among all detectors, while RefineDet reaches 69.5%. Among two-stage models, Cascade R-CNN achieved the best accuracy (79.4%), whereas Faster R-CNN achieved the worst (70.5%). For one-stage models, SSD512 (78.7%) and RetinaNet (77.9%) both perform well, whereas YOLOv3 achieved only 72.3% | SL, DTL
[87] | UAV detection using images | YOLOv4 | Accuracy: 83%, Recall: 84%, mAP: 84%, F1-score: 83%, Intersection over Union (IoU): 81% | SL
[88] | UAV detection using image data | YOLOv5 and YOLOv7 | Precision: 95%, Recall: 95.6%, mAP: 96.7% | SL
[89] | UAV detection using image data | YOLOv4 | Average precision: 34.63% | SL
[90] | UAV vs. bird detection using image data | Cascade R-CNN, YOLOv5, and YOLOv3 | Detection results of 79.8%, 66.8%, and 80.0% for Cascade R-CNN, YOLOv5, and YOLOv3 | SL
[91] | UAV detection using image data | Deep clustering (YOLOv8 + t-SNE) | Accuracy: 100% | USL
1 SL = supervised learning, USL = unsupervised learning, DTL = deep transfer learning.

Table 5. Datasets information of ML-based UAV classification and detection using visual data.
Table 5. Datasets information of ML-based UAV classification and detection using visual data.
and the ZF model as transfer learning models. The Nvidia Quadro P6000 GPU was used
for the training, and a batch size of 64 was used with a fixed learning rate of 0.0001. They
used the Bird vs. UAV dataset, which is made up of five MPEG4-coded videos with a total of
2727 frames at 1920 × 1080 pixel resolution, shot during various sessions.
The study in [87] proposed a novel DL-based technique for effectively identifying
and detecting two different types of drones and distinguishing them from birds. When the suggested method
was evaluated using a pre-existing image dataset, it outperformed the detection systems
currently utilized in the existing literature. Moreover, due to their similar appearance and
behavior, drones and birds were often mistaken for each other. The proposed technique
can discern and differentiate between two varieties of drones, distinguishing them from
birds. Additionally, it can determine the presence of drones in a given location.
To detect small UAVs, [88] utilized various iterations of state-of-the-art object detection
models (like YOLO models) using computer vision and DL techniques. They proposed
different image-processing approaches to enhance the accuracy of tiny UAV detection,
resulting in significant performance gains.
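For orientation, the following sketch shows how such a detector is typically run with the ultralytics package; the weights file drone.pt and the input frame sky_frame.jpg are hypothetical, standing in for a model fine-tuned on a drone dataset, and this is not the exact tooling reported in [88].

```python
from ultralytics import YOLO

# Hypothetical weights fine-tuned on a drone dataset; one could train
# with, e.g., YOLO("yolov5su.pt").train(data="drones.yaml", epochs=100).
model = YOLO("drone.pt")

results = model("sky_frame.jpg", conf=0.25)   # one video frame or image
for box in results[0].boxes:
    cls_id = int(box.cls)                     # predicted class index
    score = float(box.conf)                   # detection confidence
    x1, y1, x2, y2 = box.xyxy[0].tolist()     # pixel-space bounding box
    print(model.names[cls_id], f"{score:.2f}", (x1, y1, x2, y2))
```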
2.2.1. Challenges and Solutions of Visual Data-Based UAV Detection and Classification
Using ML
• Variability in visual data: Visual data captured by cameras vary due to factors like
lighting conditions, weather, angles, and distances, making consistent detection and
classification challenging. Employ robust preprocessing techniques (e.g., normalization
and augmentation) to standardize and enhance visual data quality; a minimal augmentation sketch follows this list.
• Limited annotated datasets: The lack of diverse and well-annotated datasets specific
to UAVs hampers the training of accurate ML models. Develop and curate com-
prehensive datasets encompassing various UAV types and scenarios for effective
model training.
• Real-time processing: Processing visual data in real time for swift and accurate UAV
detection and classification. Optimize algorithms and hardware configurations to
ensure real-time processing capabilities, potentially leveraging GPU acceleration or
edge computing.
• Scale and complexity: Scaling detection and classification algorithms to handle com-
plex scenes, multiple UAVs, or crowded environments. Explore advanced DL ar-
chitectures capable of handling complex visual scenes for improved detection and
classification accuracy.
• Adaptability to environmental changes: Adapting to environmental changes (e.g.,
varying weather conditions) affecting visual data quality and system performance.
Develop adaptive algorithms capable of adjusting to environmental variations for
robust and reliable detection.
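A minimal torchvision sketch of such a preprocessing/augmentation pipeline is shown below; the specific transforms and the normalization statistics (ImageNet values) are assumptions, not prescriptions from the surveyed works.

```python
import torchvision.transforms as T

# Illustrative pipeline: geometric and photometric augmentations to
# mimic variation in lighting, weather, and camera angle, followed by
# normalization so all frames share a common input distribution.
train_tf = T.Compose([
    T.Resize((640, 640)),
    T.ColorJitter(brightness=0.4, contrast=0.4),  # lighting variation
    T.RandomHorizontalFlip(),
    T.RandomRotation(degrees=10),                 # camera-angle variation
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),       # assumed ImageNet stats
])
```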
2.2.2. Future Directions of Visual Data-Based UAV Detection and Classification Using ML
• Multimodal integration: Integrate visual data with other sensor modalities (e.g., RF
or LiDAR) for more comprehensive and reliable UAV detection systems [107].
• Semantic understanding and contextual information: Incorporate semantic under-
standing and contextual information in visual analysis to improve classification accu-
racy [108,109].
• Ethical and privacy concerns: Address privacy considerations by implementing
privacy-preserving techniques without compromising detection accuracy [110].
• Interpretability and explainability: Develop methods for explaining and interpreting
model decisions, enhancing trust and transparency in visual-based UAV detection
systems [111].
The capability to identify a UAV based on its auditory fingerprint, or even determine
its specific type, would be highly valuable. Figure 5 illustrates an example of a machine
learning-based approach for UAV detection and classification using the acoustic method.
Furthermore, Table 6 provides a summary of related research on acoustic-based methods
employing machine learning for UAV detection and classification. In addition, the dataset
information of the current research on acoustic-based methods using ML for UAV detection
and classification is shown in Table 7. In the realm of audio-based UAV identification, DL
techniques are commonly employed to extract features and achieve optimal UAV detection
performance. Recent studies [112–118] also demonstrated the efficacy of DL models in
extracting characteristics from UAV audio signals for UAV identification.
Figure 5. The detection and classification mechanism of UAV based on acoustic data analysis.
Table 6. Comparison summary of ML-based UAV classification and detection using acoustic data.

Reference | Detection Target | Machine Learning Method | Performance | Model Types 1
[119] | UAV detection using acoustic fingerprints | CNN, RNN, and CRNN | Accuracy: 96.38%, 75.00%, 94.72%; Precision: 96.24%, 75.92%, 95.02%; Recall: 95.60%, 68.01%, 93.08%; F1-score: 95.90%, 68.38%, 93.93% for CNN, RNN, and CRNN | SL
[112] | UAV detection using audio fingerprints | MFCC with CNN | Accuracy: 94.5% | SL
[113] | Amateur UAV detection using acoustics | LWCNN + SVM | Accuracy: 98.35%, Precision: 98.50%, Recall: 98.20%, F1-score: 98.35% | SL
[115] | UAV detection using acoustics | SVM | Accuracy: 97.8%, Precision: 98.3% | SL
[120] | Amateur UAV detection using acoustics | FFT and KNN method | Precision: 83.0% | SL
[121] | Amateur UAV detection using sound | MFCC and LPCC with SVM | Accuracy: 97.0%, Recall: 100% | SL
[114] | UAV classification using sound | GMM, CNN, and RNN | Accuracy: 0.8109, 0.5915, 0.6932 for RNN, CNN, GMM; Precision and recall of RNN: 0.7953 and 0.8066; Precision of CNN and GMM: 0.5346 and 0.9031; Recall of CNN and GMM: 0.8019 and 0.3683; F1-score: RNN > CNN > GMM (0.8009 > 0.6415 > 0.5232) | SL
[122] | UAV classification using acoustic STFT features | CNN | Accuracy: 98.97% | SL
[117] | UAV detection using multiple acoustic nodes | SVM, CNN | F1-score of SVM with STFT and MFCC: 78.70% and 77.90% | SL
[16] | UAV detection using acoustics | SVM, CNN | Accuracy: 94.725%, F1-score: 94.815% | SL
[118] | UAV detection using acoustics | Plotted image machine learning (PIL) and KNN | Accuracy of PIL and KNN: 83% and 61% | SL
[123] | UAV detection using acoustics | GAN model (to generate an artificial UAV audio dataset) + CNN, RNN, and CRNN | Accuracy: 0.9564, Precision: 0.9783, Recall: 0.9738, F1-score: 0.9753 | SL
[124] | UAV detection using acoustic signature | MFCC features with SVM | Accuracy: 99.9%, Recall: 99.8%, Precision: 100% | SL
[125] | UAV detection using acoustic signature | SVM | Accuracy: 95.6% | SL
[126] | UAV detection using acoustics | SVM | Accuracy: 93.53%, Recall: 90.95%, F1-score: 93.19% | SL
[127] | UAV detection using acoustic signal | MFCC with concurrent neural networks (CoNN) | Accuracy: 94.95%, Precision: 93.00%, Recall: 89.00%, F1-score: 91.00% | SL
[128] | UAV detection using acoustic data | MFCC with multilayer perceptron (MLP) and balanced random forest (BRF) algorithms | Accuracy of MLP and BRF: 0.83 and 0.75 | SL
[129] | UAV detection using acoustic features | MFCC with CNN model | Accuracy: 80.00%, Precision: 90.9%, Recall: 66.7%, F1-score: 76.9% | SL
[130] | UAV detection using acoustic features | SVM | Accuracy: 86.7% | SL
[131] | UAV detection using sound signals | MFCC, Mel, Contrast, Chroma, and Tonnetz features with SVM, Gaussian naïve Bayes (GNB), KNN, and NN | Accuracy: 100%, 95.9%, 98.9%, 99.7%; Precision: 100%, 95.3%, 99.5%, 99.5%; Recall: 100%, 96.8%, 98.4%, 100%; F1-score: 100%, 96.0%, 98.9%, 99.7% for SVM, GNB, KNN, and NN | SL
[132] | UAV detection using acoustics | Lightweight CNN | Accuracy: 93.33% | SL
[133] | UAV detection using acoustics | Linear, MLP, RBFN, SVM, and random forest | Detection probability of error between 20% and 30% at 1 m range | SL
[134] | UAV detection using acoustics | Transformer-based CNN model | F1-score of 88.4% | SL
1 SL = supervised learning, DTL = deep transfer learning.
Table 7. Datasets information of ML-based UAV classification and detection using acoustic data.
The authors in [119] created spectrograms from audio samples and fed them into DL
models. The system extracted various characteristics from the spectrograms generated by
the audio sources and used them to train DL models. Additionally, the authors in [113]
employed an STFT to convert the audio signal into the Mel spectrum, creating a visual rep-
resentation. This image was then input into a specifically designed lightweight CNN (LWCNN)
for identifying signal attributes and UAV detection.
To categorize the auditory signals as suggestive of UAV activity or not, the authors
in [112] employed Log Mel spectrograms and Mel frequency cepstral coefficients (MFCCs)
as inputs and fed them to the CNN model. Additionally, for amateur UAV
identification, the authors in [115] suggested a method that combines ML techniques with
acoustic inputs. Nevertheless, the distinction between things that can be mistaken for
ambient noise and other UAVs was not taken into account in their investigation.
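A log-Mel/MFCC front end of the kind used in [112] can be sketched with librosa as follows; the file name clip.wav is hypothetical, and the sampling rate, FFT, hop, and Mel-band sizes are illustrative choices rather than the paper's settings.

```python
import librosa
import numpy as np

# Log-Mel spectrogram as a 2D "image" input for a CNN, plus MFCCs.
y, sr = librosa.load("clip.wav", sr=16_000)      # hypothetical audio clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=512, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)   # shape: (64, num_frames)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # cepstral features
```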
A KNN-based method and Fast Fourier Transform (FFT) were presented in the
study [120] for UAV detection using auditory inputs. Using SVM and KNN based on
the auditory inputs, the signals were classified to determine whether an amateur UAV was
present. An amateur UAV was detected based on the similarities among the acquired
spectral pictures; nonetheless, the precision of this technique reached only
83%. In order to discriminate between the sounds of objects such as UAVs, birds, aircraft,
and storms, the authors in [121] suggested an ML-based UAV identification system. The
MFCC and linear predictive cepstral coefficients (LPCC) feature extraction techniques are
used to extract the required characteristics from UAV sound. Then, SVM with different
kernels is used to precisely identify these sounds after feature extraction. The findings of
the experiment confirm that the SVM cubic kernel with MFCC performs better for UAV
identification than the LPCC approach, with an accuracy of about 97%.
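The MFCC-plus-cubic-SVM pipeline of [121] corresponds roughly to the following scikit-learn sketch; the file paths, labels, and MFCC settings are hypothetical placeholders.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def clip_features(path: str) -> np.ndarray:
    """Mean MFCC vector per clip; a simple stand-in for the feature
    extraction in [121] (their exact MFCC settings are not given here)."""
    y, sr = librosa.load(path, sr=22_050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical clips; classes in [121] include UAV, bird, aircraft, storm.
X = np.array([clip_features(p) for p in ["uav1.wav", "bird1.wav"]])
y = np.array(["uav", "bird"])
clf = SVC(kernel="poly", degree=3).fit(X, y)   # "cubic kernel" SVM
```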
The authors in [114] proposed a method for identifying the presence of a UAV within a
150-meter radius. They suggested employing classification techniques such as the Gaussian
mixture model (GMM), CNN, and RNN. To address the scarcity of acoustic data from
UAV flights, the authors recommended creating datasets by blending UAV sounds with
other ambient noises. One intriguing aspect of their research involves the use of diverse
UAV models for training and evaluating the classifiers. Their findings revealed that the
RNN classifier exhibited the highest performance at 0.8109, followed by the GMM model
at 0.6932, and the CNN model at 0.5915. However, in scenarios involving previously
unseen data, the accuracy of all the predictors dropped significantly.
To produce 2-dimensional (2D) pictures from UAV audio data, the authors in [122]
suggested a normalized STFT for UAV detection. Firstly, the audio stream was split
into 50% overlapping 20 ms pieces. After that, the normalized STFT was fed as input
to a purpose-built CNN. The dataset included outdoor recordings of hovering DJI
Phantom 3 and Phantom 4 UAVs and contained 41,958 non-UAV frames and
68,931 UAV audio frames.
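The 20 ms, 50%-overlap STFT front end described above maps directly onto scipy.signal.stft, as in this sketch; the sampling rate and the synthetic audio are assumptions.

```python
import numpy as np
from scipy.signal import stft

fs = 44_100                      # assumed microphone sampling rate
nperseg = int(0.020 * fs)        # 20 ms windows, as in [122]
noverlap = nperseg // 2          # 50% overlap

audio = np.random.randn(fs)      # stand-in for 1 s of microphone data
f, t, Z = stft(audio, fs=fs, nperseg=nperseg, noverlap=noverlap)
S = np.abs(Z)
S /= S.max()                     # normalized STFT magnitude fed to the CNN
print(S.shape)                   # (freq_bins, time_frames)
```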
In [123], the authors provided a hybrid drone acoustic dataset, combining artificially
generated drone audio samples and recorded drone audio clips using GAN, a cutting-
edge DL technique. They explored the efficacy of drone audio in conjunction with three
distinct DL algorithms (CNN, RNN, and CRNN) for drone detection and identification and
investigated the impact of their suggested hybrid dataset on drone detection.
The authors in [124] proposed an effective drone detection technique based on the audio
signature of drones. To identify the optimal acoustic descriptor for drone iden-
tification, five distinct aspects were examined and contrasted. These included MFCC,
Gammatone cepstral coefficients (GaCC), linear prediction coefficients, spectral roll-off,
and zero-crossing rate as chosen features. Several SVM classifier models were trained and
tested to assess the individual feature performance for effective drone identification. This
was completed using 10-fold and 20% data holdout cross-validation procedures on a large
heterogeneous database. The experimental outcome indicated that GaCCs were the most
effective features for acoustic drone detection.
In addition, additive white Gaussian noise (AWGN) was added to the dataset before conducting the testing. With a
detection rate (DR) of 98.97% and a false alarm rate (FAR) of 1.28, the best results were
obtained when training the CNN network with 100 epochs and low SNR ranges.
In [117], a method was proposed to optimize numerous acoustic nodes for extracting
STFT characteristics and MFCC features. Subsequently, the extracted characteristics dataset
was used to train two different types of supervised classifiers: CNN and SVM. In the case
of the CNN model, the audio signal was encoded as 2D images, incorporating dropout
and pooling layers alongside two fully connected and two convolution layers. In the initial
instance, the UAV operated at a maximum range of 20 m, hovering between 0 and 10 m
above the six-node acoustic setup. The Parrot AR Drone 2.0 was one of the UAVs that
was put to the test. Numerous tests were carried out, and the outcomes show that the
combination of SVM and STFT characteristics produced the best outcomes, as expressed in
terms of color maps.
In addition, the authors in [16] explored the use of DL techniques for identifying UAVs
using acoustic data. They employed Mel spectrograms as input features to train DNN
models. Upon comparison with RNNs and convolutional recurrent neural networks (CRNNs), it was demonstrated
that CNNs exhibited superior performance. Furthermore, an ensemble of DNNs was
utilized to assess the final fusion techniques. This ensemble outperformed single models,
with the weighted soft voting process yielding the highest average precision of 94.725%.
In order to differentiate between the DJI Phantom 1 and 2 models, the authors in [118]
suggested KNN classifier techniques in conjunction with correlation analysis and spectrum
images derived from the audio data. They collected ambient sound from a YouTube
video, as well as various sound signals from both indoor settings (without propellers) and
outdoor environments, including a drone-free outdoor setting. Each sound was recorded
and subsequently divided into one-second frames. By utilizing image correlation methods,
they achieved an accuracy of 83%, while the KNN classifier yielded an accuracy of 61%.
Figure 6. The detection and classification mechanism of UAV based on radar data analysis.
Table 8. Comparison summary of ML-based UAV classification and detection using radar data.
(radar with digital array receiver) dataset was developed by the microwave and radar
departments. The radar has a bandwidth of 500 MHz and operates at a base frequency of
8.75 GHz using frequency-modulated continuous-wave technology.
In the CNN-32DC, the suggested CNN exhibits variation in terms of the number of filters,
combination layers, and the extraction of feature blocks. The selection process aimed to
achieve the most accurate result, which was then compared to various ML and classification
methods. The CNN-32DC demonstrates higher accuracy compared to similar networks
while requiring less computation time.
Table 9. Datasets information of ML-based UAV classification and detection using radar data.
In [148], the authors proposed a CNN model with a DL foundation that incorporates
MDSs, extensively employed in UAV detection applications. UAV radar returns and
their associated micro-Doppler fingerprints are often complex-valued. However, CNNs
typically neglect the phase component of these micro-Doppler signals, focusing solely on
the magnitude. Yet, crucial information that could enhance the accuracy of UAV detection
lies within this phase component. Therefore, this study introduced a unique complex-
valued CNN that considers both the phase and magnitude components of radar returns.
Furthermore, this research assessed the effectiveness of the proposed model using radar
returns with varying sampling frequencies and durations. Additionally, a comparison
was conducted regarding the model’s performance in the presence of noise. The complex-
valued CNN model suggested in this study demonstrated the highest detection precision,
achieving an impressive 93.80% accuracy, at a sampling rate of 16,000 Hz and a duration of
0.01 s. This indicates that the suggested model can effectively identify UAVs even when
they appear on the radar for very brief periods.
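While [148] builds a genuinely complex-valued CNN, a common simplification that still preserves the phase information is to feed magnitude and phase as two real channels of an ordinary CNN, as in this PyTorch sketch; all layer sizes are illustrative, and only the 16 kHz/0.01 s segment size follows the paper.

```python
import torch
import torch.nn as nn

class MagPhaseCNN(nn.Module):
    """Two-channel CNN over |z| and arg(z) of complex radar returns.

    A real-valued simplification of the complex-valued CNN in [148];
    the key point is that the phase channel is not discarded.
    """
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        # iq: (batch, num_samples) complex-valued radar returns
        x = torch.stack([iq.abs(), iq.angle()], dim=1)  # keep the phase
        return self.net(x)

iq = torch.randn(4, 160, dtype=torch.cfloat)  # 0.01 s at 16 kHz, as in [148]
print(MagPhaseCNN()(iq).shape)                # torch.Size([4, 2])
```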
According to the study in [149], the authors proposed a novel lightweight DCNN
model called “DIAT-RadSATNet” for precise identification and classification of small
unmanned aerial vehicles (SUAVs) using the synthesis of micro-Doppler signals. The
design and testing of DIAT-RadSATNet utilized an open-field, continuous-wave (CW)
radar-based dataset of MDS recorded at 10 GHz. With 40 layers, 2.21 MB of memory,
0.59 G FLOPs, 0.45 million trainable parameters, and a computation time of 0.21 s, the
DIAT-RadSATNet module is notably lightweight. In tests on unseen open-field datasets,
“DIAT-RadSATNet” achieved detection and classification accuracies of 97.1% and 97.3%,
respectively.
In [150], the authors proposed a novel MDS-based approach, termed MDSUS, aimed at
tackling the detection, classification, and localization (including angle of arrival calculation)
of small UAVs. The synergistic utilization of a long short-term memory (LSTM) neural
network and the empirical mode decomposition (EMD) methodology effectively addressed
the blurring issue encountered in MDS within the low-frequency band. This approach
enables the monitoring of small UAVs by leveraging attributes extracted from the MDS.
In both short- and long-distance experiments, the LSTM neural network outperforms its
two main rivals, namely CNN and SVM. Notably, precision is enhanced by 1.3% and
1.2% in the short- and long-distance experiments, respectively, when compared to the
peak performance of the competing models, resulting in accuracies of 93.9% and 88.7%,
respectively.
In [151], the authors employed a frequency-modulated continuous wave (FMCW)
radar to generate a collection of micro-Doppler images, measuring dimensions of [3 × 3500].
These images corresponded to three different UAV models: DJI Inspire-1, DJI Inspire-2, and
DJI Spark. Subsequently, the authors proposed a CNN architecture for the identification and
categorization of these images. However, their research encompassed only one category
class, and the maximum operational range of the targets was 412 meters. As a result, they
were constrained in the number of available train/test samples for each class. In [152], the
authors designed a three-layer CNN architecture utilizing a generated micro-Doppler
image collection of a DJI Phantom-3 UAV, which measured dimensions of [1 × 11,000]. The
time–frequency (T–F) images were captured using a pulse-Doppler radar operating in the
X-band with a 20 MHz bandwidth. To ensure an adequate number of train/test samples
for their study, the authors combined simulated and experimental data.
The authors in [153] utilized a multistatic antenna array comprising one Tx/Rx and
two Rx arrays to independently acquire matching MDS signatures, measuring [6 × 1036],
while operating a DJI Phantom-vision 2+ UAV in two modes: hovering and flying. For
categorization, they employed a pre-trained AlexNet model. In [154], the authors gath-
ered a suitable MDS signature dataset of size [3 × 1440] using three different UAV types:
hexacopter, helicopter, and quadcopter. The categorization of SUAV targets often involves
employing the nearest neighbor with a three-sample (NN3) classifier. In [185], the authors
investigated the feasibility of using a K-band CW radar to concurrently identify numerous
UAVs. The cadence frequency spectrum, derived from the cadence–velocity diagram (CVD)
after transforming the time–frequency spectrogram, was used as training data for a
K-means classifier. In their lab testing, they collected data for one, two, and all three
UAVs using a helicopter, a hexacopter, and a quadcopter. They found that the average
precision outcomes for the categories of single UAVs, two UAVs, and three UAVs were
96.64%, 90.49%, and 97.8%, respectively.
In order to categorize two UAVs (Inspire 1 and F820), in [155], the authors examined
the pre-trained CNN (GoogLeNet) for UAV detection. The MDS was measured, and its CVD
was ascertained while in the air at two altitudes (50 and 100 meters) over a Ku-band FMCW
radar. The term ’merged Doppler image’ (MDI) refers to the combination of the MDS and
CVD pictures into a single image. Ten thousand images from outdoor measurements
were created and fed into the CNN classifier using fourfold cross-validation. The
findings indicate that 100% accuracy in classifying the UAVs was possible. Remarkably,
trials conducted indoors in an anechoic environment showed worse categorization ability.
The authors in [186] proposed a UAV detection and classification system utilizing
sensor fusion, incorporating optical images, radar range-Doppler maps, and audio spectro-
grams. The fusion features were trained using three pre-trained CNN models: GoogLeNet,
ResNet-101, and DenseNet-201, respectively. During training, the parameters, including the
number of epochs, were set to 40, and the learning rate was set to 0.0001. The classification
F1-scores of the three models were 95.1%, 95.3%, and 95.4%, respectively.
Using mmWave FMCW radar, the authors in [187] described a unique approach to UAV
location and activity classification. The suggested technique used vertically aligned radar
antennae to measure the UAV elevation angle of arrival from the base station. The calculated
elevation angle of arrival and the observed radial range were used to determine the height
of the UAV and its horizontal distance from the ground-based radar station. ML techniques
were applied to classify the UAV behavior based on MDS that was retrieved from outdoor
radar readings. Numerous lightweight classification models were examined to evaluate
efficiency, including logistic regression, SVM, light gradient boosting machine (GBM), and
a proprietary lightweight CNN. The results showed that 93% accuracy was achieved with
Light GBM, SVM, and logistic regression. A 95% accuracy rate in activity categorization
was also possible with the customized lightweight CNN. Pre-trained models (VGG16,
VGG19, ResNet50, ResNet101, and InceptionResNet) and the suggested lightweight CNN’s
efficiency were also contrasted.
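The range/elevation geometry used in [187] to locate the UAV reduces to simple trigonometry: height h = R sin(theta) and horizontal distance d = R cos(theta) for radial range R and elevation angle theta. A short sketch (the numerical example is made up):

```python
import math

def uav_position(radial_range_m: float, elevation_deg: float):
    """Height and ground distance from radar range and elevation angle:
    h = R*sin(theta), d = R*cos(theta)."""
    theta = math.radians(elevation_deg)
    height = radial_range_m * math.sin(theta)
    ground_distance = radial_range_m * math.cos(theta)
    return height, ground_distance

print(uav_position(200.0, 30.0))  # ~(100.0, 173.2) metres
```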
In [188], the authors introduced the inception-residual neural network (IRNN) for
target classification using MDS radar image data. By adjusting the hyperparameters,
the suggested IRNN technique was examined to find a balance between accuracy and
computational overhead. Based on experimental findings using the real Doppler radar
with digital array receiver (RAD-DAR) database, the proposed method can identify UAVs
with up to 99.5% accuracy. Additionally, in [189], the authors proposed employing a CNN
to detect UAVs using data from radar images. The microwave and radar group developed
the real Doppler RAD-DAR radar technology, a range-Doppler system. They built and
evaluated the CNN by adjusting its hyperparameters using the RAD-DAR dataset. The
best trade-off between accuracy and time was achieved when the number of filters was set to 32,
as per the experimental findings. With an accuracy of 97.63%, the network outperformed
similar image classifiers. The research team also conducted an ablation investigation to
examine and confirm the significance of individual neural network components.
The authors addressed the issue of UAV detection using RCS fingerprinting in their
study [190]. They conducted analyses on the RCS of six commercial UAVs in a chamber
with anechoic conditions. The RCS data were gathered for both vertical–vertical and
horizontal–horizontal polarizations at frequencies of 15 GHz and 25 GHz. Fifteen distinct
classification algorithms were employed, falling into three categories: statistical learning
(STL), ML, and DL. These algorithms were trained using the RCS signatures. The analysis
demonstrated that, while the precision of all the techniques for classification was improved
with SNR, the ML algorithm outperformed the STL and DL methods in terms of efficiency.
For instance, using the 15 GHz VV-polarized RCS data from the UAVs, the classification
tree ML model achieved an accuracy of 98.66% at 3 dB SNR. Monte Carlo analysis was
employed, along with boxplots, confusion matrices, and classification plots, to assess
the efficiency of the classification. Overall, the discriminant analysis ML model and the
statistical models proposed by Peter Swerling exhibited superior accuracy compared to the
other algorithms. The study revealed that both the ML and STL algorithms outperformed
the DL methods (such as Squeezenet, GoogLeNet, Nasnet, and Resnet-101) in terms of
classification accuracy. Additionally, an analysis of processing times was conducted for
each program. Despite acceptable classification accuracy, the study found that the STL
algorithms required comparatively longer processing times than the ML and DL techniques.
The investigation also revealed that the classification tree yielded the fastest results, with
an average classification time of approximately 0.46 milliseconds.
A UAV classification technique for polarimetric radar, based on CNN and image
processing techniques, was presented by the authors in [191]. The suggested approach
increases the accuracy of drone categorization when the aspect angle MDS is extremely poor.
They proposed a unique image framework for a three-channel image classification CNN
in order to exploit the obtained polarimetric data. An image processing approach
and framework were presented to secure good classification accuracy while reducing
the quantity of data from four distinct polarizations. The dataset was produced using a
polarimetric Ku-band FMCW radar system for three different types of drones. For quick
assessment, the suggested approach was put to the test and confirmed in an anechoic
chamber setting. GoogLeNet, a well-known CNN structure, was employed to assess the
impact of the suggested radar preprocessing. The outcome showed that, compared to a
single polarized micro-Doppler picture, the suggested strategy raised precision from 89.9%
to 99.8%.
models capable of distinguishing subtle radar signal variations for precise classifica-
tion, potentially leveraging ensemble learning techniques.
• Real-time processing and computational complexity: Processing radar data in real
time while managing computational complexity for timely detection and response.
Optimize machine learning algorithms and hardware configurations for efficient real-
time processing, potentially utilizing parallel computing or hardware acceleration.
• Adverse weather conditions: Performance degradation in adverse weather condi-
tions (e.g., rain or fog) affects radar signal quality and detection accuracy. Develop
adaptive algorithms capable of compensating for weather-induced signal degradation
and maintaining robust detection capabilities.
• Security and interference mitigation: Vulnerability to interference and potential
security threats in radar-based systems. Implement interference mitigation techniques
and security measures (e.g., encryption and authentication) to safeguard radar signals
and system integrity.
safeguarded sky while avoiding collisions with other objects in the neighborhood. Image
depth, together with other scalar parameters such as velocities, distance to the target,
and elevation angle, is considered as RL input. Another RL-based study was conducted in [200]; first, the drone is
detected by the YOLOv2 algorithm, and the drone is then tracked. The RL approach is used
by the follower drone to predict the action of the intruder/target drone by using image
frames. Later, a deep object detector and search area proposal algorithm are used to predict
target drones. Another study in [201] proposed a deep Q-network-based method to counter
drones in 3D space. The authors used EfficientNet-B0, a sub-version of EfficientNet, to detect
drones that can capture small objects. Nine models were proposed for countering drone
objects in 3D space, among which Model-1 and Model-2 were chosen as the best models
based on their training and testing performance. To date, RL has been only partially
explored for drone detection and classification. However, future research can focus
on the classification and detection of drones by properly designing the RL environment.
Figure 7. The overview of the machine learning classification of hybrid sensor detection: (a) fusion of
sensor using compressed multisensor features to input into single detection and classification system;
(b) detection decisions using combined sensor fusion system.
Table 10. Comparison summary of ML-based UAV classification and detection using hybrid sensor data.
Table 11. Datasets information of ML-based UAV classification and detection using hybrid sensor data.
The authors in [69] presented a detection system based on ANNs. This system pro-
cessed image data using a CNN and RF data using a DNN. A single prediction score for
drone presence was produced by concatenating the characteristics of the CNNs and DNNs
and then feeding them into another DNN. The feasibility of a hybrid sensing-based ap-
proach for UAV identification was demonstrated by the numerical results of the proposed
model, which achieved a validation accuracy of 75%.
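The feature-level fusion of [69] can be sketched as follows; all layer sizes and input shapes are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class FusionDetector(nn.Module):
    """Feature-level fusion in the spirit of [69]: image features from a
    CNN and RF features from a DNN are concatenated and scored by a
    second DNN producing a single drone-presence score."""
    def __init__(self, img_feat: int = 128, rf_feat: int = 64):
        super().__init__()
        self.img_net = nn.Sequential(
            nn.Conv2d(3, 16, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, img_feat), nn.ReLU(),
        )
        self.rf_net = nn.Sequential(
            nn.Linear(256, rf_feat), nn.ReLU(),   # assumed RF feature size
        )
        self.head = nn.Sequential(
            nn.Linear(img_feat + rf_feat, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),       # drone-presence score
        )

    def forward(self, image: torch.Tensor, rf: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.img_net(image), self.rf_net(rf)], dim=1)
        return self.head(z)

score = FusionDetector()(torch.randn(2, 3, 64, 64), torch.randn(2, 256))
print(score.shape)   # torch.Size([2, 1])
```

This corresponds to the fusion variant in Figure 7a, where compressed multisensor features feed a single detection and classification system; the alternative in Figure 7b would instead combine per-sensor detection decisions.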
The study [202] thoroughly described the process of developing and implementing
an automated multisensor UAV detection system (MSDDS) that utilizes thermal and
auditory sensors. The authors augmented the standard video and audio sensors with
a thermal infrared camera. They also discussed the constraints and potential of employing
GMM and YOLOv2 ML approaches in developing and implementing the MSDDS method.
Furthermore, the authors assembled a collection of 650 visible and infrared videos featuring
helicopters, airplanes, and UAVs. The visible videos have a resolution of 640 × 512 pixels,
while the infrared videos are scaled to 320 × 256 pixels. The authors focused their analysis
on evaluating the system’s efficiency in terms of F1-score, recall, and accuracy.
The authors of [203] presented a system that continuously monitors a certain region
and produces audio and video feeds. The setup consisted of thirty cameras (visual sensors)
and three microphones (acoustic sensors). Following this, features were extracted from
the audio and video streams and sent to a classifier for detection. For the classification
and training of the datasets, they employed the popular SVM-based ML algorithm. The
efficiency of the visual detection approach was 79.54%, while the audio-assisted method
outperformed it significantly at 95.74%, as indicated by the findings.
A method for detecting tiny UAVs, which utilizes radar and audio sensors, was
presented by the authors in [204]. The system employs a customized radar called the
“Cantenna” to detect moving objects within a specified target region. An acoustic sensor
array is utilized to discern whether the object identified by the radar is a UAV. Furthermore,
the system incorporates a pre-trained DL model consisting of three MLP classifiers that
collectively vote based on auditory data to determine the presence or absence of a UAV.
When the system was evaluated using both field and collected data, it demonstrated
accurate identification of every instance in which a UAV was present, with very few false
positives and no false negatives.
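The voting stage can be sketched with scikit-learn as below, using three MLP classifiers combined by a hard majority vote over acoustic feature vectors; the hidden-layer sizes and the synthetic data are assumptions, and the paper's exact models are not reproduced.

```python
# Majority-voting ensemble of three MLPs over acoustic features (assumed architectures).
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier

X = np.random.rand(200, 40)            # placeholder acoustic feature vectors
y = np.random.randint(0, 2, 200)       # 1 = UAV, 0 = not UAV (synthetic labels)

ensemble = VotingClassifier(
    estimators=[
        ("mlp1", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
        ("mlp2", MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500)),
        ("mlp3", MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)),
    ],
    voting="hard",                      # hard majority vote across the three MLPs
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```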
The authors in [205] introduced a multimodal DL technique for combining and filtering
data from many unimodal UAV detection techniques. To conduct UAV identification
predictions, they used a combined set of data from three modalities. Specifically, an MLP
network was utilized to combine data from thermal imaging, vision, and 2D radar in the
form of range profile matrix data. To enhance the accuracy of deductions by combining
characteristics collected from unimodal modules, they provided a generic fusion NN
architecture. Multimodal features from both positive UAV and negative UAV detections
make up the training set. The system achieved precision, recall, and F1-scores of 99%, 100%,
and 95%, respectively.
The authors in [206] proposed a combined classification structure based on radar and
camera fusion. The camera network extracts the deep and complex characteristics from
the image, while the radar network collects the spatiotemporal data from the radar record.
Synchronized radar and camera data were collected during several field tests at various
times of the year. The field dataset was used to evaluate the performance of the combined
joint classification network, which incorporates camera detection and classification using
YOLOv5, as well as radar classification using a combination of interacting multiple model
(IMM) filters and an RNN. The study's results demonstrated a significant enhancement in
classification accuracy, with birds and UAVs achieving 98% and 94% accuracy, respectively.
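A rough sketch of the radar branch of such a design is shown below: a GRU consumes a spatiotemporal radar track and emits bird-vs-UAV logits. The feature layout and network sizes are assumptions, and the IMM filtering stage of [206] is omitted.

```python
# RNN classifier over radar track sequences (assumed GRU architecture, not that of [206]).
import torch
import torch.nn as nn

class RadarRNN(nn.Module):
    def __init__(self, feat_dim=4, hidden=32, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # bird vs. UAV logits

    def forward(self, tracks):                      # tracks: (B, T, feat_dim)
        _, h = self.gru(tracks)                     # h: (num_layers, B, hidden)
        return self.head(h[-1])

logits = RadarRNN()(torch.randn(8, 20, 4))          # 8 tracks, 20 time steps each
print(logits.shape)                                  # (8, 2)
```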
The authors in [143] introduced a multisensory detection technique for locating and
gathering information on UAVs operating in prohibited areas. This technique employed
a variety of methods, including video processing, IR imaging, radar, light detection and
ranging (LIDAR), audio pattern evaluation, radio signal analysis, and video synthesis. They
proposed a set of low-volume neural networks capable of parallel classification, which they
termed concurrent neural networks. The research focused on the detection and classification
of UAVs using two such networks: a self-organizing map (SOM) for identifying objects in a
video stream and a multilayer perceptron (MLP) network for auditory pattern detection.
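As a rough illustration of the SOM component, the snippet below clusters per-frame feature vectors on a small SOM grid using the third-party MiniSom library; the grid size, feature length, and training budget are arbitrary choices, not those of [143].

```python
# Self-organizing map sketch for clustering video-frame features (assumed configuration).
import numpy as np
from minisom import MiniSom

frames = np.random.rand(500, 64)          # placeholder per-frame feature vectors
som = MiniSom(8, 8, input_len=64, sigma=1.0, learning_rate=0.5)
som.train_random(frames, num_iteration=1000)
bmu = som.winner(frames[0])               # best-matching unit = cluster of this frame
print("frame 0 maps to SOM node", bmu)
```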
2.6.1. Challenges and Solutions of Hybrid Sensor-Based UAV Detection and Classification
Using ML
• Sensor data fusion and integration: Integrating heterogeneous data from various
sensors (e.g., radar, visual, and acoustic) with different characteristics, resolutions, and
modalities. Develop fusion techniques that align and synchronize data from multiple
sensors for holistic UAV detection and classification.
• Data synchronization and alignment: Aligning data streams from diverse sensors in
real time for accurate fusion and analysis. Implement synchronization methods to
align temporal and spatial information from different sensors for cohesive fusion
(a minimal timestamp-alignment sketch follows this list).
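As a minimal illustration of the temporal-alignment step, the sketch below pairs each radar detection with the nearest earlier camera frame using pandas.merge_asof; the column names, timestamps, and 50 ms tolerance are assumptions for illustration.

```python
# Nearest-timestamp alignment of two sensor streams (assumed columns and tolerance).
import pandas as pd

radar = pd.DataFrame({
    "t": pd.to_datetime(["2024-01-01 00:00:00.020", "2024-01-01 00:00:00.140"]),
    "radar_score": [0.91, 0.77],
})
camera = pd.DataFrame({
    "t": pd.to_datetime(["2024-01-01 00:00:00.000", "2024-01-01 00:00:00.100"]),
    "cam_score": [0.85, 0.64],
})

# Both streams must be sorted by time; the tolerance drops pairs too far apart.
fused = pd.merge_asof(radar.sort_values("t"), camera.sort_values("t"),
                      on="t", tolerance=pd.Timedelta("50ms"), direction="backward")
print(fused)   # one row per radar detection with the matched camera score
```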
Table 12. Experimental comparison of different models with the highest accuracy.

| Data Collection Technique | Models and Reference | Accuracy | Dataset | Loss Function | Special Feature |
|---|---|---|---|---|---|
| RF signal | DNN/CNN [42] | 100% | DroneRF Dataset [47] | MSE/cross-entropy | Compressive sensing-based data extraction. |
| RF signal | CNN, LR, KNN [34] | 100% | SDR Dataset [47] | Unspecified | Different deep learning architectures are used for drone detection and identification. |
| RF signal | CNN [47] | 99.7% | DroneRF Dataset [47] | MSE | Bluetooth and Wi-Fi signals are extracted for UAV detection. |
| Visual data | CNN [80] | 98.7% | Aerial Vehicle, Drone vs. Bird Detection, Anti-UAV [90,214,215] | Customized loss function used | Four methods were evaluated to establish a baseline for UAV detection. |
| Visual data | YOLOv5 [88] | 96.7% | Kaggle [93] | Adam | An image processing phase was performed before training. |
| Visual data | CNN, SVM, NN [76] | 93% | Authors' dataset | Not specified | Differentiates between drone and bird. |
| Acoustic data | SVM [124] | 99.9% | Publicly available dataset | Different SVM kernels used | Five acoustic features were considered for classification. |
| Acoustic data | SVM, GNB, KNN, and NN [131] | 99.7% | Authors' dataset | Different parameters used for different models | Feature extraction settings altered to maximize performance. |
| Acoustic data | Lightweight CNN and SVM [113] | 98.78% | Sound effect database | Adam; linear, Gaussian, and cubic kernels used for the SVM | Two different models are combined to increase accuracy. |
| Radar data | CNN (GoogLeNet) [155] | 100% | Authors' dataset | RMSprop | Micro-Doppler signature is used for training data. |
| Radar data | CNN [164] | 99.48% | RAD-DAR database | Adam | Reduces the false alarm rate. |
| Radar data | CNN (GoogLeNet) [165] | 99% | Micro-Doppler spectrogram images | RMSprop | Continuous-wave spectrogram features of different drones obtained with low phase noise are investigated. |
Indeed, highlighting the key areas of development in RF-based UAV detection, visual
data (images/video)-based UAV detection, acoustic/sound-based UAV detection, and
radar-based UAV detection will provide a comprehensive view of advancements in UAV
detection technologies.
One of the most widely used anti-UAV techniques is the RF-based UAV identifica-
tion framework, which utilizes the RF characteristics of UAVs to identify and categorize
them [75]. The aspects of emphasis regarding RF-based UAV detection are as follows:
advancements in RF signal processing techniques for improved detection accuracy; devel-
opment of machine learning algorithms to analyze and classify RF signatures of UAVs;
enhancement of multisensor fusion for combining RF data with other modalities for bet-
ter detection in complex environments; and research on countermeasures for RF-based
detection evasion techniques employed by UAVs.
Computer vision or visual techniques can be employed to identify UAVs without RF
transmission capabilities by utilizing inexpensive camera sensors. These sensors offer the
advantage of providing additional visual data, including the UAV model, dimensions, and
payload, which traditional UAV detection systems cannot deliver. The aspects of emphasis
regarding visual-based UAV detection are as follows: integration of deep learning models
for object detection and recognition in UAV imagery; improvement in real-time processing
capabilities for quick and accurate UAV identification; exploration of computer vision
algorithms to handle various environmental conditions and challenges, such as low light
or adverse weather; and research on the development of robust algorithms to differentiate
between UAVs and other objects in the visual spectrum.
Even in low-visibility conditions, inexpensive acoustic detection systems can categorize
characteristic UAV rotor audio patterns using a variety of auditory sensors or micro-
phones [17]. The aspects of emphasis regarding acoustic-based UAV detection are as
follows: advancements in sensor technologies for capturing and processing acoustic signals
associated with UAVs; integration of machine learning and pattern recognition techniques
to identify unique acoustic signatures of UAVs; research on mitigating challenges such as
background noise and signal interference; and exploration of distributed sensor networks
for triangulation and improved localization of UAVs using acoustic cues.
Radar is a conventional sensor that can reliably identify objects in the sky over ex-
tended distances and is nearly unaffected by adverse weather and light [30,216]. The
aspects of emphasis regarding radar-based UAV detection are as follows: development of
radar systems with enhanced sensitivity and resolution for UAV detection; integration of
machine learning algorithms to analyze radar returns and distinguish UAVs from other
objects; exploration of radar waveform diversity to improve detection performance in
different scenarios; and research on the development of radar-based tracking systems
for continuous monitoring and prediction of UAV movements. By emphasizing these
specific areas within each detection method, the development of UAV detection systems
can be more targeted and effective. This approach ensures a comprehensive and nuanced
understanding of the challenges and opportunities within each domain.
The goal of data fusion, or hybridizing sensory data from numerous sensors, is to
integrate information from various modalities to draw conclusions that would be unattain-
able with just one sensor. Domains such as target surveillance and identification, traffic
management, UAV detection, remote sensing, road barrier detection, air pollution sensing,
complex equipment monitoring, robotics, biometric applications, and smart buildings all
benefit from this technique. Multisensor data fusion enables the identification of trends, the
extraction of insights, and the establishment of correlations between diverse sensor types
thanks to the wealth of information available in the real world. While multisensor fusion is
a viable strategy, designing systems to meet specific use cases requires thorough research
and experimental validation. The main drawbacks of sensor fusion include increased
deployment costs, computational complexity, and system intricacy. Synchronization and
latency issues may arise when integrating multiple sensors for joint detection. The recent
surge in the development of AI and DNNs has garnered significant attention for their
ability to represent multimodal data and address the challenges posed by hybrid sensor
detection scenarios [217]. Beyond the UAV detection technologies discussed above, techniques
based on spectral [218] and multispectral remote sensing imagery [219] represent another
research avenue for precise UAV classification and detection. In the context of spectral
imagery, advanced spectral–spatial feature extraction methods could be explored to enhance
the discriminative power, and hence the accuracy, of detection models.
Researchers, developers, and practitioners seeking to keep abreast of the most recent
progress in UAV development and research trends may benefit greatly from consulting this
review article. By offering a thorough analysis of DL- and ML-based UAV detection and
classification technologies, this work contributes valuable insights for future research and
development in this dynamic field and adds significantly to the scientific literature.
Author Contributions: Conceptualization, M.H.R. and M.A.S.S.; Methodology, M.H.R.; Software,
M.H.R., M.A.S.S. and M.A.A.; Validation, M.H.R. and M.A.S.S.; Formal analysis, M.H.R.,
M.A.S.S., M.A.A. and R.T.; Investigation, M.H.R., M.A.S.S., M.A.A., R.T. and J.-I.B.; Resources, H.-K.S.;
Data curation, M.H.R., M.A.S.S. and J.-I.B.; Writing—original draft preparation, M.H.R.; Writing—
review and editing, M.H.R., M.A.S.S., M.A.A., R.T., J.-I.B. and H.-K.S.; Visualization, M.H.R. and
M.A.S.S.; Supervision, H.-K.S.; Project administration, H.-K.S.; Funding acquisition, H.-K.S. All
authors have read and agreed to the published version of the manuscript.
Funding: This work was supported by the Institute of Information & Communications Technology
Planning & Evaluation (IITP) under the metaverse support program to nurture the best talents
(IITP-2023-RS-2023-00254529) grant funded by the Korea government (MSIT) and in part by the Basic
Science Research Program through the National Research Foundation of Korea (NRF) funded by
the Ministry of Education (2020R1A6A1A03038540) and in part by the MSIT, Korea under the ITRC
support program (IITP-2023-2021-0-01816) supervised by the IITP.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Wilson, R.L. Ethical issues with use of drone aircraft. In Proceedings of the 2014 IEEE International Symposium on Ethics in
Science, Technology and Engineering, Chicago, IL, USA, 23 May 2014; pp. 1–4.
2. Coveney, S.; Roberts, K. Lightweight UAV digital elevation models and orthoimagery for environmental applications: Data
accuracy evaluation and potential for river flood risk modelling. Int. J. Remote Sens. 2017, 38, 3159–3180. [CrossRef]
3. Alsalam, B.H.Y.; Morton, K.; Campbell, D.; Gonzalez, F. Autonomous UAV with vision based on-board decision making for
remote sensing and precision agriculture. In Proceedings of the 2017 IEEE Aerospace Conference, Big Sky, MT, USA, 4–11 March
2017; pp. 1–12.
4. Amazon. Amazon Prime Air Drone Delivery Fleet Gets FAA Approval. Available online: https://www.cnbc.com/2020/08/31/amazon-prime-now-drone-delivery-fleet-gets-faa-approval.html (accessed on 18 October 2023).
5. Reedha, R.; Dericquebourg, E.; Canals, R.; Hafiane, A. Transformer neural network for weed and crop classification of high
resolution UAV images. Remote Sens. 2022, 14, 592. [CrossRef]
6. Bisio, I.; Garibotto, C.; Haleem, H.; Lavagetto, F.; Sciarrone, A. On the localization of wireless targets: A drone surveillance
perspective. IEEE Netw. 2021, 35, 249–255. [CrossRef]
7. Civilian. Civilian Drone Crashes into ARMY Helicopter. Available online: https://nypost.com/2017/09/22/army-helicopter-
hit-by-drone/ (accessed on 18 October 2023).
8. Medaiyese, O.O.; Ezuma, M.; Lauf, A.P.; Guvenc, I. Wavelet transform analytics for RF-based UAV detection and identification
system using machine learning. Pervasive Mob. Comput. 2022, 82, 101569. [CrossRef]
9. Birch, G.C.; Griffin, J.C.; Erdman, M.K. Uas Detection Classification and Neutralization: Market Survey 2015; Technical Report, Sandia
National Lab. (SNL-NM): Albuquerque, NM, USA, 2015. [CrossRef]
10. Zhao, Z.Q.; Zheng, P.; Xu, S.t.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019,
30, 3212–3232. [CrossRef] [PubMed]
11. Khrissi, L.; El Akkad, N.; Satori, H.; Satori, K. Clustering method and sine cosine algorithm for image segmentation. Evol. Intell.
2022, 15, 669–682. [CrossRef]
12. Khrissi, L.; El Akkad, N.; Satori, H.; Satori, K. An efficient image clustering technique based on fuzzy c-means and cuckoo search
algorithm. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 423–432. [CrossRef]
13. Ali, S.N.; Shuvo, S.B.; Al-Manzo, M.I.S.; Hasan, A.; Hasan, T. An end-to-end deep learning framework for real-time denoising of
heart sounds for cardiac disease detection in unseen noise. IEEE Access 2023, 11, 87887–87901. [CrossRef]
14. Li, X.Q.; Song, L.K.; Bai, G.C. Deep learning regression-based stratified probabilistic combined cycle fatigue damage evaluation
for turbine bladed disks. Int. J. Fatigue 2022, 159, 106812. [CrossRef]
15. McCoy, J.; Rawal, A.; Rawat, D.B.; Sadler, B.M. Ensemble Deep Learning for Sustainable Multimodal UAV Classification. IEEE
Trans. Intell. Transp. Syst. 2023, 24, 15425–15434. [CrossRef]
16. Casabianca, P.; Zhang, Y. Acoustic-based UAV detection using late fusion of deep neural networks. Drones 2021, 5, 54. [CrossRef]
17. Taha, B.; Shoufan, A. Machine learning-based drone detection and classification: State-of-the-art in research. IEEE Access 2019,
7, 138669–138682. [CrossRef]
18. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep learning techniques to classify agricultural crops through UAV
imagery: A review. Neural Comput. Appl. 2022, 34, 9511–9536. [CrossRef] [PubMed]
19. Shahi, T.B.; Xu, C.Y.; Neupane, A.; Guo, W. Machine learning methods for precision agriculture with UAV imagery: A review.
Electron. Res. Arch. 2022, 30, 4277–4317. [CrossRef]
20. Srivastava, S.; Narayan, S.; Mittal, S. A survey of deep learning techniques for vehicle detection from UAV images. J. Syst. Archit.
2021, 117, 102152. [CrossRef]
21. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Vehicle detection from UAV imagery with deep learning: A review.
IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6047–6067. [CrossRef]
22. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. A survey on deep learning-based identification of plant and crop
diseases from UAV-based aerial images. Clust. Comput. 2023, 26, 1297–1317. [CrossRef]
23. Diez, Y.; Kentsch, S.; Fukuda, M.; Caceres, M.L.L.; Moritake, K.; Cabezas, M. Deep learning in forestry using uav-acquired rgb
data: A practical review. Remote Sens. 2021, 13, 2837. [CrossRef]
24. Shahi, T.B.; Xu, C.Y.; Neupane, A.; Guo, W. Recent Advances in Crop Disease Detection Using UAV and Deep Learning
Techniques. Remote Sens. 2023, 15, 2450. [CrossRef]
25. Bithas, P.S.; Michailidis, E.T.; Nomikos, N.; Vouyioukas, D.; Kanatas, A.G. A survey on machine-learning techniques for
UAV-based communications. Sensors 2019, 19, 5170. [CrossRef] [PubMed]
26. Yan, X.; Fu, T.; Lin, H.; Xuan, F.; Huang, Y.; Cao, Y.; Hu, H.; Liu, P. UAV Detection and Tracking in Urban Environments Using
Passive Sensors: A Survey. Appl. Sci. 2023, 13, 11320. [CrossRef]
27. Geng, Z.; Yan, H.; Zhang, J.; Zhu, D. Deep-learning for radar: A survey. IEEE Access 2021, 9, 141800–141818. [CrossRef]
28. Coluccia, A.; Parisi, G.; Fascista, A. Detection and classification of multirotor drones in radar sensor networks: A review. Sensors
2020, 20, 4172. [CrossRef]
29. Khan, M.A.; Menouar, H.; Eldeeb, A.; Abu-Dayya, A.; Salim, F.D. On the detection of unauthorized drones—Techniques and
future perspectives: A review. IEEE Sens. J. 2022, 22, 11439–11455. [CrossRef]
30. Samaras, S.; Diamantidou, E.; Ataloglou, D.; Sakellariou, N.; Vafeiadis, A.; Magoulianitis, V.; Lalas, A.; Dimou, A.; Zarpalas, D.;
Votis, K.; et al. Deep learning on multi sensor data for counter UAV applications—A systematic review. Sensors 2019, 19, 4837.
[CrossRef]
31. Park, S.; Kim, H.T.; Lee, S.; Joo, H.; Kim, H. Survey on anti-drone systems: Components, designs, and challenges. IEEE Access
2021, 9, 42635–42659. [CrossRef]
32. Zhang, H.; Li, T.; Li, Y.; Li, J.; Dobre, O.A.; Wen, Z. RF-based drone classification under complex electromagnetic environments
using deep learning. IEEE Sens. J. 2023, 23, 6099–6108. [CrossRef]
33. Medaiyese, O.O.; Syed, A.; Lauf, A.P. Machine learning framework for RF-based drone detection and identification system.
In Proceedings of the 2021 2nd International Conference On Smart Cities, Automation & Intelligent Computing Systems
(ICON-SONICS), Tangerang, Indonesia, 12–13 October 2021; pp. 58–64.
34. Swinney, C.J.; Woods, J.C. RF detection and classification of unmanned aerial vehicles in environments with wireless interference.
In Proceedings of the 2021 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 15–18 June 2021;
pp. 1494–1498.
35. Akter, R.; Doan, V.S.; Tunze, G.B.; Lee, J.M.; Kim, D.S. RF-based UAV surveillance system: A sequential convolution neural
networks approach. In Proceedings of the 2020 International Conference on Information and Communication Technology
Convergence (ICTC), Jeju, Republic of Korea, 21–23 October 2020; pp. 555–558.
36. Amorim, R.; Wigard, J.; Nguyen, H.; Kovacs, I.Z.; Mogensen, P. Machine-learning identification of airborne UAV-UEs based on
LTE radio measurements. In Proceedings of the 2017 IEEE Globecom Workshops (GC Wkshps), Singapore, 4–8 December 2017;
pp. 1–6.
37. Zhang, H.; Cao, C.; Xu, L.; Gulliver, T.A. A UAV detection algorithm based on an artificial neural network. IEEE Access 2018,
6, 24720–24728. [CrossRef]
38. Ezuma, M.; Erden, F.; Anjinappa, C.K.; Ozdemir, O.; Guvenc, I. Micro-UAV detection and classification from RF fingerprints
using machine learning techniques. In Proceedings of the 2019 IEEE Aerospace Conference, Big Sky, MT, USA, 2–9 March 2019;
pp. 1–13.
39. Shorten, D.; Williamson, A.; Srivastava, S.; Murray, J.C. Localisation of Drone Controllers from RF Signals Using a Deep Learning
Approach. In Proceedings of the International Conference on Pattern Recognition and Artificial Intelligence, Union, NJ, USA,
15–17 August 2018; pp. 89–97.
40. Al-Sa’d, M.F.; Al-Ali, A.; Mohamed, A.; Khattab, T.; Erbad, A. RF-based drone detection and identification using deep learning
approaches: An initiative towards a large open source drone database. Future Gener. Comput. Syst. 2019, 100, 86–97. [CrossRef]
41. Alam, S.S.; Chakma, A.; Rahman, M.H.; Bin Mofidul, R.; Alam, M.M.; Utama, I.B.K.Y.; Jang, Y.M. RF-Enabled Deep-Learning-
Assisted Drone Detection and Identification: An End-to-End Approach. Sensors 2023, 23, 4202. [CrossRef]
42. Mo, Y.; Huang, J.; Qian, G. Deep learning approach to UAV detection and classification by using compressively sensed RF signal.
Sensors 2022, 22, 3072. [CrossRef]
43. Ashush, N.; Greenberg, S.; Manor, E.; Ben-Shimol, Y. Unsupervised Drones Swarm Characterization Using RF Signals Analysis
and Machine Learning Methods. Sensors 2023, 23, 1589. [CrossRef]
44. Basak, S.; Rajendran, S.; Pollin, S.; Scheers, B. Combined RF-based drone detection and classification. IEEE Trans. Cogn. Commun.
Netw. 2021, 8, 111–120. [CrossRef]
45. Basak, S.; Rajendran, S.; Pollin, S.; Scheers, B. Drone classification from RF fingerprints using deep residual nets. In Proceedings
of the 2021 International Conference on COMmunication Systems & NETworkS (COMSNETS), Bangalore, India, 5–9 January
2021; pp. 548–555. [CrossRef]
46. Nemer, I.; Sheltami, T.; Ahmad, I.; Yasar, A.U.H.; Abdeen, M.A. RF-based UAV detection and identification using hierarchical
learning approach. Sensors 2021, 21, 1947. [CrossRef]
47. Al-Emadi, S.; Al-Senaid, F. Drone detection approach based on radio-frequency using convolutional neural network. In
Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5
February 2020; pp. 29–34.
48. Ezuma, M.; Erden, F.; Anjinappa, C.K.; Ozdemir, O.; Guvenc, I. Detection and classification of UAVs using RF fingerprints in the
presence of Wi-Fi and Bluetooth interference. IEEE Open J. Commun. Soc. 2019, 1, 60–76. [CrossRef]
49. Basan, E.S.; Tregubenko, M.D.; Mudruk, N.N.; Abramov, E.S. Analysis of artificial intelligence methods for detecting drones
based on radio frequency activity. In Proceedings of the 2021 XV International Scientific-Technical Conference on Actual Problems
Of Electronic Instrument Engineering (APEIE), Novosibirsk, Russia, 19–21 November 2021; pp. 238–242.
50. Zhao, X.; Wang, L.; Wang, Q.; Wang, J. A hierarchical framework for drone identification based on radio frequency machine
learning. In Proceedings of the 2022 IEEE International Conference on Communications Workshops (ICC Workshops), Seoul,
Republic of Korea, 16–20 May 2022; pp. 391–396.
51. Sohal, R.S.; Grewal, V.; Singh, K.; Kaur, J. Deep learning approach for investigation of temporal radio frequency signatures of
drones. Int. J. Commun. Syst. 2023, 36, e5377. [CrossRef]
52. Liu, H. Unmanned Aerial Vehicle Detection and Identification Using Deep Learning. In Proceedings of the 2021 International
Wireless Communications and Mobile Computing (IWCMC), Harbin City, China, 28 June–2 July 2021; pp. 514–518.
53. Mandal, S.; Satija, U. Time–Frequency Multiscale Convolutional Neural Network for RF-Based Drone Detection and Identification.
IEEE Sens. Lett. 2023, 7, 1–4. [CrossRef]
54. Lofù, D.; Gennaro, P.D.; Tedeschi, P.; Noia, T.D.; Sciascio, E.D. URANUS: Radio Frequency Tracking, Classification and
Identification of Unmanned Aircraft Vehicles. IEEE Open J. Veh. Technol. 2023, 4, 921–935. [CrossRef]
55. Inani, K.N.; Sangwan, K.; Dhiraj. Machine Learning based framework for Drone Detection and Identification using RF signals. In
Proceedings of the 2023 4th International Conference on Innovative Trends in Information Technology (ICITIIT), Kottayam, India,
11–12 February 2023; pp. 1–8.
56. Basak, S.; Rajendran, S.; Pollin, S.; Scheers, B. Autoencoder based framework for drone RF signal classification and novelty
detection. In Proceedings of the 2023 25th International Conference on Advanced Communication Technology (ICACT),
Pyeongchang, Republic of Korea, 19–22 February 2023; pp. 218–225.
57. Allahham, M.S.; Al-Sa’d, M.F.; Al-Ali, A.; Mohamed, A.; Khattab, T.; Erbad, A. DroneRF dataset: A dataset of drones for RF-based
detection, classification and identification. Data Brief 2019, 26, 104313. [CrossRef] [PubMed]
58. Ezuma, M.; Erden, F.; Anjinappa, C.K.; Ozdemir, O.; Guvenc, I. Drone Remote Controller RF Signal Dataset. IEEE DataPort 2020.
[CrossRef]
59. Instruments, N. USRP Software Defined Radio Reconfigurable Device. Available online: https://www.ni.com/ko-kr/shop/
model/usrp-2943.html (accessed on 24 December 2023).
60. Medaiyese, O.; Ezuma, M.; Lauf, A.; Adeniran, A. Cardinal RF (CardRF): An Outdoor UAV/UAS/Drone RF Signals with
Bluetooth and WiFi Signals Dataset. IEEE DataPort 2022. [CrossRef]
61. Uzundurukan, E.; Dalveren, Y.; Kara, A. A database for the radio frequency fingerprinting of Bluetooth devices. Data 2020, 5, 55.
[CrossRef]
62. Soltani, N.; Reus-Muns, G.; Salehi, B.; Dy, J.; Ioannidis, S.; Chowdhury, K. RF fingerprinting unmanned aerial vehicles with
non-standard transmitter waveforms. IEEE Trans. Veh. Technol. 2020, 69, 15518–15531. [CrossRef]
63. Sazdić-Jotić, B.; Pokrajac, I.; Bajčetić, J.; Bondžulić, B.; Obradović, D. Single and multiple drones detection and identification
using RF based deep learning algorithm. Expert Syst. Appl. 2022, 187, 115928. [CrossRef]
64. Huynh-The, T.; Pham, Q.V.; Nguyen, T.V.; Da Costa, D.B.; Kim, D.S. RF-UAVNet: High-performance convolutional network for
RF-based drone surveillance systems. IEEE Access 2022, 10, 49696–49707. [CrossRef]
65. Medaiyese, O.O.; Ezuma, M.; Lauf, A.P.; Adeniran, A.A. Hierarchical learning framework for UAV detection and identification.
IEEE J. Radio Freq. Identif. 2022, 6, 176–188. [CrossRef]
66. Shi, Z.; Huang, M.; Zhao, C.; Huang, L.; Du, X.; Zhao, Y. Detection of LSSUAV using hash fingerprint based SVDD. In Proceedings
of the 2017 IEEE International Conference on Communications (ICC), Paris, France, 21–25 May 2017; pp. 1–5.
67. Zhao, C.; Chen, C.; Cai, Z.; Shi, M.; Du, X.; Guizani, M. Classification of small UAVs based on auxiliary classifier Wasserstein
GANs. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates,
9–13 December 2018; pp. 206–212.
68. Liang, Y.; Zhao, M.; Liu, X.; Jiang, J.; Lu, G.; Jia, T. Image Splicing Compression Algorithm Based on the Extended Kalman Filter
for Unmanned Aerial Vehicles Communication. Drones 2023, 7, 488. [CrossRef]
69. Aledhari, M.; Razzak, R.; Parizi, R.M.; Srivastava, G. Sensor fusion for drone detection. In Proceedings of the 2021 IEEE 93rd
Vehicular Technology Conference (VTC2021-Spring), Helsinki, Finland, 25–28 April 2021; pp. 1–7.
70. Gumaei, A.; Al-Rakhami, M.; Hassan, M.M.; Pace, P.; Alai, G.; Lin, K.; Fortino, G. Deep Learning and Blockchain with Edge
Computing for 5G-Enabled Drone Identification and Flight Mode Detection. IEEE Netw. 2021, 35, 94–100. [CrossRef]
71. Hickling, T.; Aouf, N.; Spencer, P. Robust Adversarial Attacks Detection Based on Explainable Deep Reinforcement Learning for
UAV Guidance and Planning. IEEE Trans. Intell. Veh. 2023, 8, 4381–4394. [CrossRef]
72. Rana, B.; Singh, Y. Internet of things and UAV: An interoperability perspective. In Unmanned Aerial Vehicles for Internet of Things
(IoT) Concepts, Techniques, and Applications; Wiley: Hoboken, NJ, USA, 2021; pp. 105–127.
73. Seidaliyeva, U.; Alduraibi, M.; Ilipbayeva, L.; Almagambetov, A. Detection of Loaded and Unloaded UAV Using Deep Neural
Network. In Proceedings of the 2020 Fourth IEEE International Conference on Robotic Computing (IRC), Taichung, Taiwan, 9–11
November 2020; pp. 490–494.
74. Liu, H.; Fan, K.; Ouyang, Q.; Li, N. Real-time small drones detection based on pruned yolov4. Sensors 2021, 21, 3374. [CrossRef]
75. Unlu, E.; Zenou, E.; Riviere, N.; Dupouy, P.E. Deep learning-based strategies for the detection and tracking of drones using
several cameras. IPSJ Trans. Comput. Vis. Appl. 2019, 11, 1–13. [CrossRef]
76. Mahdavi, F.; Rajabi, R. Drone Detection Using Convolutional Neural Networks. In Proceedings of the 2020 6th Iranian Conference
on Signal Processing and Intelligent Systems (ICSPIS), Mashhad, Iran, 23–24 December 2020; pp. 1–5.
77. Behera, D.K.; Raj, A.B. Drone Detection and Classification Using Deep Learning. In Proceedings of the 2020 4th International
Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 13–15 May 2020; pp. 1012–1016.
78. Shi, Q.; Li, J. Objects Detection of UAV for Anti-UAV Based on YOLOv4. In Proceedings of the 2020 IEEE 2nd International
Conference on Civil Aviation Safety and Information Technology ICCASIT, Weihai, China, 14–16 October 2020; pp. 1048–1052.
[CrossRef]
79. Wei Xun, D.T.; Lim, Y.L.; Srigrarom, S. Drone detection using YOLOv3 with transfer learning on NVIDIA Jetson TX2. In
Proceedings of the 2021 Second International Symposium on Instrumentation, Control, Artificial Intelligence, and Robotics
(ICA-SYMP), Bangkok, Thailand, 20–22 January 2021; pp. 1–6. [CrossRef]
80. Isaac-Medina, B.K.; Poyser, M.; Organisciak, D.; Willcocks, C.G.; Breckon, T.P.; Shum, H.P. Unmanned aerial vehicle visual
detection and tracking using deep neural networks: A performance benchmark. In Proceedings of the IEEE/CVF International
Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 1223–1232.
81. Singha, S.; Aydin, B. Automated drone detection using YOLOv4. Drones 2021, 5, 95. [CrossRef]
82. Aker, C.; Kalkan, S. Using deep networks for drone detection. In Proceedings of the 2017 14th IEEE International Conference on
Advanced Video and Signal Based Surveillance (AVSS), Lecce, Italy, 29 August–1 September 2017; pp. 1–6.
83. Saqib, M.; Khan, S.D.; Sharma, N.; Blumenstein, M. A Study on Detecting Drones Using Deep Convolutional Neural Networks.
In Proceedings of the 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Lecce,
Italy, 29 August–1 September 2017; pp. 1–5.
84. Wu, M.; Xie, W.; Shi, X.; Shao, P.; Shi, Z. Real-time drone detection using deep learning approach. In Proceedings of the Machine
Learning and Intelligent Communications: Third International Conference, MLICOM 2018, Hangzhou, China, 6–8 July 2018;
Proceedings 3; Springer: Berlin/Heidelberg, Germany, 2018; pp. 22–32.
85. Chen, Y.; Aggarwal, P.; Choi, J.; Kuo, C.C.J. A deep learning approach to drone monitoring. In Proceedings of the 2017 Asia-Pacific
Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Malaysia, 12–15
December 2017; pp. 686–691.
86. Zheng, Y.; Chen, Z.; Lv, D.; Li, Z.; Lan, Z.; Zhao, S. Air-to-air visual detection of micro-UAVs: An experimental evaluation of deep
learning. IEEE Robot. Autom. Lett. 2021, 6, 1020–1027. [CrossRef]
87. Samadzadegan, F.; Dadrass Javan, F.; Ashtari Mahini, F.; Gholamshahi, M. Detection and recognition of drones based on a deep
convolutional neural network using visible imagery. Aerospace 2022, 9, 31. [CrossRef]
88. Dewangan, V.; Saxena, A.; Thakur, R.; Tripathi, S. Application of Image Processing Techniques for UAV Detection Using Deep
Learning and Distance-Wise Analysis. Drones 2023, 7, 174. [CrossRef]
89. Phung, K.P.; Lu, T.H.; Nguyen, T.T.; Le, N.L.; Nguyen, H.H.; Hoang, V.P. Multi-model deep learning drone detection and
tracking in complex background conditions. In Proceedings of the 2021 International Conference on Advanced Technologies for
Communications (ATC), Ho Chi Minh City, Vietnam, 14–16 October 2021; pp. 189–194.
90. Coluccia, A.; Fascista, A.; Schumann, A.; Sommer, L.; Dimou, A.; Zarpalas, D.; Méndez, M.; De la Iglesia, D.; González, I.;
Mercier, J.P.; et al. Drone vs. bird detection: Deep learning algorithms and results from a grand challenge. Sensors 2021, 21, 2824.
[CrossRef] [PubMed]
91. Hamadi, R.; Ghazzai, H.; Massoud, Y. Image-based Automated Framework for Detecting and Classifying Unmanned Aerial
Vehicles. In Proceedings of the 2023 IEEE International Conference on Smart Mobility (SM), Thuwal, Saudi Arabia, 19–21 March
2023; pp. 149–153.
92. Kostadinovshalon. UAV Detection Tracking Benchmark. Available online: https://github.com/KostadinovShalon/
UAVDetectionTrackingBenchmark (accessed on 24 December 2023).
93. Kaggle. Kaggle Datasets. Available online: https://www.kaggle.com/datasets (accessed on 24 December 2023).
94. USC. Usc Drone Dataset. Available online: https://github.com/chelicynly/A-Deep-Learning-Approach-to-Drone-Monitoring
(accessed on 24 December 2023).
95. Det-Fly. Det-Fly Dataset. Available online: https://github.com/Jake-WU/Det-Fly (accessed on 24 December 2023).
96. Drone. Drone Dataset. Available online: https://www.kaggle.com/datasets/dasmehdixtr/drone-dataset-uav (accessed on 24
December 2023).
97. Li, J.; Ye, D.H.; Chung, T.; Kolsch, M.; Wachs, J.; Bouman, C. Multi-target detection and tracking from a single camera in
Unmanned Aerial Vehicles (UAVs). In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 4992–4997.
98. Dwibedi, D.; Misra, I.; Hebert, M. Cut, paste and learn: Surprisingly easy synthesis for instance detection. In Proceedings of the
IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1301–1310.
99. Ashraf, M.W.; Sultani, W.; Shah, M. Dogfight: Detecting drones from drones videos. In Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition, Virtual Conference, 19–25 June 2021; pp. 7067–7076.
100. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
101. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp.
580–587.
102. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans.
Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [CrossRef] [PubMed]
103. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural
Inf. Process. Syst. 2015, 28, 91–99. [CrossRef] [PubMed]
104. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 11–12 June 2015; pp. 3431–3440.
105. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788.
106. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of
the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings,
Part I 14; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37.
107. Rizzoli, G.; Barbato, F.; Caligiuri, M.; Zanuttigh, P. SynDrone-Multi-Modal UAV Dataset for Urban Scenarios. In Proceedings of
the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 2210–2220.
108. de Curtò, J.; de Zarzà, I.; Calafate, C.T. Semantic scene understanding with large language models on unmanned aerial vehicles.
Drones 2023, 7, 114. [CrossRef]
109. Dilshad, N.; Ullah, A.; Kim, J.; Seo, J. LocateUAV: Unmanned aerial vehicle location estimation via contextual analysis in an IoT
environment. IEEE Internet Things J. 2022, 10, 4021–4033. [CrossRef]
110. Ahmad, K.; Maabreh, M.; Ghaly, M.; Khan, K.; Qadir, J.; Al-Fuqaha, A. Developing future human-centered smart cities: Critical
analysis of smart city security, Data management, and Ethical challenges. Comput. Sci. Rev. 2022, 43, 100452. [CrossRef]
111. Iqbal, D.; Buhnova, B. Model-based approach for building trust in autonomous drones through digital twins. In Proceedings of
the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022; pp.
656–662.
112. Dong, Q.; Liu, Y.; Liu, X. Drone sound detection system based on feature result-level fusion using deep learning. Multimed. Tools
Appl. 2023, 82, 149–171. [CrossRef]
113. Aydın, İ.; Kızılay, E. Development of a new Light-Weight Convolutional Neural Network for acoustic-based amateur drone
detection. Appl. Acoust. 2022, 193, 108773. [CrossRef]
114. Jeon, S.; Shin, J.W.; Lee, Y.J.; Kim, W.H.; Kwon, Y.; Yang, H.Y. Empirical study of drone sound detection in real-life environment
with deep neural networks. In Proceedings of the 2017 25th European Signal Processing Conference (EUSIPCO), Kos, Greece, 28
August–2 September 2017; pp. 1858–1862.
115. Bernardini, A.; Mangiatordi, F.; Pallotti, E.; Capodiferro, L. Drone detection by acoustic signature identification. Electron. Imaging
2017, 2017, 60–64. [CrossRef]
116. He, Y.; Ahmad, I.; Shi, L.; Chang, K. SVM-based drone sound recognition using the combination of HLA and WPT techniques in
practical noisy environment. KSII Trans. Internet Inf. Syst. (TIIS) 2019, 13, 5078–5094.
117. Yang, B.; Matson, E.T.; Smith, A.H.; Dietz, J.E.; Gallagher, J.C. UAV detection system with multiple acoustic nodes using machine
learning models. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy,
25–27 February 2019; pp. 493–498.
118. Kim, J.; Park, C.; Ahn, J.; Ko, Y.; Park, J.; Gallagher, J.C. Real-time UAV sound detection and analysis system. In Proceedings of
the 2017 IEEE Sensors Applications Symposium (SAS), Glassboro, NJ, USA, 13–15 March 2017; pp. 1–5.
119. Al-Emadi, S.; Al-Ali, A.; Mohammad, A.; Al-Ali, A. Audio based drone detection and identification using deep learning. In
Proceedings of the 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC), Tangier,
Morocco, 24–28 June 2019; pp. 459–464.
120. Uddin, Z.; Altaf, M.; Bilal, M.; Nkenyereye, L.; Bashir, A.K. Amateur Drones Detection: A machine learning approach utilizing
the acoustic signals in the presence of strong interference. Comput. Commun. 2020, 154, 236–245. [CrossRef]
121. Anwar, M.Z.; Kaleem, Z.; Jamalipour, A. Machine learning inspired sound-based amateur drone detection for public safety
applications. IEEE Trans. Veh. Technol. 2019, 68, 2526–2534. [CrossRef]
122. Seo, Y.; Jang, B.; Im, S. Drone detection using convolutional neural networks with acoustic STFT features. In Proceedings of the
2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Auckland, New Zealand,
27–30 November 2018; pp. 1–6.
123. Al-Emadi, S.; Al-Ali, A.; Al-Ali, A. Audio-based drone detection and identification using deep learning techniques with dataset
enhancement through generative adversarial networks. Sensors 2021, 21, 4953. [CrossRef]
124. Salman, S.; Mir, J.; Farooq, M.T.; Malik, A.N.; Haleemdeen, R. Machine learning inspired efficient audio drone detection using
acoustic features. In Proceedings of the 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST),
Islamabad, Pakistan, 12–16 January 2021; pp. 335–339.
125. Baron, V.; Bouley, S.; Muschinowski, M.; Mars, J.; Nicolas, B. Drone localization and identification using an acoustic array and
supervised learning. In Proceedings of the Artificial Intelligence and Machine Learning in Defense Applications, Strasbourg,
France, 9–12 September 2019; Volume 11169, pp. 129–137.
126. Ohlenbusch, M.; Ahrens, A.; Rollwage, C.; Bitzer, J. Robust drone detection for acoustic monitoring applications. In Proceedings
of the 2020 28th European Signal Processing Conference (EUSIPCO), Amsterdam, The Netherlands, 18–21 January 2021; pp. 6–10.
127. Dumitrescu, C.; Minea, M.; Costea, I.M.; Cosmin Chiva, I.; Semenescu, A. Development of an acoustic system for UAV detection.
Sensors 2020, 20, 4870. [CrossRef]
128. Ahmed, C.A.; Batool, F.; Haider, W.; Asad, M.; Hamdani, S.H.R. Acoustic Based Drone Detection via Machine Learning. In
Proceedings of the 2022 International Conference on IT and Industrial Technologies (ICIT), Chiniot, Pakistan, 3–4 October 2022;
pp. 1–6.
129. Alaparthy, V.; Mandal, S.; Cummings, M. A comparison of machine learning and human performance in the real-time acoustic
detection of drones. In Proceedings of the 2021 IEEE Aerospace Conference, Big Sky, MT, USA, 6–13 March 2021.
130. Mandal, S.; Chen, L.; Alaparthy, V.; Cummings, M.L. Acoustic detection of drones through real-time audio attribute prediction.
In Proceedings of the AIAA Scitech 2020 Forum, Orlando, FL, USA, 6–10 January 2020; p. 0491.
131. Wang, Y.; Fagian, F.E.; Ho, K.E.; Matson, E.T. A feature engineering focused system for acoustic uav detection. In Proceedings of
the 2021 Fifth IEEE International Conference on Robotic Computing (IRC), Taichung, Taiwan, 15–17 November 2021; pp. 125–130.
132. AN, W.; Jha, A.; Kumar, A.; Cenkeramaddi, L.R. Estimation of number of unmanned aerial vehicles in a scene utilizing acoustic
signatures and machine learning. J. Acoust. Soc. Am. 2023, 154, 533–546.
133. Tejera-Berengue, D.; Zhu-Zhou, F.; Utrilla-Manso, M.; Gil-Pita, R.; Rosa-Zurera, M. Acoustic-Based Detection of UAVs Using
Machine Learning: Analysis of Distance and Environmental Effects. In Proceedings of the 2023 IEEE Sensors Applications
Symposium (SAS), Ottawa, ON, Canada, 18–20 July 2023; pp. 1–6.
134. Anidjar, O.H.; Barak, A.; Ben-Moshe, B.; Hagai, E.; Tuvyahu, S. A Stethoscope for Drones: Transformers-Based Methods for UAVs
Acoustic Anomaly Detection. IEEE Access 2023, 11, 33336–33353. [CrossRef]
135. Al-Emadi, S. DroneAudioDataset. Available online: https://github.com/saraalemadi/DroneAudioDataset (accessed on 24
December 2023).
136. Piczak, K.J. ESC: Dataset for environmental sound classification. In Proceedings of the 23rd ACM international conference on
Multimedia, New York, NY, USA, 26–30 October 2015; pp. 1015–1018.
137. Warden, P. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv 2018, arXiv:1804.03209.
164. Roldan, I.; del Blanco, C.R.; Duque de Quevedo, Á.; Ibañez Urzaiz, F.; Gismero Menoyo, J.; Asensio López, A.; Berjón, D.;
Jaureguizar, F.; García, N. DopplerNet: A convolutional neural network for recognising targets in real scenarios using a persistent
range–Doppler radar. IET Radar Sonar Navig. 2020, 14, 593–600. [CrossRef]
165. Rahman, S.; Robertson, D.A. Multiple drone classification using millimeter-wave CW radar micro-Doppler data. In Proceedings
of Radar Sensor Technology XXIV, SPIE, Anaheim, CA, USA, 27 April–9 May 2020; Volume 11408, pp. 50–57.
166. Haifawi, H.; Fioranelli, F.; Yarovoy, A.; van der Meer, R. Drone Detection & Classification with Surveillance ‘Radar On-The-Move’ and YOLO. In Proceedings of the 2023 IEEE Radar Conference (RadarConf23), San Antonio, TX, USA, 1–5 May 2023; pp.
1–6.
167. Dale, H.; Baker, C.; Antoniou, M.; Jahangir, M. An initial investigation into using convolutional neural networks for classification
of drones. In Proceedings of the 2020 IEEE International Radar Conference (RADAR), Washington, DC, USA, 28–30 April 2020;
pp. 618–623.
168. Gérard, J.; Tomasik, J.; Morisseau, C.; Rimmel, A.; Vieillard, G. Micro-Doppler signal representation for drone classification by
deep learning. In Proceedings of the 2020 28th European Signal Processing Conference (EUSIPCO), Amsterdam, The Netherlands,
18–21 January 2021; pp. 1561–1565.
169. Yang, Y.; Yang, F.; Sun, L.; Xiang, T.; Lv, P. Echoformer: Transformer architecture based on radar echo characteristics for UAV
detection. IEEE Sens. J. 2023, 23, 8639–8653. [CrossRef]
170. Raval, D.; Hunter, E.; Hudson, S.; Damini, A.; Balaji, B. Convolutional neural networks for classification of drones using radars.
Drones 2021, 5, 149. [CrossRef]
171. Liu, J.; Xu, Q.Y.; Chen, W.S. Classification of bird and drone targets based on motion characteristics and random forest model
using surveillance radar data. IEEE Access 2021, 9, 160135–160144. [CrossRef]
172. Fraser, B.; Perrusquía, A.; Panagiotakopoulos, D.; Guo, W. Hybrid Deep Neural Networks for Drone High Level Intent
Classification using Non-Cooperative Radar Data. In Proceedings of the 2023 3rd International Conference on Electrical,
Computer, Communications and Mechatronics Engineering (ICECCME), Tenerife, Canary Islands, Spain, 19–21 July 2023; pp. 1–6.
173. Rojhani, N.; Passafiume, M.; Sadeghibakhi, M.; Collodi, G.; Cidronali, A. Model-Based Data Augmentation Applied to Deep
Learning Networks for Classification of Micro-Doppler Signatures Using FMCW Radar. IEEE Trans. Microw. Theory Tech. 2023,
71, 2222–2236. [CrossRef]
174. Hasan, M.M.; Chakraborty, M.; Raj, A.A.B. A Hyper-Parameters-Tuned R-PCA+SVM Technique for sUAV Targets Classification
Using the Range-/Micro-Doppler Signatures. IEEE Trans. Radar Syst. 2023, 1, 623–631. [CrossRef]
175. Gomez, S.; Johnson, A.; P, R.B. Classification of Radar Targets Based on Micro-Doppler Features Using High Frequency High
Resolution Radar Signatures. In Proceedings of the 2023 International Conference on Network, Multimedia and Information
Technology (NMITCON), Bengaluru, India, 1–2 September 2023; pp. 1–5.
176. Kumawat, H.C.; Chakraborty, M.; Raj, A.A.B.; Dhavale, S.V. DIAT-µSAT: Small aerial targets’ micro-Doppler signatures and their
classification using CNN. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [CrossRef]
177. Hügler, P.; Roos, F.; Schartel, M.; Geiger, M.; Waldschmidt, C. Radar taking off: New capabilities for UAVs. IEEE Microw. Mag.
2018, 19, 43–53. [CrossRef]
178. Semkin, V.; Haarla, J.; Pairon, T.; Slezak, C.; Rangan, S.; Viikari, V.; Oestges, C. Analyzing radar cross section signatures of diverse
drone models at mmWave frequencies. IEEE Access 2020, 8, 48958–48969. [CrossRef]
179. Database, R.D. Real Doppler RAD-DAR Database. Available online: https://www.kaggle.com/datasets/iroldan/real-doppler-
raddar-database (accessed on 30 December 2023).
180. Whelan, J.; Sangarapillai, T.; Minawi, O.; Almehmadi, A.; El-Khatib, K. UAV Attack Dataset. IEEE DataPort 2020. [CrossRef]
181. Keipour, A.; Mousaei, M.; Scherer, S. Alfa: A dataset for uav fault and anomaly detection. Int. J. Robot. Res. 2021, 40, 515–520.
[CrossRef]
182. Street, M. Drone Identification and Tracking. Available online: https://www.kaggle.com/competitions/icmcis-drone-tracking
(accessed on 24 December 2023).
183. Rodrigues, T.; Patrikar, J.; Choudhry, A.; Feldgoise, J.; Arcot, V.; Gahlaut, A.; Lau, S.; Moon, B.; Wagner, B.; Scott Matthews, H.;
et al. Data Collected with Package Delivery Quadcopter Drone; Carnegie Mellon University: Pittsburgh, PA, USA, 2020; pp. 1–15.
184. Perrusquia, A.; Tovar, C.; Soria, A.; Martinez, J.C. Robust controller for aircraft roll control system using data flight parameters.
In Proceedings of the 2016 13th International Conference on Electrical Engineering, Computing Science and Automatic Control
(CCE), Mexico City, Mexico, 26–30 September 2016; pp. 1–5.
185. Zhang, W.; Li, G. Detection of multiple micro-drones via cadence velocity diagram analysis. Electron. Lett. 2018, 54, 441–443.
[CrossRef]
186. Lee, H.; Han, S.; Byeon, J.I.; Han, S.; Myung, R.; Joung, J.; Choi, J. CNN-Based UAV Detection and Classification Using Sensor
Fusion. IEEE Access 2023, 11, 68791–68808. [CrossRef]
187. Rai, P.K.; Idsøe, H.; Yakkati, R.R.; Kumar, A.; Khan, M.Z.A.; Yalavarthy, P.K.; Cenkeramaddi, L.R. Localization and activity
classification of unmanned aerial vehicle using mmWave FMCW radars. IEEE Sens. J. 2021, 21, 16043–16053. [CrossRef]
188. Le, H.; Doan, V.S.; Nguyen, H.H.; Huynh-The, T.; Le-Ha, K.; Hoang, V.P.; Le, D.P. Micro-Doppler-radar-based UAV detection
using inception-residual neural network. In Proceedings of the 2020 International Conference on Advanced Technologies for
Communications (ATC), Nha Trang, Vietnam, 2–6 May 2020; pp. 177–181.
189. Aouto, A.; Jun, T.; Lee, J.M.; Kim, D.S. DopeNet: Range–Doppler Radar-based UAV Detection Using Convolutional Neural
Network. In Proceedings of the 2023 Fourteenth International Conference on Ubiquitous and Future Networks (ICUFN), Paris,
France, 4–7 July 2023; pp. 889–893.
190. Ezuma, M.; Anjinappa, C.K.; Semkin, V.; Guvenc, I. Comparative analysis of radar cross section based UAV classification
techniques. arXiv 2021, arXiv:2112.09774.
191. Kim, B.K.; Kang, H.S.; Lee, S.; Park, S.O. Improved Drone Classification Using Polarimetric Merged-Doppler Images. IEEE Geosci.
Remote Sens. Lett. 2021, 18, 1946–1950. [CrossRef]
192. Ezuma, M.C. Uav Detection and Classification Using Radar, Radio Frequency and Machine Learning Techniques; North Carolina State
University: Raleigh, NC, USA, 2022.
193. Schwalb, J.; Menon, V.; Tenhundfeld, N.; Weger, K.; Mesmer, B.; Gholston, S. A Study of Drone-Based AI for Enhanced Human-AI
Trust and Informed Decision Making in Human-AI Interactive Virtual Environments. In Proceedings of the 2022 IEEE 3rd
International Conference on Human-Machine Systems (ICHMS), Orlando, FL, USA, 17–19 November 2022; pp. 1–6.
194. Bajracharya, R.; Shrestha, R.; Kim, S.; Jung, H. 6G NR-U based wireless infrastructure UAV: Standardization, opportunities,
challenges and future scopes. IEEE Access 2022, 10, 30536–30555. [CrossRef]
195. Samaras, S.; Magoulianitis, V.; Dimou, A.; Zarpalas, D.; Daras, P. UAV classification with deep learning using surveillance radar
data. In Proceedings of the International Conference on Computer Vision Systems; Springer: Berlin/Heidelberg, Germany, 2019; pp.
744–753. [CrossRef]
196. Arulkumaran, K.; Deisenroth, M.P.; Brundage, M.; Bharath, A.A. Deep reinforcement learning: A brief survey. IEEE Signal
Process. Mag. 2017, 34, 26–38. [CrossRef]
197. Azar, A.T.; Koubaa, A.; Ali Mohamed, N.; Ibrahim, H.A.; Ibrahim, Z.F.; Kazim, M.; Ammar, A.; Benjdira, B.; Khamis, A.M.;
Hameed, I.A.; et al. Drone deep reinforcement learning: A review. Electronics 2021, 10, 999. [CrossRef]
198. Masadeh, A.; Alhafnawi, M.; Salameh, H.A.B.; Musa, A.; Jararweh, Y. Reinforcement learning-based security/safety UAV system for intrusion detection under dynamic and uncertain target movement. IEEE Trans. Eng. Manag. 2022. [CrossRef]
199. Çetin, E.; Barrado, C.; Pastor, E. Counter a drone in a complex neighborhood area by deep reinforcement learning. Sensors 2020,
20, 2320. [CrossRef] [PubMed]
200. Akhloufi, M.A.; Arola, S.; Bonnet, A. Drones chasing drones: Reinforcement learning and deep search area proposal. Drones 2019,
3, 58. [CrossRef]
201. Çetin, E.; Barrado, C.; Pastor, E. Countering a Drone in a 3D Space: Analyzing Deep Reinforcement Learning Methods. Sensors
2022, 22, 8863. [CrossRef]
202. Svanström, F.; Englund, C.; Alonso-Fernandez, F. Real-time drone detection and tracking with visible, thermal and acoustic
sensors. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021;
pp. 7265–7272.
203. Liu, H.; Wei, Z.; Chen, Y.; Pan, J.; Lin, L.; Ren, Y. Drone detection based on an audio-assisted camera array. In Proceedings of
the 2017 IEEE Third International Conference on Multimedia Big Data (BigMM), Laguna Hills, CA, USA, 19–21 April 2017; pp.
402–406.
204. Park, S.; Shin, S.; Kim, Y.; Matson, E.T.; Lee, K.; Kolodzy, P.J.; Slater, J.C.; Scherreik, M.; Sam, M.; Gallagher, J.C.; et al. Combination
of radar and audio sensors for identification of rotor-type unmanned aerial vehicles (uavs). In Proceedings of the 2015 IEEE
SENSORS, Busan, Republic of Korea, 1–4 November 2015; pp. 1–4.
205. Diamantidou, E.; Lalas, A.; Votis, K.; Tzovaras, D. Multimodal deep learning framework for enhanced accuracy of UAV detection.
In Proceedings of the Computer Vision Systems: 12th International Conference, ICVS 2019, Thessaloniki, Greece, 23–25 September
2019; Proceedings 12; Springer: Berlin/Heidelberg, Germany, 2019; pp. 768–777.
206. Mehta, V.; Dadboud, F.; Bolic, M.; Mantegh, I. A Deep Learning Approach for Drone Detection and Classification Using Radar
and Camera Sensor Fusion. In Proceedings of the 2023 IEEE Sensors Applications Symposium (SAS), Ottawa, ON, Canada, 18–20
July 2023; pp. 1–6.
207. Kim, J.; Lee, D.; Kim, Y.; Shin, H.; Heo, Y.; Wang, Y.; Matson, E.T. Deep Learning Based Malicious Drone Detection Using Acoustic
and Image Data. In Proceedings of the 2022 Sixth IEEE International Conference on Robotic Computing (IRC), Italy, 5–7 December
2022; pp. 91–92.
208. DroneDetectionThesis. Drone Detection Dataset. Available online: https://github.com/DroneDetectionThesis/Drone-detection-
dataset (accessed on 30 December 2023).
209. Rahimi, A.M.; Ruschel, R.; Manjunath, B. Uav sensor fusion with latent-dynamic conditional random fields in coronal plane
estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30
June 2016; pp. 4527–4534.
210. Xie, X.; Yang, W.; Cao, G.; Yang, J.; Zhao, Z.; Chen, S.; Liao, Q.; Shi, G. Real-time vehicle detection from UAV imagery. In
Proceedings of the 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM), Xi’an, China, 13–16 September
2018; pp. 1–5.
211. Ntizikira, E.; Lei, W.; Alblehai, F.; Saleem, K.; Lodhi, M.A. Secure and Privacy-Preserving Intrusion Detection and Prevention in
the Internet of Unmanned Aerial Vehicles. Sensors 2023, 23, 8077. [CrossRef] [PubMed]
212. Silva, S.H.; Rad, P.; Beebe, N.; Choo, K.K.R.; Umapathy, M. Cooperative unmanned aerial vehicles with privacy preserving deep
vision for real-time object identification and tracking. J. Parallel Distrib. Comput. 2019, 131, 147–160. [CrossRef]
213. Lee, E.; Seo, Y.D.; Oh, S.R.; Kim, Y.G. A Survey on Standards for Interoperability and Security in the Internet of Things. IEEE
Commun. Surv. Tutor. 2021, 23, 1020–1047. [CrossRef]
214. Rodriguez-Ramos, A.; Rodriguez-Vazquez, J.; Sampedro, C.; Campoy, P. Adaptive inattentional framework for video object
detection with reward-conditional training. IEEE Access 2020, 8, 124451–124466. [CrossRef]
215. Jiang, N.; Wang, K.; Peng, X.; Yu, X.; Wang, Q.; Xing, J.; Li, G.; Zhao, J.; Guo, G.; Han, Z. Anti-UAV: A large multi-modal
benchmark for UAV tracking. arXiv 2021, arXiv:2101.08466.
216. Park, J.; Kim, D.H.; Shin, Y.S.; Lee, S.h. A comparison of convolutional object detectors for real-time drone tracking using a PTZ
camera. In Proceedings of the 2017 17th International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic
of Korea, 18–21 October 2017; pp. 696–699.
217. Baltrušaitis, T.; Ahuja, C.; Morency, L.P. Multimodal machine learning: A survey and taxonomy. IEEE Trans. Pattern Anal. Mach.
Intell. 2018, 41, 423–443. [CrossRef]
218. Wang, P.; Wang, L.; Leung, H.; Zhang, G. Super-resolution mapping based on spatial-spectral correlation for spectral imagery.
IEEE Trans. Geosci. Remote Sens. 2020, 59, 2256–2268. [CrossRef]
219. Huang, B.; Zhao, B.; Song, Y. Urban land-use mapping using a deep convolutional neural network with high spatial resolution
multispectral remote sensing imagery. Remote Sens. Environ. 2018, 214, 73–86. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.