Review
A Holistic Review of Machine Learning Adversarial Attacks in
IoT Networks
Hassan Khazane 1, Mohammed Ridouani 1, Fatima Salahdine 2,* and Naima Kaabouch 3,*
1 RITM Laboratory, CED Engineering Sciences, ENSEM, Hassan II University, Casablanca 20000, Morocco;
hassan.khazane-etu@etu.univh2c.ma (H.K.); mohammed.ridouani@etu.univh2c.ma (M.R.)
2 Department of Electrical and Computer Engineering, University of North Carolina at Charlotte,
Charlotte, NC 28223, USA
3 School of Electrical and Computer Science, University of North Dakota, Grand Forks, ND 58202, USA
* Correspondence: fsalahdi@uncc.edu (F.S.); naima.kaabouch@und.edu (N.K.)
Abstract: With the rapid advancements and notable achievements across various application domains,
Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem.
Among these use cases is IoT security, where numerous systems are deployed to identify or thwart
attacks, including intrusion detection systems (IDSs), malware detection systems (MDSs), and device
identification systems (DISs). Machine Learning-based (ML-based) IoT security systems can fulfill
several security objectives, including detecting attacks, authenticating users before they gain access
to the system, and categorizing suspicious activities. Nevertheless, ML faces numerous challenges,
such as those resulting from the emergence of adversarial attacks crafted to mislead classifiers. This
paper provides a comprehensive review of the body of knowledge about adversarial attacks and
defense mechanisms, with a particular focus on three prominent IoT security systems: IDSs, MDSs,
and DISs. The paper starts by establishing a taxonomy of adversarial attacks within the context of IoT.
Then, various methodologies employed in the generation of adversarial attacks are described and
classified within a two-dimensional framework. Additionally, we describe existing countermeasures
for enhancing IoT security against adversarial attacks. Finally, we explore the most recent literature
on the vulnerability of three ML-based IoT security systems to adversarial attacks.
Figure 1. Generic process of adversarial attack.

Numerous surveys have been published that explore how adversarial attacks affect the performance of ML-based systems in diverse domains, including, but not limited to, computer vision [16–19], natural language processing [20,21], and speech recognition [22]. The majority of existing surveys are related to adversarial attacks against ML in the domain of computer vision [16–18] and traditional network security [23,24]. However, these attacks have received less attention in the field of IoT network security. Figure 2a illustrates the growing focus of the research community on adversarial attacks. In contrast, Figure 2b highlights the low number of published research in the context of IoT ML-based security.
Figure 2. Total number of papers related to Adversarial Attacks published in recent years: (a) In all domains; (b) In the IoT domain only. The raw data source is from [25] and it is completed based on our research findings in the IoT domain from 2019 to July 2023. The forecast was projected through quadratic curve modeling.
In the field of traditional network security, the authors of [24] presented a survey of the current research landscape regarding the ML vulnerability to adversarial attacks. The survey reviewed different varieties of adversarial attacks encompassing evasion attacks and poisoning attacks and discussed their impact on various traditional network security ML-based models such as IDSs and MDSs. The study also outlined various defensive mechanisms that have been suggested to minimize the effects of adversarial attacks. However, the survey's main focus was on traditional network security, while the security of IoT networks was only briefly discussed in a very short paragraph with a single reference to the IoT context literature. Jmila et al. [23] provided a comparative study of ML-based IDS vulnerability to adversarial attacks and paid more attention to the so-called shallow models (non-deep learning models). The authors assessed the resilience of seven shallow ML-based models and one Deep Neural Network (DNN) against a variety of adversarial attacks commonly employed in the state of the art, using the NSL-KDD [26] and UNSW-NB15 [27] datasets. The
survey paid minimal attention to adversarial attacks in the field of IoT security, offering
only four references without any accompanying discussion. Alatwi et al. [28] discussed
adversarial black-box attacks against IDS and provided a survey of recent research on
traditional network security and Software-defined Networking (SDN). Within its scope, the
survey focused solely on reviewing research studies that employed adversarial generation
attacks using different variants of Generative Adversarial Networks (GAN). Meanwhile,
it overlooked the most widely used adversarial attack methods and defense strategies.
Furthermore, limiting this survey to black-box attacks was of interest, as this setting closely aligns with the most realistic circumstances for the adversary. However, studying white-box attacks could also be beneficial for IDS manufacturers, who have complete access to their systems and seek to assess their resilience against adversarial attacks, as well as in the scenario of insider attacks [29,30], where attackers can have access to sensitive resources and system information and the protection against white-box attacks can be more challenging.
In the IoT network context, only a handful of published surveys have discussed
adversarial attacks against ML-based security systems. For instance, in the survey in [30],
the authors’ primary focus was to review and categorize the existing body of information on
adversarial attacks and defense techniques in IoT scholarly articles, with a unique emphasis
on insider adversarial attacks. The authors presented a taxonomy of adversarial attacks,
from an internal perspective, targeting ML-based systems in an IoT context. Additionally,
they offered real-world application examples to illustrate this concept. The article also
discussed defensive measures that can be used to resist these kinds of attacks in IoT.
However, the external (black-box) adversarial attacks, which represent a realistic scenario,
are not discussed, hence the Model Extraction attacks were not covered in the survey as the
insider adversary usually has full knowledge of the ML model. In [31], the authors surveyed
existing IDSs used for securing IoT-based smart environments such as Network Intrusion
Detection Systems (NIDS) and Hybrid Intrusion Detection Systems (HIDS). They provided
benefits and drawbacks of diverse anomaly-based intrusion detection methods, such as signal processing models, protocol models, payload models, rule-based models, machine learning, and others, where machine learning techniques received only a brief overview without any discussion of the vulnerability of those ML-based systems to adversarial attacks. The study
in [32] presented a thorough examination of ML-based attacks on IoT networks, offering a
classification of these attacks based on the employed ML algorithm. The authors sought
to explore a range of cyberattacks that integrated machine learning algorithms. However,
adversarial attacks received only a brief discussion as one category of ML-based attacks,
with mention of three adversarial attacks: the Jacobian-based Saliency Map Attack (JSMA),
DeepFool, and the Carlini and Wagner (C&W) attack, as well as defense methods but
they lack in-depth discussion. In [33], Li et al. surveyed adversarial threats that exist
within the context of Cyber-Physical Systems (CPS). CPS is a subset of IoT, where the
connection between cyberspace and physical space is provided by actuators and sensors.
As a result, the work presented in [33] was limited to sensor-based threats only, which are a
subset of network-based and side-channel attacks in the attack taxonomy of IoT networks.
He et al. [34] explored the disparity in adversarial learning within the fields of Network
Intrusion Detection Systems (NIDS) and Computer Vision. They accomplished this by
reviewing the literature on adversarial attacks and defenses against IDS, with a special
focus on IDS in traditional networks. The authors limited their study to evasion attacks
only, considering that NIDS are typically created in secure environments, in which case the
external attackers lack access to the training data set. Furthermore, the authors provided a
taxonomy related to NIDS and not to adversarial attacks themselves.
In light of the information presented above and summarized in Table 1, there is a
notable scarcity of published surveys specifically addressing adversarial attacks against
ML-based security systems in IoT networks. The limited number of existing surveys tend
to have a narrow focus on the issue, with some solely concentrating on ML-based IDSs,
while disregarding the wider scope, which encompasses ML-based MDSs and ML-based
DISs. Also, some have been focusing primarily on insider threats while neglecting external
ones. Additionally, certain surveys exclusively examine black-box attacks, overlooking
white-box attacks.
To bridge these gaps, this survey offers a comprehensive review of the current research
landscape regarding adversarial attacks on IoT networks, with a special emphasis on explor-
ing the vulnerabilities of ML-based IDSs, MDSs, and DISs. The survey also describes and
classifies various adversarial attack generation methods and adversarial defense methods.
To the best of our knowledge, this survey will be the first attempt of its kind to
comprehensively discuss the holistic view of adversarial attacks against ML-based IDSs,
MDSs, and DISs in the context of IoT, making a significant contribution to the field. This
paper’s contributions are outlined as follows:
1. Revising and redefining the adversarial attack taxonomy for ML-based IDS, MDS,
and DIS in the IoT context.
2. Proposing a novel two-dimensional-based classification of adversarial attack genera-
tion methods.
3. Proposing a novel two-dimensional-based classification of adversarial defense
mechanisms.
4. Providing intriguing insights and technical specifics on state-of-the-art adversarial
attack methods and defense mechanisms.
5. Conducting a holistic review of the recent literature on adversarial attacks within
three prominent IoT security systems: IDSs, MDSs, and DISs.
The rest of this paper is organized as follows: Section 2 gives background about IoT
network architecture and its privacy and security perspective. Section 3 redefines the
threat model taxonomy in the IoT network context. Section 4 gives an overview of the
most popular adversarial attack generation methods. Section 5 elaborates on the existing
adversarial defense methods. Section 6 discusses the recent studies related to adversarial
attacks against ML-based security systems in IoT networks. Section 7 ends the paper with
challenges and directions for future works, and Section 8 concludes the paper.
2. Background

2.1. Security and Privacy Overview

In the last twenty years, the potential applications of IoT have been steadily multiplying across various sectors paving the way for new business prospects [2,37,38]. Yet, the emergence of IoT has simultaneously presented manufacturers and consumers with new challenges [2,3,39]. One of the principal challenges lies in safeguarding the security and privacy of both the IoT objects and the data they produce. Ensuring the security of IoT networks is a complicated and arduous task due to the inherent intricacies within the IoT network characterized by the interconnection of multiple heterogeneous devices from different locations and exchanging information with each other through various network technologies. As a result, IoT systems are notably vulnerable to privacy and security threats.

Before delving into those security threats in the IoT landscape, it is pivotal to explore its security and privacy features. Overlooking these security measures can introduce vulnerabilities into the framework. Through a thorough review of the literature on IoT security [40–43], these features have been pinpointed. Figure 3 encapsulates the key security and privacy features of the IoT infrastructure.

Figure 3. Key security and privacy features of IoT network.

Traditional security methods, which employ a predefined set of strategies and rules, have exhibited several drawbacks when implementing specific features. They often overlook new varieties of attacks and are restricted to pinpointing certain types of threats. Hence, the emergence of advanced security solutions such as solutions powered by artificial intelligence. The utilization of ML algorithms has the potential to offer security solutions for IoT networks, ultimately improving their reliability and accessibility. ML-based security models can process large amounts of data in real time and continuously learn from generated training and test data, which increases their accuracy as well as enables them to proactively anticipate new attacks by drawing insights from previous incidents. Our survey will limit the study to contemporary research on the vulnerability of three ML-based IoT security systems: Intrusion Detection System (IDS), Malware Detection System (MDS), and Device Identification System (DIS).

2.2. Internet of Things Overview

The IoT is one of the cutting-edge technologies in Industry 4.0, where the term "Things" refers to smart devices or objects interconnected through wireless networks [44,45]. These "Things" range from everyday household objects to advanced industrial instruments capable of sensing, gathering, transmitting, and analyzing data. Such capabilities facilitate smart decision-making and services enhancing both human life quality and industrial production.

At present, there is no agreed-upon structure for IoT architecture. The fundamental framework of IoT comprises three layers: the perception layer, the network layer, and the application layer [46]. Yet, based on the requirements for data processing and making intelligent decisions, a support or middleware layer, positioned between the network and application layers, was later deemed to be essential [47]. Different technologies are utilized within each of these layers, introducing various challenges and security concerns [2,48]. Figure 4 shows the four-layered IoT architecture showing various devices, technologies, and applications along with possible security threats at each layer.

Figure 4. Four-layered IoT architecture and corresponding security issues.
• Perception layer: The bottom layer of any IoT framework involves "things" or endpoint objects that serve as the bridge between the physical and the digital worlds. The perception or sensing layer refers to the physical layer, encompassing sensors and actuators capable of gathering information from the real environment and transmitting it through wireless or wired connections. This layer can be vulnerable to security threats such as insertion of fake data, node capturing, malicious code, side-channel attacks, jamming attacks, sniffing or snooping, replay attacks, and sleep deprivation attacks.
• Network layer: It is known as the second layer connecting the perception layer and middleware layer. It is also called the communication layer because it acts as a communication bridge, enabling the transfer of data acquired in the perception layer to other interconnected devices or a processing unit, conversely. This transmission utilizes various network technologies like LTE, 5G, Wi-Fi, infrared, etc. The data transfer is executed securely, ensuring the confidentiality of the obtained information. Nonetheless, persistent security vulnerabilities can manifest as data transit attacks, phishing, identity authentication and encryption attacks, and distributed denial-of-service (DDoS/DoS) attacks.
• Middleware layer: It is also commonly known as the support layer or processing layer. It is the brain of the IoT ecosystem, and its primary functions are data processing, storage, and intelligent decision-making. The middleware layer is the best candidate to implement advanced IoT security mechanisms, such as ML-based security systems, thanks to its high computation capacity. Therefore, it is also a target of adversarial attacks and other various attacks such as SQL injection attacks, cloud malware injection, insider attacks, signature wrapping attacks, man-in-the-middle attacks, and cloud flooding attacks.
• Application layer: It is the uppermost layer within the IoT architecture. It serves as the user interface to monitor IoT devices and observe data through various application services and tools, such as dashboards and mobile applications, as well as applying various control activities by the end user. There are various use cases for IoT applications such as smart homes and cities, smart logistics and transportation, and smart agriculture and manufacturing. This layer is also subject to various security threats such as sniffing attacks, service interruption attacks, malicious code attacks, reprogramming attacks, access control attacks, data breaches, application vulnerabilities, and software bugs.
because they operate under the assumption that the attacker can only leverage system
interfaces that are readily accessible for typical use.
• Testing phase: In testing, adversarial attacks do not alter the training data or directly
interfere with the model. Instead, they seek to make the model produce incorrect
results by maliciously modifying input data. In addition to the level of information
at the adversary's disposal and the attacker's knowledge, the efficacy of these at-
tacks depends on three main capabilities: adaptive attack, non-adaptive attack, and
strict attack.
• Adaptive Attack: The adversary is crafting an adaptive malicious input that exploits
the weak points of the ML model to mistakenly classify the malicious samples as
benign. The adaptiveness can be achieved either by meticulously designing a sequence
of input queries and observing their outputs in a black-box scenario or through
accessing the ML model information and altering adversarial example methods that
maximize the error rate in case of a white-box scenario.
• Non-adaptive attack: The adversary’s access is restricted solely to the training data
distribution of the target model. The attacker starts by building a local model, choosing
a suitable training procedure, and training it using samples from data distribution to
mimic the target classifier’s learned model. Leveraging this local model, the adversary
creates adversarial examples and subsequently applies these manipulated inputs
against the target model to induce misclassifications.
• Strict Attack: The attacker lacks access to the training dataset and is unable to dy-
namically alter the input request to monitor the model’s response. If the attacker
attempts to request valid input samples and introduces slight perturbations to observe
the output label, this activity most probably will be flagged by the target ML model as
a malicious attack. Hence, the attacker is constrained to perform a restricted number
of closely observed queries, presuming that the target ML system will only detect the
malicious attacks after a specific number of attempts.
• Deployment phase: Adversarial attacks during the deployment or production phase
represent the most realistic scenario where the attacker’s knowledge of the target
model is limited to its outputs, which correspond to a black-box scenario. Hence,
the attack’s success during deployment time relies on two main capabilities, the pre-
sumption of transferability or the feedback to inquiries. Consequently, the attacker’s
capability during the deployment phase can be categorized into two distinct groups,
namely transfer-based attack and query-based attack.
• Transfer-based Attack: The fundamental concept underlying transfer-based attack
revolves around the creation of adversarial examples on local surrogate models in
such a way that these adversarial examples can effectively deceive the remote target
model as well. The transferability property encompasses two types: task-specific
transferability which applies to scenarios where both the remote victim model and the
local model are concerned with the same task, for instance, classification. Cross-task
transferability arises when the remote victim model and the local model are engaged
in diverse tasks, such as classification and detection.
• Query-based Attack: The core idea behind query-based attacks lies in the direct
querying of the target model and leveraging the outputs to optimize adversarial
samples. To do this, the attacker queries the target model’s output by providing inputs
and observing the corresponding results, which can take the form of class labels or
score values. Consequently, query-based attacks can be further categorized into two
distinct types: decision-based and score-based.
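As an illustration of the score-based variant just described, the following is a minimal, hypothetical Python sketch; it assumes nothing more than black-box access to a query_fn that returns the target model's class scores, and it greedily keeps random perturbations that lower the score of the true class.

```python
import numpy as np

def score_based_attack(query_fn, x, true_class, eps=0.05, n_queries=500, seed=0):
    """Illustrative score-based query attack: random search guided only by the
    scores returned by the target model (black-box access through query_fn)."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    best = query_fn(x_adv)[true_class]
    for _ in range(n_queries):
        # Propose a small random perturbation and keep the input in [0, 1].
        candidate = np.clip(x_adv + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        score = query_fn(candidate)[true_class]
        if score < best:  # keep only queries that reduce the true-class score
            x_adv, best = candidate, score
    return x_adv
```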
Figure 6. Classification of adversarial attack generation methods.
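The single-step FGSM perturbation these definitions refer to is commonly formulated in the literature as:

X_adv = X + ε · sign(∇_X J(θ, X, Y))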
where ε represents a small value and ∇ denotes the gradient of loss function J relative
to the original input data (i.e., image) X, the original input class label Y, and the model
parameters θ.
The FGSM algorithm can be summarized in three steps. The first step computes the
gradient of the loss relative to the inputs, the second step scales the gradient to have a
maximum magnitude of ε, and the third step adds the scaled gradient to the input data
(i.e., image) X to create the adversarial example Xadversarial .
Although this method is fast for generating AEs, its effectiveness is lower than that of
other state-of-the-art methods for generating adversarial attacks because it generates only
one AE per input data point and may not be able to explore the full space of possible AEs.
Additionally, being a white-box attack, it assumes full knowledge of the targeted
model. This requirement limits its applicability in scenarios where the adversary possesses
restricted access to the model’s internal details, but it remains useful for manufacturers to
assess the resilience of their ML models against adversarial attacks as well as in scenarios
of insider attacks [36].
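As an illustration of the three-step procedure summarized above, the following is a minimal PyTorch sketch of FGSM; the model, loss function, and the [0, 1] input range are assumptions of the example rather than details from any specific IoT system.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=0.03):
    """Single-step FGSM sketch: move the input by eps along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)            # J(theta, X, Y)
    loss.backward()                        # gradient of the loss w.r.t. the input
    x_adv = x + eps * x.grad.sign()        # scaled, signed gradient added to the input
    return x_adv.clamp(0.0, 1.0).detach()  # keep the example in a valid input range
```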
perturbation for each pixel. The formula can be summarized by the following Expression (2).
X_0^adv = X,   X_{N+1}^adv = Clip_{X,ε} { X_N^adv + α · sign(∇_X J(X_N^adv, Y)) }        (2)
where J denotes the loss function, X is the original input data (i.e., image), Y is the original
input class label, N denotes the iteration count and α is the constant that controls the
magnitude of the disturbance. The Clip {} function guarantees that the crafted AE remains
within the space of both the ε ball (i.e., [x − ε, x + ε]) and the input space.
The BIM algorithm involves starting with clean data (i.e., image) as the initial input.
The gradient of the loss function is computed relative to the input, and a small perturbation
is added along the gradient direction, scaled by a defined step size. The perturbed input is
then clipped to ensure it stays within a valid range. These steps are iterated until a desired
condition is met or for a set number of iterations.
Although this method is simple to generate AEs, it might demand an extensive series
of iterations to find the most effective and optimal AEs, and this may be computationally
expensive and may not converge for all functions or initial assumptions.
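A minimal PyTorch sketch of the iterative procedure in Expression (2) is shown below; the step size, iteration count, and [0, 1] input range are illustrative assumptions.

```python
import torch

def bim_attack(model, loss_fn, x, y, eps=0.03, alpha=0.005, n_iter=10):
    """BIM sketch: repeated small FGSM steps, clipped to the eps-ball around the original x."""
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Clip{} step: stay within [x - eps, x + eps] and within the valid input domain.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```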
here, C_ε is the constraint set where C_ε = {z : d(x, z) < ε}, ∏_{C_ε} denotes the projection onto the set C_ε, and α is the step size. For example, the projection ∏_{C_ε}(z) for d(x, z) = ‖x − z‖_∞ is given by clipping z to [x − ε, x + ε]. J denotes the loss function of the model, X is the original input data (i.e., image), Y is the original input class label, N denotes the iteration count, and α is a constant that regulates the perturbation magnitude.
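The PGD update that these definitions describe is commonly written as:

X_0^adv = X,   X_{N+1}^adv = ∏_{C_ε} ( X_N^adv + α · sign(∇_X J(X_N^adv, Y)) )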
PGD ensures that the solution falls within the feasible space, making it suitable for
solving constrained optimization problems. However, the projection step can be computa-
tionally expensive, particularly for complex constraint sets.
The attack formalizes the optimization problem depicted in Equation (4), where the primary aim is to minimize the perturbation r introduced to the original input (i.e., image) while considering the L2 distance.
here, X denotes the original input data (i.e., image), r is the perturbation sample within the input domain D, f is the classifier's loss function, and l is the incorrect predicted label (l ≠ h(X)) of the adversarial example X' = X + r.
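A common box-constrained form of this problem, consistent with the definitions above, is:

min_r  c · ‖r‖_2 + f(X + r, l)   subject to   X + r ∈ D

where c > 0 is a weighting constant introduced here only for illustration, balancing the perturbation size against the loss toward the incorrect label l.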
By optimizing for the L2 distance and prioritizing precision over speed, the L-BFGS
attack aims to generate perturbations that result in small changes across all dimensions of
the input, rather than focusing on maximizing the change in a single dimension. Hence,
this method excels in generating AEs, yet its feasibility is limited by a computationally
demanding algorithm to explore an optimal solution.
arg min_{δ_X} ‖δ_X‖   s.t.   f(X') = f(X + δ_X) = l'        (5)
calculating the positive derivative for a given input sample X, the Jacobian matrix is computed as expressed by the following Formula (6):

J_f(X) = ∂f(X)/∂X = [ ∂f_j(X) / ∂x_i ]_{i ∈ 1…M; j ∈ 1…N}        (6)
When compared to FGSM, this technique demands more computational power due to
the computation of saliency values. Nonetheless, it significantly limits the number of
perturbed features, resulting in the generation of AEs that appear to be more similar to the
original sample.
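As a rough Python sketch of the saliency computation behind this technique (illustrative only; it assumes a model mapping a flat feature vector to class scores):

```python
import torch
from torch.autograd.functional import jacobian

def jsma_saliency(model, x, target):
    """Per-feature saliency toward `target`: positive only for features that raise the
    target class score while lowering the combined score of all other classes."""
    J = jacobian(model, x)                 # shape: (n_classes, n_features)
    grad_target = J[target]
    grad_others = J.sum(dim=0) - grad_target
    return torch.where(
        (grad_target < 0) | (grad_others > 0),
        torch.zeros_like(grad_target),
        grad_target * grad_others.abs(),
    )
```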
here, r is the minimal perturbation, δ is the robustness of the affine classifier f to the original input X for f(x) = Wᵀ·x + b, where W is the weight of the affine classifier and b is the bias of the affine classifier.
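For this affine case, the closed-form minimal perturbation used by DeepFool is usually given by:

r*(X) = − ( f(X) / ‖W‖_2² ) · W,   with robustness δ(X) = |f(X)| / ‖W‖_2,

i.e., the orthogonal projection of X onto the decision hyperplane f(x) = 0.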
As a white-box attack, the DFA method offers an efficient and precise approach to assess
the resilience of ML models. It achieves this by generating adversarial samples with smaller
perturbation sizes compared to those generated by FGSM and JSMA methods while having
higher deception ratios. However, it is more computationally expensive than both.
where F(X’) ∈ RK is the probability distribution of the back-box output, K is the number of
classes and k ≥ 0 serves as a tuning parameter to enhance attack transferability.
The approximated gradients, defined as ĝi , are computed using the finite differences
method, also called the symmetric difference quotient, as per the Expression (12).
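In its standard form, this coordinate-wise estimate reads:

ĝ_i := ∂f(x)/∂x_i ≈ ( f(x + h·e_i) − f(x − h·e_i) ) / (2h)        (12)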
with h being a small constant and e_i representing the i-th component of the standard basis vector. ZOO can be used in Newton's method with the Hessian estimate ĥ_i as per the following Expression (13).

ĥ_i := ∂²f(x)/∂x_i² ≈ ( f(x + h·e_i) − 2f(x) + f(x − h·e_i) ) / h²        (13)
Although this method has proven its efficacy in estimating the gradient and Hessian, achieving a performance similar to that of the C&W attack without requiring the training of substitute models or information on the target classifier, it necessitates a considerable number of queries to the model, which can add significant computational costs and time requirements and may lead to the detection of the attacker in real scenarios.
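A minimal NumPy sketch of this coordinate-wise, query-based gradient estimation is given below; f stands for the scalar attack objective evaluated through black-box queries, and the optional coordinate subsampling is an assumption used here to limit the number of queries.

```python
import numpy as np

def zoo_gradient(f, x, h=1e-4, n_coords=None, seed=0):
    """Estimate df/dx_i with the symmetric difference quotient, two queries per coordinate."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x, dtype=float)
    idx = rng.choice(x.size, size=n_coords or x.size, replace=False)
    for i in idx:
        e_i = np.zeros_like(x, dtype=float)
        e_i.flat[i] = 1.0
        # Two model queries per estimated coordinate: f(x + h e_i) and f(x - h e_i).
        grad.flat[i] = (f(x + h * e_i) - f(x - h * e_i)) / (2.0 * h)
    return grad
```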
The authors used a different approach by modifying the constraint to restrict the quantity of
pixels that can be modified. The equation is slightly changed to the Expression (15)
where g_k is the margin constraints impacted by x_c and defined by the Expression (17). Here, α represents the dual variables of the SVM, which correspond to each training data point. Q_ss denotes the margin support vector submatrix of Q.
The authors use the gradient ascent technique to iteratively optimize the non-convex objective function L(x_c). This optimization procedure presupposes the initial selection of an attack point location x_c^(0) and in each iteration updates the attack point using the formula x_c^(p) = x_c^(p−1) − t·u, where p is the ongoing iteration, u is a norm-1 vector indicating the attack direction, and t denotes the magnitude of the step.
Although this method is a first-order optimization algorithm that only requires the calculation of the gradient of the objective function, it is sensitive to the initial parameter settings. If the initial values are too far from the optimal values, the algorithm will most probably converge to a local maximum rather than the global maximum, or will converge slowly to an optimal solution, especially when the objective function is highly non-convex.
where p_g(x) is the generator's distribution over data x, and p_z(z) is a prior on input noise variables. D(x) corresponds to the probability that x comes from the original dataset rather than from the generated distribution p_g. G(z; θ_g) is a differentiable representation embodied by a multilayer perceptron parameterized by θ_g. The objective is to train D to maximize the probability of correctly labeling the training samples, while simultaneously training G to minimize it.
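These terms belong to the standard GAN minimax objective, commonly written as:

min_G max_D V(D, G) = E_{x∼p_data(x)}[ log D(x) ] + E_{z∼p_z(z)}[ log(1 − D(G(z))) ]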
Since its introduction in 2014 by Goodfellow et al. [64], GAN has spawned numer-
ous variants and extensions. These variants address various challenges and limitations
associated with the original GAN formulation. For instance, Radford et al. [65] proposed
Deep Convolutional GANs (DCGANs) to produce high-quality images compared to fully
connected networks, and Mirza et al. [66] introduced a Conditional GAN (C-GAN) frame-
work that can produce images conditioned on class labels. Arjovsky et al. [67] proposed
Wasserstein GAN (WGAN) with a new loss function leveraging on the Wasserstein distance
to better estimate the difference between the real and synthetic sample distributions. Since
2014, more than 500 papers presenting different variants of GANs have been published in
the literature and can be all found in [68].
Although GAN methods excel at generating realistic samples different from those used in training, which can help to evaluate ML systems against adversarial attacks as well as to augment data in scenarios where the available training dataset is limited, training GANs is typically characterized by high computational demands and can exhibit considerable instability.
Therefore, by obfuscating or hiding gradients, it becomes harder for attackers to craft effective adversarial samples.
Folz et al. [79] proposed a gradient-masking method based on a defense mechanism,
called the Structure-to-Signal Network (S2SNet). It comprises an encoder and a decoder
framework where the encoder retains crucial structural details and refines the decoder
using the target model’s gradient, rendering it resistant to gradient-based adversarial
examples. Lyu et al. [80] proposed a technique based on gradient penalty into the loss
function of the network to defend against L-BFGS and FGSM. The study conducted by
Nayebi et al. [81] demonstrated how gradient masking can be achieved by saturating the
sigmoid network, leading to a reduced gradient impact and rendering gradient-based
attacks less effective. The authors compelled the neural networks to operate within a
nonlinear saturating system. Nguyen et al. [82] proposed a new gradient masking approach to protect against C&W attacks. Their method involves adding noise to the logit layer of the network. Jiang et al. [83] introduced a defense method that modifies the model's
gradients by altering the oscillation pattern, effectively obscuring the original training
gradients and confusing attackers by using gradients from “fake” neurons to generate
invalid adversarial samples.
theoretically formulate and provide proof through the perspective of robust optimization
for DL. Researchers have displayed a notable level of interest in this area of study. This led
to multiple contributions proposing several variants of the adversarial training method that try to overcome its limitations, such as poor data generalization and overfitting, reduced effectiveness against black-box attacks, and the substantial cost incurred by the iterative nature of training the model with adversarial examples.
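To make the basic idea concrete, the following is a minimal PyTorch sketch of one adversarial-training step using on-the-fly FGSM examples; the epsilon value, the clean/adversarial mix, and the [0, 1] input range are illustrative assumptions rather than settings from any of the cited works.

```python
import torch

def adversarial_training_step(model, loss_fn, optimizer, x, y, eps=0.03):
    """One mini-batch of adversarial training: craft FGSM examples with the current
    parameters, then update the model on a mix of clean and adversarial samples."""
    model.train()
    x_pert = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model(x_pert), y), x_pert)[0]
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    optimizer.zero_grad()
    loss = loss_fn(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    loss.backward()
    optimizer.step()
    return loss.item()
```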
For large models and data sets, Kurakin et al. [50] made suggestions for adversarial
training. Building on the idea that brute force training regularizes the network and reduces
overfitting, Miyato et al. [92] proposed the ‘Virtual Adversarial Training’ approach to
smooth the outcome distributions of the neural networks. Zheng et al. [93] proposed the
‘stability training’ method to improve the resilience of neural networks against small distor-
tions. In their work, Tramèr et al. [94] put forth the Ensemble Adversarial Training (EAT) to
augment the diversity of adversarial samples. Song et al. [95] proposed a method known as
Multi-strength Adversarial Training (MAT), which integrates adversarial training samples
and diverse levels of adversarial strength. Kannan et al. [96] proposed the Mixed-minibatch
PGD (M-PGD) adversarial training approach, which combines clean and adversarial exam-
ples. Their approach includes a logit pairing strategy with two methods: pairing clean with
adversarial samples and pairing clean with clean samples. In the training process, Wang
et al. [97] propose to take into consideration the distinctive impact of misclassified clean
examples using the so-called Misclassification Aware adveRsarial Training (MART) method.
In the objective to solve the generalization issue, Farnia et al. [98] suggested a spectral
normalization-based regularization for adversarial training. Wang et al. [99] proposed a
bilateral adversarial training method, which involves perturbing the input images and their
labels during the training process. In their work, Shafahi et al. [100] proposed the Universal
Adversarial Training (UAT) method that produces robust models with only about twice the cost
of natural training. Vivek and Babu [101] also introduced a dropout scheduling approach
to enhance the effectiveness of adversarial training by using a single-step method. For the
overall generalization of adversarially trained models, Song et al. [102] suggested Robust
Local Features for Adversarial Training (RLFAT) that involves randomly reshuffling a block
of the input during training. Pang et al. [103] propose the integration of a hypersphere
method. This method ensures that features are regularized onto a compact manifold.
The authors introduced a defensive framework called MagNet, which solely interprets the results
of the final layer of the target classifier as a black-box to detect adversarial samples. For that
reason, the MagNet framework is composed of two modules: a Detector and a Reformer.
The detector assesses the disparity or distance between a provided test sample and the
manifold. If this distance surpasses a predefined limit, the detector rejects the sample. Some
adversarial examples might be very close to the manifold of normal examples and are not
detected by the Detector. Then, the role of the Reformer is to receive samples classified
as normal by the Detector and eliminate minor perturbations that the Detector may have
missed. The output from the Reformer is subsequently fed into the target classifier, which
will conduct classification within this subset of normal samples.
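A highly simplified sketch of this detector/reformer pipeline is given below; the pre-trained autoencoder, the classifier, and the reconstruction-error threshold are assumptions of the example, not components specified beyond the description above.

```python
import torch

def magnet_filter(autoencoder, classifier, x, threshold):
    """MagNet-style filtering sketch: reject inputs far from the learned manifold
    (detector) and classify the reconstructions of the accepted ones (reformer)."""
    with torch.no_grad():
        recon = autoencoder(x)
        # Detector: per-sample reconstruction error as a proxy for distance to the manifold.
        error = ((x - recon) ** 2).flatten(1).mean(dim=1)
        accepted = error <= threshold
        # Reformer: small residual perturbations are smoothed out by the reconstruction.
        preds = classifier(recon[accepted]).argmax(dim=1)
    return accepted, preds
```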
Another family of defense approaches uses nearest neighbors. Cohen et al. [119]
introduced an innovative method for detecting adversarial attacks by leveraging influence
functions along with k-nearest Neighbor (k-NN)-based metrics. The influence function
is used to evaluate the impact of slight weight adjustments on a particular training data
point within the model’s loss function, with respect to the loss of the corresponding test
data point. On the other hand, the k-NN method is applied to explore the sorting of these
supportive training examples in the deep neural network’s embedding space. Notably,
these examples exhibit a robust correlation with the closest neighbors among normal
inputs, whereas the correlation with adversarial inputs is considerably diminished. As a
result, this combined approach effectively identifies and detects adversarial examples. In
another work, Paudice et al. [120] introduced a data sanitization approach geared towards
removing poisoning samples within the training dataset. The technique addresses label-
flipping attacks by utilizing k-NN to detect poisoned samples that have a substantial
deviation from the decision boundary of SVM and reassign appropriate labels to data
points in the training dataset. Shahid et al. [121] developed an extension of the k-NN-based
defense mechanism presented by Paudice et al. [120] to evaluate its efficacy against Label-
flipping attacks in the context of a wearable Human Activity Recognition System. The
authors showed that this enhanced mechanism not only detects malicious training data
with altered labels but also accurately predicts their correct labels.
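In the spirit of the k-NN relabeling defense just described, the sketch below flags and relabels training points whose label disagrees with a strong neighbourhood majority; the value of k and the agreement threshold are hypothetical choices made for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_relabel(X_train, y_train, k=10, agreement=0.8):
    """Relabel suspected label-flipped points using the majority label of their k neighbours."""
    y_clean = y_train.copy()
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_train)
    _, neighbors = nn.kneighbors(X_train)          # first neighbour is the point itself
    for i, nbrs in enumerate(neighbors[:, 1:]):
        labels, counts = np.unique(y_train[nbrs], return_counts=True)
        majority = labels[np.argmax(counts)]
        if counts.max() / k >= agreement and majority != y_train[i]:
            y_clean[i] = majority                   # reassign the suspected flipped label
    return y_clean
```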
Abusnaina et al. [122] proposed a cutting-edge adversarial example detection method
pioneering a graph-based detection approach. The method creates a Latent Neighborhood
Graph (LNG) centered on an input example to determine whether the input is adversarial
or not. Hence, the problem of detecting adversarial attacks is reformulated as a graph classification problem. The process starts with the generation of an LNG for every individual
input instance, after which a GNN is employed to discern the distinction between benign
and adversarial examples, focusing on the relationships among the nodes within the Neigh-
borhood Graph. To guarantee optimal performance in detecting adversarial examples,
the authors optimize the parameters of both GNN and LNG node connections. Then, the
Graph Attention Network (GAT) is employed to determine whether LNG originates from
an adversarial or benign input instance. By employing GAT, the model focuses on the
relevant nodes and their connections within the LNG to make an informed decision about
the adversarial nature of the input example.
Table 2. Summary of research works related to adversarial attacks in ML-based security systems of IoT networks.

Ref. | Year | Domain | System | Target Model(s) | Dataset(s) | Attack Method(s) | Attack Type | Knowledge | Defense
[123] | 2019 | IoT | IDS | FNN, SNN | Bot-IoT | FGSM, PGD, BIM | Evasion | White-box | Feature Normalization
[127] | 2021 | IoT | IDS | SVM, ANNs | Bot-IoT | LFA, FGSM | Poisoning, Evasion | White-box | 5
[115] | 2022 | IoT | IDS | CNN-LSTM | Bot-IoT | C-GAN | Poisoning | White-box | Adversarial Training by C-GAN
[113] | 2022 | IIoT | IDS | DRL | DS2OS | GAN | Poisoning | White-box | Adversarial Training by GAN
[138] | 2022 | IoT | DIS | GAP FCN, CNNs | IoT-Trace | CAM, Grad-CAM++ | Poisoning | Black-box | 5
[140] | 2019 | IoT | MDS | CFG-CNN | CFG dataset | GEA | Evasion | White-box | 5
[112] | 2023 | IoT | MDS | GNNs | CMaldroid, Drebin | VGAE-MalGAN | Evasion | White-box | Adversarial Training by VGAE-MalGAN
Qiu et al. [128] studied adversarial attack against a novel state-of-the-art Kitsune IDS
within the scenario of black-box access in the IoT network. The authors designed a method
leveraging model extraction to create a shadow model with the same behaviors as the
target black-box model using a limited quantity of training data. Then, the saliency map
technique is used to identify the critical features and to reveal the influence of each attribute
of the packet on the detection outcomes. Consequently, the authors granularly modified the
critical features using iterative FGSM to generate adversarial samples. Using the Kitsune
(Mirai) [142] dataset in their experiments, the authors demonstrated that using their novel
technique to perturb less than 0.005% of bytes in the data packets secures an average attack success rate of 94.31%, which significantly diminishes the ability of the Kitsune IDS to
distinguish between legitimate and malicious packets.
Fu et al. [129] conducted an experiment to assess the efficiency of LSTM, CNN, and
Gated Recurrent Unit (GRU) models against adversarial attacks created by FGSM. The
evaluation was performed on the CSE-CIC-IDS2018 dataset [143], utilizing three distinct
training configurations: training with normal samples, training with adversarial samples,
and a hybrid approach involving pretraining with normal samples followed by training
with adversarial samples. The results revealed that adversarial training enhanced the
robustness of the models, with LSTM showing the most significant enhancement. How-
ever, it was observed that adversarial training also led to a reduction in the accuracy of
the models when dealing with normal examples. This phenomenon occurred because
adversarial training makes the models’ decision boundaries more adaptable to adversarial
examples, but at the same time, it results in a more fragile decision boundary for normal
samples. As a result, the ability of the models to correctly classify normal examples was
relatively undermined.
Pacheco et al. [130] assessed the efficiency of the popular adversarial attacks, JSMA,
FGSM, and C&W against various ML-based IDSes, such as SVM, Decision Tree (DT), and
Random Forest (RF), using multi-class contemporary datasets, BoT-IoT [125] and UNSW-
NB15 [27], that represent the contemporary IoT network environment. The study's agenda
is to reveal how those several attacks can effectively degrade the detection performance of
the three selected target models in comparison to the baseline model Multilayer Perceptron
(MLP), and how the performance results vary over the two datasets. The results of the
experiment validated the potency of the aforementioned adversarial attacks to decrease the
overall effectiveness of SVM, DT, and RF classifiers, respectively for both datasets. However,
the decrease in all metrics was less pronounced in the UNSW-NB15 dataset when compared
to the Bot-IoT dataset. The limited feature set of Bot-IoT renders it more vulnerable to
adversarial attacks. Regarding the attacks, C&W proved to be the most impactful when
used with the UNSW-NB15 dataset. In contrast, the FGSM technique displayed robust
effectiveness on the Bot-IoT dataset. However, the JSMA had a lesser impact on both
datasets. From the classifier’s model robustness perspective, the SVM classifier experienced
the most significant impact, resulting in an accuracy reduction of roughly 50% in both
datasets. Conversely, the RF classifier demonstrated remarkable robustness compared to
other classifiers, with only a 21% decrease in accuracy.
Anthi et al. [131] proposed to evaluate the vulnerability of ML-based IDSes in an
IoT smart home network. Various pre-trained supervised ML models, namely J48 DT, RF,
SVM, and Naïve Bayes (NB) are proposed for DoS attack detection. Using a Smart Home
Testbed dataset [144], the authors suggested a Rule-based method to create indiscriminate
adversarial samples. For adversarial exploratory attack, the authors proposed to use
the Information Gain Filter [145], a feature importance ranking method, to select the
crucial features that best distinguish malicious from benign packets. Then, the adversary
proceeded to manually manipulate the values of these features, together and one at a time,
to force IDSes to wrongly classify the incoming packet. The experimental outcomes revealed
that the performance of all IDSes models was impacted by the presence of adversarial
packets, resulting in a maximum decrease of 47.2%. On the flip side, the use of adversarial
training defense by injecting 10% of generated adversarial samples into the original dataset
improved the models’ robustness against adversarial attacks by 25% in comparison to the
performance results in the absence of adversarial defense. The approach proposed in this
study is restricted to the generation of adversarial examples specifically for DoS attacks,
with an exclusive focus on supervised ML-based IDSes.
Husnoo et al. [132] suggested a pioneering image restoration defense mechanism to
answer the problem of high susceptibility and fragility of modern DNNs to the state-of-
the-art OnePixel adversarial attacks within IIoT IDSes. The authors argue that the existing
solutions either result in image quality degradation through the removal of adversarial
pixels or outright rejection of the adversarial sample. This can have a substantial impact on
the accuracy of DNNs and might result in a hazard for some critical IoT use cases, such
as healthcare and self-driving vehicles. The proposed defense mechanism leverages the Accelerated Proximal Gradient approach to detect the malicious pixel within an adversarial
image and subsequently restore the original image. In their demonstration experiments,
the researchers chose two DNNs-based IDS, LeNet [146] and ResNet [147], and they trained
them using the CIFAR-10 [148] and MNIST [149] datasets. The experimental outcomes
revealed a high efficacy of the suggested defensive approach against One-Pixel attacks,
achieving detection and mitigation accuracy of 98.7% and 98.2%, respectively, on CIFAR-10
and MNIST datasets.
Benaddi et al. [115] suggested an adversarial training approach to enhance the effi-
ciency of hybrid CNNLSTM-based IDS by leveraging C-GAN. The authors introduce the
C-GAN in the training pipeline to handle classes with limited samples and address the
data imbalance of the BoT-IoT dataset [125]. First, the IDS model is trained on the BoT-IoT
dataset, and specific classes with low performance, often those with sparse samples, are
identified. Subsequently, C-GAN is trained using these identified classes, and the generator
from C-GAN is utilized to retrain the IDS model, thereby improving the performance
of the identified classes. The authors plan to further enhance their model by exploring
strategies to defend against adversarial attacks to improve the CNNLSTM-based IDS’s
robustness. In their other work, the authors conducted a similar approach to enhance the
robustness and effectiveness of IDS in the IIoT [113]. The study suggests the application
of DRL in conjunction with a GAN to boost the IDS’s efficiency. By using the Distributed
Smart Space Orchestration System (DS2OS) dataset [150], the authors' experiments showed that the proposed DRL-GAN model outperforms standard DRL in detecting anomalies in an imbalanced dataset within the IIoT. However, the proposed model demands substantial
computational resources during the training phase.
Jiang et al. [133] introduced an innovative framework called Feature Grouping and
Multi-model Fusion Detector (FGMD) for IDS against adversarial attacks in IoT networks.
The framework integrates different models, with each model processing unique subsets
of the input data or features to better resist the effects of adversarial attacks. The authors
used two existing IoT datasets, MedBIoT [151] and IoTID [152], to validate their model
in comparison with three baseline models DT, LSTM, and Recurrent Neural Network
(RNN) against adversarial examples which are generated based on a rule-based approach
that selects, alters and modifies the features of data samples. The experimental outcomes
validated the efficacy of FGMD in countering adversarial attacks, exhibiting a superior
detection rate when compared to the baseline models.
Zhou et al. [134] introduced a state-of-the-art adversarial attack generation approach
called the Hierarchical Adversarial Attack (HAA). This approach aims to implement
a sophisticated, level-aware black-box attack strategy against GNN-based IDS in IoT
networks while operating within a defined budget constraint. In their approach, the authors
used a saliency map method to create adversarial instances by detecting and altering
crucial feature complements with minimal disturbances. Then, a hierarchical node selection
strategy based on the Random Walk with Restart (RWR) algorithm is used to prioritize
the nodes with higher attack vulnerability. Using the UNSW-SOSR2019 dataset [153], the
authors assessed their HAA method on two standard GNN models, specifically the Graph
Convolutional Network (GCN) [154] and Jumping Knowledge Networks (JK-Net) [155],
and compared it with three baseline methodologies, Improved Random Walk with Restart
(iRWR) [156], Resistive Switching Memory (RSM) [157], and Greedily Corrected Random
Walk (GCRW) [158], when compromising the targeted GNN models. The experimental
results showed that the classification precision of both GNN models can be reduced by
more than 30% under HAA-based adversarial attacks. However, the authors did not examine
the effectiveness of their HAA method in the presence of an adversarial defense technique.
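For clarity, the RWR scores on which such a node selection strategy relies can be obtained with a simple power iteration, as in the sketch below; the restart probability and the column-normalized transition matrix are illustrative assumptions, and the code does not reproduce the level-aware selection of [134].

```python
# Hypothetical sketch: Random Walk with Restart (RWR) scores via power iteration.
import numpy as np

def rwr_scores(adj, seed, restart=0.15, tol=1e-8, max_iter=1000):
    """adj: (n, n) adjacency matrix; seed: index of the restart node."""
    n = adj.shape[0]
    # Column-normalize so each column of P sums to 1 (transition probabilities).
    col_sums = adj.sum(axis=0, keepdims=True)
    P = adj / np.where(col_sums == 0, 1, col_sums)
    e = np.zeros(n); e[seed] = 1.0          # restart distribution
    r = e.copy()
    for _ in range(max_iter):
        r_new = (1 - restart) * P @ r + restart * e
        if np.linalg.norm(r_new - r, 1) < tol:
            break
        r = r_new
    return r  # higher score = node more strongly connected to the seed

# Usage: rank nodes by RWR score and attack the highest-ranked ones first.
# scores = rwr_scores(adj_matrix, seed=0)
# priority = np.argsort(-scores)
```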
Fan et al. [135] pointed out the limitations of existing evaluation methods that use gradient-
based adversarial attacks to assess the Adversarial Training (AdvTrain) defense mecha-
nism [15,51,159]. The authors suggested an innovative adversarial attack method called the
Non-Gradient Attack (NGA) and introduced a novel assessment criterion named the Com-
posite Criterion (CC), which combines accuracy and attack success rate. The NGA method
employs a search strategy to generate adversarial examples outside the decision boundary;
these examples are then iteratively adjusted toward the original data points while maintaining
their misclassification properties. The researchers carried out their experiments on two
commonly utilized datasets, CIFAR-10 and CIFAR-100 [148], to systematically assess the
efficiency of the AdvTrain mechanism. In this evaluation, NGA with CC serves as the main
method to measure the effectiveness of AdvTrain in comparison with four gradient-based
benchmark methods: FGSM, BIM, PGD, and C&W. The study deduced that the robustness
of DNN-based IDSs in IoT networks might have been overestimated previously; by employing
NGA and CC, the reliability of DNN-based IDSs can be more accurately assessed under both
normal and AdvTrain defense scenarios. The authors acknowledged that the proposed NGA
method suffers from slow convergence and plan to optimize it in future work.
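The search strategy described above can be illustrated, under our own simplifying assumptions, as a bisection along the segment between the original sample and an already misclassified starting point, keeping the closest point that remains adversarial; this is not the exact NGA algorithm of [135].

```python
# Hypothetical sketch: gradient-free search for a minimally perturbed adversarial
# example by bisecting between the original sample and a misclassified starting point.
import numpy as np

def non_gradient_attack(predict, x_orig, x_adv_start, steps=30):
    """predict: function mapping a sample to a class label.
    x_adv_start: any point already classified differently from x_orig."""
    y_true = predict(x_orig)
    assert predict(x_adv_start) != y_true, "starting point must be misclassified"
    lo, hi = 0.0, 1.0   # interpolation weight toward x_orig
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        x_mid = (1 - mid) * x_adv_start + mid * x_orig
        if predict(x_mid) != y_true:
            lo = mid      # still adversarial: move closer to the original
        else:
            hi = mid      # crossed the boundary: back off
    return (1 - lo) * x_adv_start + lo * x_orig  # closest adversarial point found

# Usage: x_adv = non_gradient_attack(lambda x: clf.predict(x[None])[0], x0, x_start)
```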
In the context of Device Identification Systems (DISes), Hou et al. [136] suggested a
novel method called IoTGAN, designed to tamper with an IoT device’s network traffic to
evade ML-based IoT DIS. Inspired by GANs, IoTGAN employs a substitute neural network
model in black-box scenarios as its discriminative model. Meanwhile, the generative
model is trained to inject adversarial perturbations into the device’s traffic to deceive the
substitute model. The efficiency of the IoTGAN attack method is evaluated against five
target ML-based DIS models: RF, DT, SVM, k-NN, and Neural Networks (NNs) proposed
in [160]. The experiments were conducted using the UNSW IoT Trace dataset [161], which
was collected in an authentic real-world setting and encompasses data from 28 distinct
IoT devices. The experimental outcomes showed that IoTGAN successfully evaded
the five target DIS models with a success rate of over 90%. The authors proposed a
defense technique called Device Profiling to counter IoTGAN attacks. This
technique leverages unique hardware-based features of IoT devices’ wireless signals such
as frequency drifting, phase shifting, amplitude attenuation, and angle of arrival. When
tested, Device Profiling maintained a high identification rate (around 95%), even under
IoTGAN attacks, indicating its resilience against such adversarial strategies.
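The generator-versus-substitute idea behind IoTGAN can be sketched, under illustrative assumptions, as a small perturbation generator trained to push a frozen substitute classifier away from the true device label while keeping the traffic close to the original; the architecture and loss weighting below are ours, not those of [136].

```python
# Hypothetical sketch of the IoTGAN idea: a generator adds bounded perturbations to
# traffic features so that a (frozen) substitute identification model is fooled.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    def __init__(self, feat_dim=100, eps=0.1):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim), nn.Tanh())

    def forward(self, x):
        return x + self.eps * self.net(x)   # bounded additive perturbation

def train_step(gen, substitute, x, true_device, optimizer):
    """One generator update: maximize the substitute's error, limit distortion.
    The substitute model is assumed frozen; the optimizer only holds gen's parameters."""
    optimizer.zero_grad()
    x_adv = gen(x)
    evasion_loss = -F.cross_entropy(substitute(x_adv), true_device)  # maximize error
    distortion = F.mse_loss(x_adv, x)          # keep traffic close to the original
    (evasion_loss + distortion).backward()
    optimizer.step()

# Usage (illustrative):
# for p in substitute.parameters(): p.requires_grad_(False)
# opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
```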
Likewise, Bao et al. [137] assessed the susceptibility of ML-based DISs to adversarial attacks
in IoT networks. The study evaluates the impact of state-of-the-art adversarial attacks on the
identification of specific wireless IoT devices based on received signals. To this end, the
authors launched a single-step attack technique, FGSM, along with three iterative attack
techniques, namely BIM, PGD, and MIM (Momentum Iterative Method), in targeted and
non-targeted scenarios against a CNN-based DIS built on a Complex-Valued Neural Network
(CVNN) model [162]. In their experiments, the authors generated a dataset containing four
main features: Signal Source, Power Amplifier, Channel Attenuation, and Receiver Device.
This dataset serves as the foundation for training the CVNN model, which is then applied for
device identification. Leveraging a combined set of evaluation criteria to better assess the
model's performance, the study finds that iterative attack methods typically outperform
one-step attacks in fooling ML-based DIS models. However, as perturbation levels increase, their
success rate plateaus. The outcomes also revealed the ML models' susceptibility to
targeted attacks.
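For reference, FGSM and its iterative extension BIM follow the standard formulations sketched below in generic PyTorch; the code makes no assumption about the CVNN architecture of [162] and is not the authors' implementation.

```python
# Standard FGSM and BIM attacks (generic PyTorch sketch, model-agnostic).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step attack: x + eps * sign(grad of loss w.r.t. x)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def bim(model, x, y, eps, alpha, steps):
    """Iterative FGSM with per-step size alpha, kept inside the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project to eps-ball
    return x_adv
```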
Kotak et al. [138] suggested a novel method to produce real-time adversarial examples
using heatmaps from Class Activation Mapping (CAM) and Grad-CAM++. They explored
the vulnerabilities of ML-based IoT DISes using payload-based IoT identification models
such as a Fully Connected Neural Network (FCN), CNNs, and a Global Average Pooling
(GAP) model. Using a portion of the publicly accessible IoT Trace dataset [161], these models
process the first 784 bytes of the TCP payload, converted into a 28 × 28 greyscale image.
Experiments involved manipulating unauthorized IoT device data and altering a specific
number of bytes to observe how the resulting adversarial examples perform when exposed
to the target models. Surprisingly, the adversarial examples were transferable across different
model architectures. The GAP model displayed unique behavior against these samples,
hinting at its defensive potential. Despite the vulnerabilities of the target models, advanced
architectures such as the Vision Transformer [163] might resist these adversarial attacks better.
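The payload-to-image preprocessing described above can be sketched as follows; zero-padding payloads shorter than 784 bytes is our own assumption.

```python
# Sketch of the payload-to-image preprocessing: keep the first 784 TCP payload
# bytes, zero-pad if shorter (assumed), and reshape into a 28x28 greyscale image.
import numpy as np

def payload_to_image(payload: bytes, size: int = 784) -> np.ndarray:
    buf = np.frombuffer(payload[:size], dtype=np.uint8)
    if buf.size < size:
        buf = np.pad(buf, (0, size - buf.size))      # assumed zero-padding
    return (buf.astype(np.float32) / 255.0).reshape(28, 28)

# Usage: img = payload_to_image(tcp_payload_bytes); feed img[None, None] to a CNN.
```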
The researchers in [139] investigated the performance of ML-based IoT DISs based on
hardware behavior identification. The authors proposed a combined LSTM and CNN
(LSTM-1DCNN) model for IoT DIS and evaluated its robustness against adversarial attacks
in which adversaries alter environmental and contextual device conditions, such as
temperature, CPU load, and device rebooting, to hinder proper identification. To assess the
effectiveness of LSTM-1DCNN, the model was trained and tested on the LwHBench
dataset [164] and exposed to various adversarial attacks, including FGSM, BIM, MIM, PGD,
JSMA, the Boundary Attack, and C&W. The LSTM-1DCNN model showcased superior
performance, achieving an average F1-Score of 0.96 and identifying all devices when a True
Positive Rate (TPR) of 0.80 was used as the identification threshold. When exposed to various
evasion attacks, the model remained resilient to temperature-based attacks; however, certain
evasion techniques such as FGSM, BIM, and MIM succeeded in fooling the identification
process. In response, the researchers employed adversarial training and model distillation as
defense mechanisms, which enhanced the model's robustness; their combination provides
strong protection against various evasion attacks.
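As an illustration of the adversarial training defense applied here, the minimal sketch below mixes clean and FGSM-perturbed samples in each training batch; the perturbation budget and the 50/50 loss weighting are illustrative assumptions, not the configuration used in [139].

```python
# Hypothetical sketch: adversarial training by mixing clean and FGSM-perturbed
# batches (generic PyTorch; not the exact setup evaluated in [139]).
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, eps=0.05):
    model.train()
    for x, y in loader:
        # Craft FGSM examples on the fly with the current model parameters.
        x_pert = x.clone().detach().requires_grad_(True)
        attack_loss = F.cross_entropy(model(x_pert), y)
        grad = torch.autograd.grad(attack_loss, x_pert)[0]
        x_adv = (x_pert + eps * grad.sign()).detach()

        # Optimize on a 50/50 mix of clean and adversarial losses (assumed weighting).
        optimizer.zero_grad()
        loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```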
7. Challenges
7.1. Dataset
The scarcity of publicly accessible IoT datasets is evident. Most recent studies have
relied on the Bot-IoT [125], Kitsune [142], and CIFAR-10 [148] datasets. Thus, it is essential
to create an up-to-date dataset that captures the varied nature of recent IoT applications and
considers the newest emerging threats. This would enable a more accurate assessment of
IoT ML-based security systems against adversarial attacks in scenarios closely resembling
real-world use cases.
Another dataset-related challenge is class imbalance. Training an IoT ML-based security
model involves feeding a specific ML algorithm with a training dataset for learning purposes.
Consequently, there is a risk when using datasets such as BoT-IoT [125], UNSW-NB15 [27],
and NSL-KDD [26], which are unbalanced with a larger representation of benign data. Such
datasets can cause the model to be biased towards the dominant classes, leading to the
“accuracy paradox” problem. An effective performance evaluation of IoT ML-based security
against adversarial attacks must therefore start with a well-balanced dataset. However,
finding a balanced dataset is not always possible. To counteract this, various data balancing
methods can be employed, as illustrated in the sketch after this list:
• Under-sampling: Here, entries from the over-represented class are eliminated to
equalize the distribution between the minority and majority classes. However,
if the original dataset is limited, this approach can result in overfitting.
• Over-sampling: In this technique, we replicate entries from the lesser-represented
class until its count matches the dominant class. A limitation is that since the minority
class has few unique data points, the model might end up memorizing these patterns,
leading to overfitting.
• Synthetic Data Generation: This method uses Generative Adversarial Networks
(GANs) to mimic the real data’s distribution and create authentic-seeming samples.
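The following minimal NumPy sketch illustrates random under- and over-sampling for a binary-imbalanced dataset; a synthetic data generation approach (e.g., GAN- or SMOTE-based) would replace the replication step with generated samples.

```python
# Minimal sketch of random under-/over-sampling for a binary-imbalanced dataset.
import numpy as np

rng = np.random.default_rng(0)

def undersample(X, y, majority_label):
    """Drop random majority-class rows until both classes have equal counts."""
    maj = np.flatnonzero(y == majority_label)
    mino = np.flatnonzero(y != majority_label)
    keep = rng.choice(maj, size=mino.size, replace=False)
    idx = np.concatenate([keep, mino])
    return X[idx], y[idx]

def oversample(X, y, minority_label):
    """Replicate random minority-class rows until both classes have equal counts."""
    mino = np.flatnonzero(y == minority_label)
    maj = np.flatnonzero(y != minority_label)
    extra = rng.choice(mino, size=maj.size - mino.size, replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]
```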
The last dataset-related challenge, from our point of view, is feature constraints. Most studies
overlooked the inherent constraints of IoT networks. In contrast to unconstrained domains
such as computer vision, where the main features subject to adversarial perturbation are the
image's pixels, IoT network traffic features involve a combination of different data types and
value ranges. These features can be binary, categorical, or continuous. Moreover, the values
of these features are closely correlated, with some being constant and others being unalterable.
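In practice, respecting these constraints means projecting any perturbed traffic record back into the valid feature space. The sketch below assumes illustrative boolean masks for binary and immutable features and per-feature bounds for continuous ones.

```python
# Hypothetical sketch: project a perturbed IoT traffic record back into the
# valid feature space (binary, immutable, and bounded continuous features).
import numpy as np

def project_to_constraints(x_adv, x_orig, binary_mask, immutable_mask, lo, hi):
    x = np.clip(x_adv, lo, hi)                    # respect per-feature value ranges
    x[binary_mask] = np.round(x[binary_mask])     # binary features stay in {0, 1}
    x[immutable_mask] = x_orig[immutable_mask]    # unalterable features are restored
    return x

# Usage (illustrative): binary_mask and immutable_mask are boolean arrays;
# lo/hi are per-feature bounds derived from the training data.
```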
Given the challenges presented by these data considerations, it is essential to engage
in a comprehensive discussion and comparison of datasets when evaluating IoT ML-based
security systems, adversarial attacks, or adversarial defense methods. Recent studies in the
literature have focused on dataset benchmarking [165–168], aiming to elucidate the construction
procedures and characteristics of various benchmarking datasets. These studies offer
valuable insights for researchers, aiding them in quickly identifying datasets that align
with their specific requirements and in maintaining the conditions necessary for simulating
realistic IoT traffic flows.
defense methods. In this scheme, the first dimension delves into defense mechanisms,
consisting of proactive and reactive approaches, while the second dimension covers defense
strategies, which include network optimization, data optimization, and network addition
strategies. Finally, we reviewed the recent literature on adversarial attacks within three
prominent IoT security systems: IDSs, MDSs, and DISs.
In future work, we aim to use the most recent and realistic IoT datasets in which
classes are sufficiently balanced for unbiased learning. We also aim to develop a
technique that takes into consideration the nuanced connections between classes to reflect
the inherent constraints of IoT networks. We then plan to propose an adversarial example
generation method that maintains these conditions while minimizing the number of perturbed
features to ensure the creation of realistic traffic flows. Regarding IoT security systems, we
noticed that most of the studies (65%) are dedicated to IDSs. Therefore, we will give more
attention to MDSs and DISs in our future work.
Author Contributions: Conceptualization, H.K., M.R., F.S. and N.K.; methodology, H.K.; validation,
H.K., M.R., F.S. and N.K.; formal analysis, H.K.; investigation, H.K.; resources, H.K.; data curation,
H.K.; writing—original draft preparation, H.K.; writing—review and editing, H.K., M.R., F.S. and
N.K.; supervision, M.R., F.S. and N.K.; project administration, M.R., F.S. and N.K. All authors have
read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Global IoT and Non-IoT Connections 2010–2025. Available online: https://www.statista.com/statistics/1101442/iot-number-of-
connected-devices-worldwide/ (accessed on 10 December 2023).
2. Khanna, A.; Kaur, S. Internet of Things (IoT), Applications and Challenges: A Comprehensive Review. Wirel. Pers Commun 2020,
114, 1687–1762. [CrossRef]
3. Riahi Sfar, A.; Natalizio, E.; Challal, Y.; Chtourou, Z. A Roadmap for Security Challenges in the Internet of Things. Digit. Commun.
Netw. 2018, 4, 118–137. [CrossRef]
4. Chaabouni, N.; Mosbah, M.; Zemmari, A.; Sauvignac, C.; Faruki, P. Network Intrusion Detection for IoT Security Based on
Learning Techniques. IEEE Commun. Surv. Tutor. 2019, 21, 2671–2701. [CrossRef]
5. Namanya, A.P.; Cullen, A.; Awan, I.U.; Disso, J.P. The World of Malware: An Overview. In Proceedings of the 2018 IEEE 6th
International Conference on Future Internet of Things and Cloud (FiCloud), Barcelona, Spain, 6–8 August 2018; pp. 420–427.
6. Liu, Y.; Wang, J.; Li, J.; Niu, S.; Song, H. Machine Learning for the Detection and Identification of Internet of Things Devices:
A Survey. IEEE Internet Things J. 2022, 9, 298–320. [CrossRef]
7. Benazzouza, S.; Ridouani, M.; Salahdine, F.; Hayar, A. A Novel Prediction Model for Malicious Users Detection and Spectrum
Sensing Based on Stacking and Deep Learning. Sensors 2022, 22, 6477. [CrossRef] [PubMed]
8. Ridouani, M.; Benazzouza, S.; Salahdine, F.; Hayar, A. A Novel Secure Cooperative Cognitive Radio Network Based on Chebyshev
Map. Digit. Signal Process. 2022, 126, 103482. [CrossRef]
9. Benazzouza, S.; Ridouani, M.; Salahdine, F.; Hayar, A. Chaotic Compressive Spectrum Sensing Based on Chebyshev Map for
Cognitive Radio Networks. Symmetry 2021, 13, 429. [CrossRef]
10. Jordan, M.I.; Mitchell, T.M. Machine Learning: Trends, Perspectives, and Prospects. Science 2015, 349, 255–260. [CrossRef]
11. Talaei Khoei, T.; Kaabouch, N. Machine Learning: Models, Challenges, and Research Directions. Future Internet 2023, 15, 332.
[CrossRef]
12. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [CrossRef]
13. Talaei Khoei, T.; Ould Slimane, H.; Kaabouch, N. Deep Learning: Systematic Review, Models, Challenges, and Research Directions.
Neural Comput. Appl. 2023, 35, 23103–23124. [CrossRef]
14. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing Properties of Neural Networks.
arXiv 2013, arXiv:1312.6199. [CrossRef]
15. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2014, arXiv:1412.6572. [CrossRef]
16. Biggio, B.; Roli, F. Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning. Pattern Recognit. 2018, 84, 317–331.
[CrossRef]
17. Akhtar, N.; Mian, A. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. arXiv 2018, arXiv:1801.00553.
[CrossRef]
18. Akhtar, N.; Mian, A.; Kardan, N.; Shah, M. Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey. IEEE
Access 2021, 9, 155161–155196. [CrossRef]
19. Naitali, A.; Ridouani, M.; Salahdine, F.; Kaabouch, N. Deepfake Attacks: Generation, Detection, Datasets, Challenges, and
Research Directions. Computers 2023, 12, 216. [CrossRef]
20. Xu, H.; Ma, Y.; Liu, H.; Deb, D.; Liu, H.; Tang, J.; Jain, A.K. Adversarial Attacks and Defenses in Images, Graphs and Text:
A Review. arXiv 2019, arXiv:1909.08072. [CrossRef]
21. Zhang, W.E.; Sheng, Q.Z.; Alhazmi, A.; Li, C. Adversarial Attacks on Deep-Learning Models in Natural Language Processing:
A Survey. ACM Trans. Intell. Syst. Technol. 2020, 11, 1–41. [CrossRef]
22. Qin, Y.; Carlini, N.; Goodfellow, I.; Cottrell, G.; Raffel, C. Imperceptible, Robust, and Targeted Adversarial Examples for Automatic
Speech Recognition. arXiv 2019, arXiv:1903.10346. [CrossRef]
23. Jmila, H.; Khedher, M.I. Adversarial Machine Learning for Network Intrusion Detection: A Comparative Study. Comput. Netw.
2022, 214, 109073. [CrossRef]
24. Ibitoye, O.; Abou-Khamis, R.; el Shehaby, M.; Matrawy, A.; Shafiq, M.O. The Threat of Adversarial Attacks on Machine Learning
in Network Security—A Survey. arXiv 2019, arXiv:1911.02621. [CrossRef]
25. Carlini, N. A Complete List of All Adversarial Example Papers. Available online: https://nicholas.carlini.com/writing/2019/all-
adversarial-example-papers.html (accessed on 28 October 2023).
26. Tavallaee, M.; Bagheri, E.; Lu, W.; Ghorbani, A.A. A Detailed Analysis of the KDD CUP 99 Data Set. In Proceedings of the 2009
IEEE Symposium on Computational Intelligence for Security and Defense Applications, Ottawa, ON, Canada, 8–10 July 2009;
pp. 1–6.
27. Moustafa, N.; Slay, J. UNSW-NB15: A Comprehensive Data Set for Network Intrusion Detection Systems (UNSW-NB15 Network
Data Set). In Proceedings of the 2015 Military Communications and Information Systems Conference (MilCIS), Canberra,
Australia, 10–12 November 2015; pp. 1–6.
28. Alatwi, H.A.; Aldweesh, A. Adversarial Black-Box Attacks Against Network Intrusion Detection Systems: A Survey. In Proceedings
of the 2021 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 10 May 2021; pp. 0034–0040.
29. Joshi, C.; Aliaga, J.R.; Insua, D.R. Insider Threat Modeling: An Adversarial Risk Analysis Approach. IEEE Trans. Inform. Forensic
Secur. 2021, 16, 1131–1142. [CrossRef]
30. Aloraini, F.; Javed, A.; Rana, O.; Burnap, P. Adversarial Machine Learning in IoT from an Insider Point of View. J. Inf. Secur. Appl.
2022, 70, 103341. [CrossRef]
31. Elrawy, M.F.; Awad, A.I.; Hamed, H.F.A. Intrusion Detection Systems for IoT-Based Smart Environments: A Survey. J. Cloud
Comput. 2018, 7, 21. [CrossRef]
32. Bout, E.; Loscri, V.; Gallais, A. How Machine Learning Changes the Nature of Cyberattacks on IoT Networks: A Survey. IEEE
Commun. Surv. Tutor. 2022, 24, 248–279. [CrossRef]
33. Li, J.; Liu, Y.; Chen, T.; Xiao, Z.; Li, Z.; Wang, J. Adversarial Attacks and Defenses on Cyber–Physical Systems: A Survey. IEEE
Internet Things J. 2020, 7, 5103–5115. [CrossRef]
34. He, K.; Kim, D.D.; Asghar, M.R. Adversarial Machine Learning for Network Intrusion Detection Systems: A Comprehensive
Survey. IEEE Commun. Surv. Tutor. 2023, 25, 538–566. [CrossRef]
35. Aryal, K.; Gupta, M.; Abdelsalam, M. A Survey on Adversarial Attacks for Malware Analysis. arXiv 2021, arXiv:2111.08223.
[CrossRef]
36. Alotaibi, A.; Rassam, M.A. Adversarial Machine Learning Attacks against Intrusion Detection Systems: A Survey on Strategies
and Defense. Future Internet 2023, 15, 62. [CrossRef]
37. Perwej, Y.; Haq, K.; Parwej, F.; Hassa, M. The Internet of Things (IoT) and Its Application Domains. IJCA 2019, 182, 36–49.
[CrossRef]
38. Hassija, V.; Chamola, V.; Saxena, V.; Jain, D.; Goyal, P.; Sikdar, B. A Survey on IoT Security: Application Areas, Security Threats,
and Solution Architectures. IEEE Access 2019, 7, 82721–82743. [CrossRef]
39. Balaji, S.; Nathani, K.; Santhakumar, R. IoT Technology, Applications and Challenges: A Contemporary Survey. Wirel. Pers.
Commun. 2019, 108, 363–388. [CrossRef]
40. Tange, K.; De Donno, M.; Fafoutis, X.; Dragoni, N. A Systematic Survey of Industrial Internet of Things Security: Requirements
and Fog Computing Opportunities. IEEE Commun. Surv. Tutor. 2020, 22, 2489–2520. [CrossRef]
41. HaddadPajouh, H.; Dehghantanha, A.M.; Parizi, R.; Aledhari, M.; Karimipour, H. A Survey on Internet of Things Security:
Requirements, Challenges, and Solutions. Internet Things 2021, 14, 100129. [CrossRef]
42. Iqbal, W.; Abbas, H.; Daneshmand, M.; Rauf, B.; Bangash, Y.A. An In-Depth Analysis of IoT Security Requirements, Challenges,
and Their Countermeasures via Software-Defined Security. IEEE Internet Things J. 2020, 7, 10250–10276. [CrossRef]
43. Atlam, H.F.; Wills, G.B. IoT Security, Privacy, Safety and Ethics. In Digital Twin Technologies and Smart Cities; Farsi, M.,
Daneshkhah, A., Hosseinian-Far, A., Jahankhani, H., Eds.; Internet of Things; Springer International Publishing: Cham, Switzer-
land, 2020; pp. 123–149. ISBN 978-3-030-18731-6.
44. Chebudie, A.B.; Minerva, R.; Rotondi, D. Towards a Definition of the Internet of Things (IoT). IEEE Internet Initiat. 2014, 1, 1–86.
45. Krco, S.; Pokric, B.; Carrez, F. Designing IoT Architecture(s): A European Perspective. In Proceedings of the 2014 IEEE World
Forum on Internet of Things (WF-IoT), Seoul, Republic of Korea, 6–8 March 2014; pp. 79–84.
46. Gupta, B.B.; Quamara, M. An Overview of Internet of Things (IoT): Architectural Aspects, Challenges, and Protocols. Concurr.
Comput. 2020, 32, e4946. [CrossRef]
47. Milenkovic, M. Internet of Things: Concepts and System Design; Springer: Cham, Switzerland, 2020; ISBN 978-3-030-41345-3.
48. Sarker, I.H.; Khan, A.I.; Abushark, Y.B.; Alsolami, F. Internet of Things (IoT) Security Intelligence: A Comprehensive Overview,
Machine Learning Solutions and Research Directions. Mob. Netw. Appl. 2023, 28, 296–312. [CrossRef]
49. Wang, C.; Chen, J.; Yang, Y.; Ma, X.; Liu, J. Poisoning Attacks and Countermeasures in Intelligent Networks: Status Quo and
Prospects. Digit. Commun. Netw. 2022, 8, 225–234. [CrossRef]
50. Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial Examples in the Physical World. arXiv 2016, arXiv:1607.02533. [CrossRef]
51. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks.
arXiv 2017, arXiv:1706.06083. [CrossRef]
52. Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The Limitations of Deep Learning in Adversarial
Settings. arXiv 2015, arXiv:1511.07528. [CrossRef]
53. Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on
Security and Privacy (SP), San Jose, CA, USA, 22–24 May 2017; pp. 39–57.
54. Moosavi-Dezfooli, S.-M.; Fawzi, A.; Frossard, P. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. arXiv
2015, arXiv:1511.04599. [CrossRef]
55. Chen, P.-Y.; Zhang, H.; Sharma, Y.; Yi, J.; Hsieh, C.-J. ZOO: Zeroth Order Optimization Based Black-Box Attacks to Deep Neural
Networks without Training Substitute Models. arXiv 2017, arXiv:1708.03999. [CrossRef]
56. Su, J.; Vargas, D.V.; Sakurai, K. One Pixel Attack for Fooling Deep Neural Networks. IEEE Trans. Evol. Computat. 2019, 23, 828–841.
[CrossRef]
57. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces.
J. Glob. Optim. 1997, 11, 341–359. [CrossRef]
58. Biggio, B.; Nelson, B.; Laskov, P. Poisoning Attacks against Support Vector Machines. arXiv 2012, arXiv:1206.6389. [CrossRef]
59. Biggio, B.; Nelson, B.; Laskov, P. Support Vector Machines Under Adversarial Label Noise. In Proceedings of the Asian
Conference on Machine Learning, PMLR, Taoyuan, Taiwan, 17 November 2011; Volume 20, pp. 97–112.
60. Xiao, H.; Eckert, C. Adversarial Label Flips Attack on Support Vector Machines. Front. Artif. Intell. Appl. 2012, 242, 870–875.
[CrossRef]
61. Muñoz-González, L.; Biggio, B.; Demontis, A.; Paudice, A.; Wongrassamee, V.; Lupu, E.C.; Roli, F. Towards Poisoning of Deep
Learning Algorithms with Back-Gradient Optimization. arXiv 2017, arXiv:1708.08689. [CrossRef]
62. Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; Lempitsky, V. Domain-Adversarial
Training of Neural Networks. arXiv 2015, arXiv:1505.07818. [CrossRef]
63. Papernot, N.; McDaniel, P.; Wu, X.; Jha, S.; Swami, A. Distillation as a Defense to Adversarial Perturbations against Deep Neural
Networks. arXiv 2015, arXiv:1511.04508. [CrossRef]
64. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial
Networks. arXiv 2014, arXiv:1406.2661. [CrossRef]
65. Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial
Networks. arXiv 2015, arXiv:1511.06434. [CrossRef]
66. Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784. [CrossRef]
67. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875. [CrossRef]
68. Hindupur, A. The GAN Zoo. Available online: https://github.com/hindupuravinash/the-gan-zoo (accessed on 28 October 2023).
69. Orekondy, T.; Schiele, B.; Fritz, M. Knockoff Nets: Stealing Functionality of Black-Box Models. In Proceedings of the 2019
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019;
pp. 4949–4958.
70. Jagielski, M.; Carlini, N.; Berthelot, D.; Kurakin, A.; Papernot, N. High Accuracy and High Fidelity Extraction of Neural Networks.
arXiv 2019, arXiv:1909.01838. [CrossRef]
71. Chen, J.; Jordan, M.I.; Wainwright, M.J. HopSkipJumpAttack: A Query-Efficient Decision-Based Attack. In Proceedings of the
2020 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 18–20 May 2020; pp. 1277–1294.
72. Yuan, X.; He, P.; Zhu, Q.; Li, X. Adversarial Examples: Attacks and Defenses for Deep Learning. IEEE Trans. Neural Netw. Learn.
Syst. 2019, 30, 2805–2824. [CrossRef]
73. Barreno, M.; Nelson, B.; Sears, R.; Joseph, A.D.; Tygar, J.D. Can Machine Learning Be Secure? In Proceedings of the 2006 ACM
Symposium on Information, Computer and Communications Security, Taipei, Taiwan, 21 March 2006; pp. 16–25.
74. Rosenberg, I.; Shabtai, A.; Elovici, Y.; Rokach, L. Adversarial Machine Learning Attacks and Defense Methods in the Cyber
Security Domain. ACM Comput. Surv. 2022, 54, 1–36. [CrossRef]
75. Papernot, N.; McDaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z.B.; Swami, A. Practical Black-Box Attacks against Machine Learning.
In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, Abu Dhabi, United Arab
Emirates, 2 April 2017; pp. 506–519.
76. Ross, A.; Doshi-Velez, F. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing
Their Input Gradients. AAAI 2018, 32, 1–10. [CrossRef]
77. Hinton, G.; Vinyals, O.; Dean, J. Distilling the Knowledge in a Neural Network. arXiv 2015, arXiv:1503.02531. [CrossRef]
78. Duddu, V. A Survey of Adversarial Machine Learning in Cyber Warfare. Def. Sc. Jl. 2018, 68, 356. [CrossRef]
79. Folz, J.; Palacio, S.; Hees, J.; Dengel, A. Adversarial Defense Based on Structure-to-Signal Autoencoders. In Proceedings of
the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, CO, USA, 1–5 March 2020;
pp. 3568–3577.
80. Lyu, C.; Huang, K.; Liang, H.-N. A Unified Gradient Regularization Family for Adversarial Examples. In Proceedings of the 2015
IEEE International Conference on Data Mining, Atlantic City, NJ, USA, 14–17 November 2015; pp. 301–309.
81. Nayebi, A.; Ganguli, S. Biologically Inspired Protection of Deep Networks from Adversarial Attacks. arXiv 2017, arXiv:1703.09202.
[CrossRef]
82. Nguyen, L.; Wang, S.; Sinha, A. A Learning and Masking Approach to Secure Learning. arXiv 2017, arXiv:1709.04447. [CrossRef]
83. Jiang, C.; Zhang, Y. Adversarial Defense via Neural Oscillation Inspired Gradient Masking. arXiv 2022, arXiv:2211.02223.
[CrossRef]
84. Drucker, H.; Le Cun, Y. Improving Generalization Performance Using Double Backpropagation. IEEE Trans. Neural Netw. 1992, 3,
991–997. [CrossRef] [PubMed]
85. Zhao, Q.; Griffin, L.D. Suppressing the Unusual: Towards Robust CNNs Using Symmetric Activation Functions. arXiv 2016,
arXiv:1603.05145. [CrossRef]
86. Dabouei, A.; Soleymani, S.; Taherkhani, F.; Dawson, J.; Nasrabadi, N.M. Exploiting Joint Robustness to Adversarial Perturbations.
In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19
June 2020; pp. 1119–1128.
87. Addepalli, S.; Vivek, B.S.; Baburaj, A.; Sriramanan, G.; Venkatesh Babu, R. Towards Achieving Adversarial Robustness by
Enforcing Feature Consistency Across Bit Planes. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1017–1026.
88. Ma, A.; Faghri, F.; Papernot, N.; Farahmand, A. SOAR: Second-Order Adversarial Regularization. arXiv 2021, arXiv:2004.01832.
89. Yeats, E.C.; Chen, Y.; Li, H. Improving Gradient Regularization Using Complex-Valued Neural Networks. In Proceedings of the
38th International Conference on Machine Learning, PMLR, Online, 18 July 2021; Volume 139, pp. 11953–11963.
90. Xu, W.; Evans, D.; Qi, Y. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. In Proceedings of the
2018 Network and Distributed System Security Symposium, San Diego, CA, USA, 18–21 February 2018.
91. Gu, S.; Rigazio, L. Towards Deep Neural Network Architectures Robust to Adversarial Examples. arXiv 2014, arXiv:1412.5068.
[CrossRef]
92. Miyato, T.; Dai, A.M.; Goodfellow, I. Adversarial Training Methods for Semi-Supervised Text Classification. arXiv 2016,
arXiv:1605.07725. [CrossRef]
93. Zheng, S.; Song, Y.; Leung, T.; Goodfellow, I. Improving the Robustness of Deep Neural Networks via Stability Training.
In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30
June 2016; pp. 4480–4488.
94. Tramèr, F.; Kurakin, A.; Papernot, N.; Goodfellow, I.; Boneh, D.; McDaniel, P. Ensemble Adversarial Training: Attacks and
Defenses. arXiv 2017, arXiv:1705.07204. [CrossRef]
95. Song, C.; Cheng, H.-P.; Yang, H.; Li, S.; Wu, C.; Wu, Q.; Chen, Y.; Li, H. MAT: A Multi-Strength Adversarial Training Method
to Mitigate Adversarial Attacks. In Proceedings of the 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI),
Hong Kong, 8–11 July 2018; pp. 476–481.
96. Kannan, H.; Kurakin, A.; Goodfellow, I. Adversarial Logit Pairing. arXiv 2018, arXiv:1803.06373. [CrossRef]
97. Wang, Y.; Zou, D.; Yi, J.; Bailey, J.; Ma, X.; Gu, Q. Improving Adversarial Robustness Requires Revisiting Misclassified Examples. In
Proceedings of the 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, 26–30 April 2020.
98. Farnia, F.; Zhang, J.M.; Tse, D. Generalizable Adversarial Training via Spectral Normalization. arXiv 2018, arXiv:1811.07457.
[CrossRef]
99. Wang, J.; Zhang, H. Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks.
In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea,
27 October–2 November 2019; pp. 6628–6637.
100. Shafahi, A.; Najibi, M.; Xu, Z.; Dickerson, J.; Davis, L.S.; Goldstein, T. Universal Adversarial Training. arXiv 2018, arXiv:1811.11304.
[CrossRef]
101. Vivek, B.S.; Venkatesh Babu, R. Single-Step Adversarial Training With Dropout Scheduling. In Proceedings of the 2020 IEEE/CVF
Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 947–956.
102. Song, C.; He, K.; Lin, J.; Wang, L.; Hopcroft, J.E. Robust Local Features for Improving the Generalization of Adversarial Training.
arXiv 2019, arXiv:1909.10147. [CrossRef]
103. Pang, T.; Yang, X.; Dong, Y.; Xu, K.; Zhu, J.; Su, H. Boosting Adversarial Training with Hypersphere Embedding. arXiv 2020,
arXiv:2002.08619. [CrossRef]
104. Xu, W.; Evans, D.; Qi, Y. Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples. arXiv 2017,
arXiv:1705.10686. [CrossRef]
105. Jiang, W.; He, Z.; Zhan, J.; Pan, W. Attack-Aware Detection and Defense to Resist Adversarial Examples. IEEE Trans. Comput.-Aided
Des. Integr. Circuits Syst. 2021, 40, 2194–2198. [CrossRef]
106. Asam, M.; Khan, S.H.; Akbar, A.; Bibi, S.; Jamal, T.; Khan, A.; Ghafoor, U.; Bhutta, M.R. IoT Malware Detection Architecture Using
a Novel Channel Boosted and Squeezed CNN. Sci. Rep. 2022, 12, 15498. [CrossRef]
107. Jia, X.; Wei, X.; Cao, X.; Foroosh, H. ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples.
In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA,
15–20 June 2019; pp. 6077–6085.
108. Song, Y.; Kim, T.; Nowozin, S.; Ermon, S.; Kushman, N. PixelDefend: Leveraging Generative Models to Understand and Defend
against Adversarial Examples. arXiv 2017, arXiv:1710.10766. [CrossRef]
109. Ramachandran, P.; Paine, T.L.; Khorrami, P.; Babaeizadeh, M.; Chang, S.; Zhang, Y.; Hasegawa-Johnson, M.A.; Campbell, R.H.;
Huang, T.S. Fast Generation for Convolutional Autoregressive Models. arXiv 2017, arXiv:1704.06001. [CrossRef]
110. Gao, S.; Yao, S.; Li, R. Transferable Adversarial Defense by Fusing Reconstruction Learning and Denoising Learning. In Proceedings
of the IEEE INFOCOM 2021—IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Vancouver, BC,
Canada, 10 May 2021; pp. 1–6.
111. Lee, H.; Han, S.; Lee, J. Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN. arXiv 2017,
arXiv:1705.03387. [CrossRef]
112. Yumlembam, R.; Issac, B.; Jacob, S.M.; Yang, L. IoT-Based Android Malware Detection Using Graph Neural Network with
Adversarial Defense. IEEE Internet Things J. 2023, 10, 8432–8444. [CrossRef]
113. Benaddi, H.; Jouhari, M.; Ibrahimi, K.; Ben Othman, J.; Amhoud, E.M. Anomaly Detection in Industrial IoT Using Distributional
Reinforcement Learning and Generative Adversarial Networks. Sensors 2022, 22, 8085. [CrossRef] [PubMed]
114. Li, G.; Ota, K.; Dong, M.; Wu, J.; Li, J. DeSVig: Decentralized Swift Vigilance Against Adversarial Attacks in Industrial Artificial
Intelligence Systems. IEEE Trans. Ind. Inf. 2020, 16, 3267–3277. [CrossRef]
115. Benaddi, H.; Jouhari, M.; Ibrahimi, K.; Benslimane, A.; Amhoud, E.M. Adversarial Attacks Against IoT Networks Using
Conditional GAN Based Learning. In Proceedings of the GLOBECOM 2022—2022 IEEE Global Communications Conference,
Rio de Janeiro, Brazil, 4 December 2022; pp. 2788–2793.
116. Odena, A.; Olah, C.; Shlens, J. Conditional Image Synthesis with Auxiliary Classifier GANs. In Proceedings of the 34th
International Conference on Machine Learning, PMLR, Sydney, Australia, 6 August 2017; Volume 70, pp. 2642–2651.
117. Liu, X.; Hsieh, C.-J. Rob-GAN: Generator, Discriminator, and Adversarial Attacker. In Proceedings of the 2019 IEEE/CVF
Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–19 June 2019; pp. 11226–11235.
118. Meng, D.; Chen, H. MagNet: A Two-Pronged Defense against Adversarial Examples. In Proceedings of the 2017 ACM SIGSAC
Conference on Computer and Communications Security, Dallas, TX, USA, 30 October 2017; pp. 135–147.
119. Cohen, G.; Sapiro, G.; Giryes, R. Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors. In Proceedings
of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020;
pp. 14441–14450.
120. Paudice, A.; Muñoz-González, L.; Lupu, E.C. Label Sanitization Against Label Flipping Poisoning Attacks. In ECML PKDD
2018 Workshops; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2019; Volume 11329,
pp. 5–15, ISBN 978-3-030-13452-5.
121. Shahid, A.R.; Imteaj, A.; Wu, P.Y.; Igoche, D.A.; Alam, T. Label Flipping Data Poisoning Attack Against Wearable Human
Activity Recognition System. In Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence (SSCI), Singapore,
4 December 2022; pp. 908–914.
122. Abusnaina, A.; Wu, Y.; Arora, S.; Wang, Y.; Wang, F.; Yang, H.; Mohaisen, D. Adversarial Example Detection Using Latent
Neighborhood Graph. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal,
QC, Canada, 10–17 October 2021; pp. 7667–7676.
123. Ibitoye, O.; Shafiq, O.; Matrawy, A. Analyzing Adversarial Attacks against Deep Learning for Intrusion Detection in IoT Networks.
In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM), Waikoloa, HI, USA, 9–13 December 2019;
pp. 1–6.
124. Klambauer, G.; Unterthiner, T.; Mayr, A.; Hochreiter, S. Self-Normalizing Neural Networks. arXiv 2017, arXiv:1706.02515.
[CrossRef]
125. Koroniotis, N.; Moustafa, N.; Sitnikova, E.; Turnbull, B. Towards the Development of Realistic Botnet Dataset in the Internet of
Things for Network Forensic Analytics: Bot-IoT Dataset. Future Gener. Comput. Syst. 2019, 100, 779–796. [CrossRef]
126. Luo, Z.; Zhao, S.; Lu, Z.; Sagduyu, Y.E.; Xu, J. Adversarial Machine Learning Based Partial-Model Attack in IoT. In Proceedings of
the 2nd ACM Workshop on Wireless Security and Machine Learning, Linz, Austria, 13 July 2020; pp. 13–18.
127. Papadopoulos, P.; Thornewill Von Essen, O.; Pitropakis, N.; Chrysoulas, C.; Mylonas, A.; Buchanan, W.J. Launching Adversarial
Attacks against Network Intrusion Detection Systems for IoT. JCP 2021, 1, 252–273. [CrossRef]
128. Qiu, H.; Dong, T.; Zhang, T.; Lu, J.; Memmi, G.; Qiu, M. Adversarial Attacks Against Network Intrusion Detection in IoT Systems.
IEEE Internet Things J. 2021, 8, 10327–10335. [CrossRef]
129. Fu, X.; Zhou, N.; Jiao, L.; Li, H.; Zhang, J. The Robust Deep Learning–Based Schemes for Intrusion Detection in Internet of Things
Environments. Ann. Telecommun. 2021, 76, 273–285. [CrossRef]
130. Pacheco, Y.; Sun, W. Adversarial Machine Learning: A Comparative Study on Contemporary Intrusion Detection Datasets.
In Proceedings of the 7th International Conference on Information Systems Security and Privacy, Online, 11–13 February 2021;
pp. 160–171.
131. Anthi, E.; Williams, L.; Javed, A.; Burnap, P. Hardening Machine Learning Denial of Service (DoS) Defences against Adversarial
Attacks in IoT Smart Home Networks. Comput. Secur. 2021, 108, 102352. [CrossRef]
132. Husnoo, M.A.; Anwar, A. Do Not Get Fooled: Defense against the One-Pixel Attack to Protect IoT-Enabled Deep Learning
Systems. Ad Hoc Netw. 2021, 122, 102627. [CrossRef]
133. Jiang, H.; Lin, J.; Kang, H. FGMD: A Robust Detector against Adversarial Attacks in the IoT Network. Future Gener. Comput. Syst.
2022, 132, 194–210. [CrossRef]
134. Zhou, X.; Liang, W.; Li, W.; Yan, K.; Shimizu, S.; Wang, K.I.-K. Hierarchical Adversarial Attacks Against Graph-Neural-Network-
Based IoT Network Intrusion Detection System. IEEE Internet Things J. 2022, 9, 9310–9319. [CrossRef]
135. Fan, M.; Liu, Y.; Chen, C.; Yu, S.; Guo, W.; Wang, L.; Liu, X. Toward Evaluating the Reliability of Deep-Neural-Network-Based IoT
Devices. IEEE Internet Things J. 2022, 9, 17002–17013. [CrossRef]
136. Hou, T.; Wang, T.; Lu, Z.; Liu, Y.; Sagduyu, Y. IoTGAN: GAN Powered Camouflage Against Machine Learning Based IoT Device
Identification. In Proceedings of the 2021 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), Los
Angeles, CA, USA, 13 December 2021; pp. 280–287.
137. Bao, Z.; Lin, Y.; Zhang, S.; Li, Z.; Mao, S. Threat of Adversarial Attacks on DL-Based IoT Device Identification. IEEE Internet
Things J. 2022, 9, 9012–9024. [CrossRef]
138. Kotak, J.; Elovici, Y. Adversarial Attacks Against IoT Identification Systems. IEEE Internet Things J. 2023, 10, 7868–7883. [CrossRef]
139. Sánchez, P.M.S.; Celdrán, A.H.; Bovet, G.; Pérez, G.M. Adversarial Attacks and Defenses on ML- and Hardware-Based IoT Device
Fingerprinting and Identification. arXiv 2022, arXiv:2212.14677. [CrossRef]
140. Abusnaina, A.; Khormali, A.; Alasmary, H.; Park, J.; Anwar, A.; Mohaisen, A. Adversarial Learning Attacks on Graph-Based IoT
Malware Detection Systems. In Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems
(ICDCS), Dallas, TX, USA, 7–9 July 2019; pp. 1296–1305.
141. Taheri, R.; Javidan, R.; Shojafar, M.; Pooranian, Z.; Miri, A.; Conti, M. On Defending against Label Flipping Attacks on Malware
Detection Systems. Neural Comput. Appl. 2020, 32, 14781–14800. [CrossRef]
142. Antonakakis, M.; et al. Understanding the Mirai Botnet. In Proceedings of the 26th USENIX Security Symposium, 2017. Available online: https://www.usenix.org/system/files/
conference/usenixsecurity17/sec17-antonakakis.pdf (accessed on 13 November 2023).
143. Sharafaldin, I.; Habibi Lashkari, A.; Ghorbani, A.A. Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic
Characterization. In Proceedings of the 4th International Conference on Information Systems Security and Privacy, Madeira,
Portugal, 22–24 January 2018; pp. 108–116.
144. Anthi, E.; Williams, L.; Slowinska, M.; Theodorakopoulos, G.; Burnap, P. A Supervised Intrusion Detection System for Smart
Home IoT Devices. IEEE Internet Things J. 2019, 6, 9042–9053. [CrossRef]
145. Weka 3—Data Mining with Open Source Machine Learning Software in Java. Available online: https://www.cs.waikato.ac.nz/
ml/weka/ (accessed on 28 October 2023).
146. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86,
2278–2324. [CrossRef]
147. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
148. Krizhevsky, A. CIFAR-10 and CIFAR-100 Datasets. Available online: https://www.cs.toronto.edu/~kriz/cifar.html (accessed on
28 October 2023).
149. Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. Man vs. Computer: Benchmarking Machine Learning Algorithms for Traffic Sign
Recognition. Neural Netw. 2012, 32, 323–332. [CrossRef] [PubMed]
150. DS2OS Traffic Traces. Available online: https://www.kaggle.com/datasets/francoisxa/ds2ostraffictraces (accessed on 28
October 2023).
151. Guerra-Manzanares, A.; Medina-Galindo, J.; Bahsi, H.; Nõmm, S. MedBIoT: Generation of an IoT Botnet Dataset in a Medium-
Sized IoT Network. In Proceedings of the 6th International Conference on Information Systems Security and Privacy, Valletta,
Malta, 25–27 February 2020; pp. 207–218.
152. Kang, H.; Ahn, D.H.; Lee, G.M.; Yoo, J.D.; Park, K.H.; Kim, H.K. IoT Network Intrusion Dataset. IEEE Dataport. 2019. Available
online: https://ieee-dataport.org/open-access/iot-network-intrusion-dataset (accessed on 28 October 2023).
153. Hamza, A.; Gharakheili, H.H.; Benson, T.A.; Sivaraman, V. Detecting Volumetric Attacks on IoT Devices via SDN-Based
Monitoring of MUD Activity. In Proceedings of the 2019 ACM Symposium on SDN Research, San Jose, CA, USA, 3 April 2019;
pp. 36–48.
154. Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. arXiv 2016, arXiv:1609.02907.
[CrossRef]
155. Xu, K.; Li, C.; Tian, Y.; Sonobe, T.; Kawarabayashi, K.; Jegelka, S. Representation Learning on Graphs with Jumping Knowledge
Networks. arXiv 2018, arXiv:1806.03536. [CrossRef]
156. Zhou, X.; Liang, W.; Wang, K.I.-K.; Huang, R.; Jin, Q. Academic Influence Aware and Multidimensional Network Analysis for
Research Collaboration Navigation Based on Scholarly Big Data. IEEE Trans. Emerg. Top. Comput. 2021, 9, 246–257. [CrossRef]
157. Sun, Z.; Ambrosi, E.; Pedretti, G.; Bricalli, A.; Ielmini, D. In-Memory PageRank Accelerator with a Cross-Point Array of Resistive
Memories. IEEE Trans. Electron. Devices 2020, 67, 1466–1470. [CrossRef]
158. Ma, J.; Ding, S.; Mei, Q. Towards More Practical Adversarial Attacks on Graph Neural Networks. arXiv 2020, arXiv:2006.05057.
[CrossRef]
159. Wong, E.; Rice, L.; Kolter, J.Z. Fast Is Better than Free: Revisiting Adversarial Training. arXiv 2020, arXiv:2001.03994. [CrossRef]
160. Bao, J.; Hamdaoui, B.; Wong, W.-K. IoT Device Type Identification Using Hybrid Deep Learning Approach for Increased IoT
Security. In Proceedings of the 2020 International Wireless Communications and Mobile Computing (IWCMC), Limassol, Cyprus,
15–19 June 2020; pp. 565–570.
161. Sivanathan, A.; Gharakheili, H.H.; Loi, F.; Radford, A.; Wijenayake, C.; Vishwanath, A.; Sivaraman, V. Classifying IoT Devices in
Smart Environments Using Network Traffic Characteristics. IEEE Trans. Mob. Comput. 2019, 18, 1745–1759. [CrossRef]
162. Trabelsi, C.; Bilaniuk, O.; Zhang, Y.; Serdyuk, D.; Subramanian, S.; Santos, J.F.; Mehri, S.; Rostamzadeh, N.; Bengio, Y.; Pal, C.J.
Deep Complex Networks. arXiv 2017, arXiv:1705.09792. [CrossRef]
163. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.;
Gelly, S.; et al. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
[CrossRef]
164. Sánchez Sánchez, P.M.; Jorquera Valero, J.M.; Huertas Celdrán, A.; Bovet, G.; Gil Pérez, M.; Martínez Pérez, G. LwHBench:
A Low-Level Hardware Component Benchmark and Dataset for Single Board Computers. Internet Things 2023, 22, 100764.
[CrossRef]
165. De Keersmaeker, F.; Cao, Y.; Ndonda, G.K.; Sadre, R. A Survey of Public IoT Datasets for Network Security Research. IEEE
Commun. Surv. Tutor. 2023, 25, 1808–1840. [CrossRef]
166. Kaur, B.; Dadkhah, S.; Shoeleh, F.; Neto, E.C.P.; Xiong, P.; Iqbal, S.; Lamontagne, P.; Ray, S.; Ghorbani, A.A. Internet of Things (IoT)
Security Dataset Evolution: Challenges and Future Directions. Internet Things 2023, 22, 100780. [CrossRef]
167. Alex, C.; Creado, G.; Almobaideen, W.; Alghanam, O.A.; Saadeh, M. A Comprehensive Survey for IoT Security Datasets
Taxonomy, Classification and Machine Learning Mechanisms. Comput. Secur. 2023, 132, 103283. [CrossRef]
168. Ahmad, R.; Alsmadi, I.; Alhamdani, W.; Tawalbeh, L. A Comprehensive Deep Learning Benchmark for IoT IDS. Comput. Secur.
2022, 114, 102588. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.