
sensors

Review

Object Detection, Recognition, and Tracking Algorithms for ADASs—A Study on Recent Trends

Vinay Malligere Shivanna 1,* and Jiun-In Guo 1,2,3

1 Department of Electrical Engineering, Institute of Electronics, National Yang-Ming Chiao Tung University, Hsinchu City 30010, Taiwan; jiguo@nycu.edu.tw
2 Pervasive Artificial Intelligence Research (PAIR) Labs, National Yang Ming Chiao Tung University, Hsinchu City 30010, Taiwan
3 eNeural Technologies Inc., Hsinchu City 30010, Taiwan
* Correspondence: vinay.ms23@gmail.com

Abstract: Advanced driver assistance systems (ADASs) are becoming increasingly common in modern-day vehicles, as they not only improve safety and reduce accidents but also aid in smoother and easier driving. ADASs rely on a variety of sensors such as cameras, radars, lidars, and a combination of sensors, to perceive their surroundings and identify and track objects on the road. The key components of ADASs are object detection, recognition, and tracking algorithms that allow vehicles to identify and track other objects on the road, such as other vehicles, pedestrians, cyclists, obstacles, traffic signs, traffic lights, etc. This information is then used to warn the driver of potential hazards or used by the ADAS itself to take corrective actions to avoid an accident. This paper provides a review of prominent state-of-the-art object detection, recognition, and tracking algorithms used in different functionalities of ADASs. The paper begins by introducing the history and fundamentals of ADASs followed by reviewing recent trends in various ADAS algorithms and their functionalities, along with the datasets employed. The paper concludes by discussing the future of object detection, recognition, and tracking algorithms for ADASs. The paper also discusses the need for more research on object detection, recognition, and tracking in challenging environments, such as those with low visibility or high traffic density.

Keywords: object detection; object tracking; advanced driver assistance system (ADAS); deep learning

Citation: Malligere Shivanna, V.; Guo, J.-I. Object Detection, Recognition, and Tracking Algorithms for ADASs—A Study on Recent Trends. Sensors 2024, 24, 249. https://doi.org/10.3390/s24010249

Academic Editors: Stelios Krinidis and Christos Nikolaos E. Anagnostopoulos

Received: 28 September 2023; Revised: 13 December 2023; Accepted: 20 December 2023; Published: 31 December 2023

Correction Statement: This article has been republished with a minor change. The change does not affect the scientific content of the article and further details are available within the backmatter of the website version of this article.

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Advanced driver assistance systems (ADASs) are a group of electronic technologies that assist drivers in driving and parking functions. Through a safe human–machine interface, ADASs increase car and road safety. They use automated technology, such as sensors and cameras, to detect nearby obstacles or driver errors, and respond or issue alerts accordingly. They can enable various levels of autonomous driving, depending on the features installed in the car.
ADASs use a variety of sensors such as cameras, radar, lidar, and a combination of these, to detect objects and conditions around the vehicle. The sensors send data to a computing system, which then analyzes the data and determines the best course of action based on the algorithmic design. For instance, if a camera detects a pedestrian in the vehicle’s path, the computing system may trigger the ADAS to sound an alarm or apply the brakes.
The chronicles of ADAS date back to the 1970s [1,2] with the development of the first anti-lock braking system (ABS). Following a slow and steady evolution, additional features such as the lane departure warning system (LDWS) and electronic stability control (ESC) emerged in the 1990s. In recent years, there has been a rapid development of numerous ADASs, with new functionalities being introduced every other day and becoming increasingly prevalent in modern vehicles, as they offer a variety of safety features that
aid in preventing accidents, relying on the aforementioned variety of sensors that have made the ADAS a potential system with which to significantly reduce the number of traffic accidents and fatalities. A study by the Insurance Institute for Highway Safety [3] found that different uses of ADASs can reduce the risk of a fatal crash by up to 20–25%. Therefore, ADASs are becoming increasingly common in cars. In 2021, 33% of new cars sold in the United States had ADAS features. This number is expected to grow to 50% by 2030, as ADASs are expected to play a major role in the future of transportation [4]. By helping to prevent accidents and collisions, reducing drivers’ fatigue and stress [5,6], improving fuel efficiency [7,8], making parking easier and more convenient [9] and thereby providing peace of mind to drivers and passengers [5,6], ADASs can save lives and make our roads safer.
Additionally, various features of ADASs, as shown in Figure 1, are a crucial part of the development of autonomous driving; in other words, self-driving cars, as autonomous vehicles, rely on the performance and efficiency of ADASs to detect objects and conditions in their surroundings in real-world scenarios. Self-driving cars use a combination of ADASs and artificial intelligence to drive themselves. Therefore, ADASs are continuing to play an important role in the development of autonomous driving as the technology matures.

Figure 1. Different features of ADASs.
The basic functionalities of ADASs are object detection, recognition, and tracking. Numerous algorithms allow vehicles to detect and recognize—in other words, to identify and then track—other objects on the road, such as vehicles, pedestrians, cyclists, traffic signs, lanes, probable obstacles on the road, and more; warn the driver of potential hazards; and/or take evasive action automatically.
There are a number of different object detection, recognition, and tracking algorithms that have been developed for ADASs. These algorithms can be broadly classified into two main categories: traditional methods and deep learning (DL) methods, as discussed in detail in Section 1.3.
This paper attempts to provide a comprehensive review of recent trends in different algorithms for various ADAS functions. The paper begins by discussing the challenges of object detection, recognition, and tracking in ADAS applications. The paper then discusses the different types of sensors used in ADASs and different types of object detection, recognition, and tracking algorithms that have been developed for various ADAS methodologies and datasets used to train and test the methods. The paper concludes by discussing the future trends in object detection, recognition, and tracking for ADASs.

1.1. Basic Terminologies
Before diving into the main objective of the paper, the section below introduces some
of the basic terminologies commonly used in the field of ADAS research:

a. Image processing is the process of manipulating digital images to improve their quality
or extract useful information from them. Image processing techniques are commonly
used in ADASs for object detection, recognition, and tracking tasks;
b. Object detection is the task of identifying and locating objects in a scene, such as
vehicles, pedestrians, traffic signs, and other objects that could pose a hazard to
the driver;
c. Object tracking involves following the movement of vehicles, pedestrians, and other
objects over time to predict their future trajectories;
d. Image segmentation is the task of dividing an image into different regions, each of
which corresponds to a different object or part of an object such as the bumper, hood,
and wheels and other objects such as pedestrians, traffic signs, lanes, forward objects,
and so on;
e. Feature extraction is the extraction of features like shape, size, color, and so on from an
image or a video; these features are used to identify objects or track their movements.
f. Classification is the task of assigning a label such as vehicles, pedestrians, traffic signs,
or others to an object or several images to categorize the objects;
g. Recognition is the task of identifying an object or a region in an image by its name or
other attributes.
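To make items (e) and (f) above concrete, the short Python sketch below extracts a color-histogram feature vector from an image patch, which could then be passed to any classifier that assigns labels such as vehicle or pedestrian. The patch size, bin count, and random placeholder data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def color_histogram(patch: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenate per-channel histograms of an H x W x 3 uint8 patch
    into one normalized feature vector (item e: feature extraction)."""
    features = []
    for channel in range(3):
        hist, _ = np.histogram(patch[..., channel], bins=bins, range=(0, 256))
        features.append(hist)
    vec = np.concatenate(features).astype(np.float64)
    return vec / (vec.sum() + 1e-9)

# Item f (classification): any classifier that maps such feature vectors to
# labels like 'vehicle', 'pedestrian', or 'background' could consume them.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in for a cropped region
    print(color_histogram(patch).shape)  # (24,) = 3 channels x 8 bins
```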

1.2. An Overview of ADASs


The history of ADAS technology can be traced back to the 1970s with the adoption
of the anti-lock braking system [10,11]. Early ADASs including electronic stability control,
anti-lock brakes, blind spot information systems, lane departure warning, adaptive cruise
control, and traction control emerged in the 1900s and 2000s [12,13]. These systems can
be affected by mechanical alignment adjustments or damage from a collision requiring
automatic reset for these systems after a mechanical alignment is performed.

1.2.1. The Scope of ADASs


ADASs perform a variety of tasks using object detection, recognition, and tracking
algorithms which are deemed as falling within the scope of ADASs; namely, (i) vehicle
detection, (ii) pedestrian detection, (iii) traffic sign detection (TSD), (iv) driver monitoring
system (DMS), (v) lane departure warning system (LDWS), (vi) forward collision warning
system (FCWS), (vii) blind-spot detection (BSD), (viii) emergency braking system (EBS),
(ix) adaptive cruise control (ACC), and (x) around view monitoring (AVM).
These are some of the most important of the many ADAS features that rely on detection,
recognition, and tracking algorithms. These algorithms are constantly being improved as
the demand for safer vehicles continues to grow.

1.2.2. The Objectives of Object Detection, Recognition, and Tracking in ADASs


An ADAS has various functions with different objectives, which can be listed as follows:
a. Improving road safety: ADASs can aid in improving road safety by reducing the
number of accidents; this is achieved by warning drivers of potential hazards and
by taking corrective actions to avoid collisions. For example, an LDWS can warn the
driver if they are about to drift out of their lane, while a forward collision warning
system can warn the driver if they are about to collide with another vehicle;
b. Reducing driver workload: ADASs can help to reduce driver workload by automating
some of their driving tasks. This can help to make driving safer and more enjoyable.
For example, ACC can automatically maintain a safe distance between the vehicle and
the vehicle in front of it, and lane-keeping assist can automatically keep the vehicle
centered in its lane;
c. Increasing fuel efficiency: ADASs can help to increase fuel efficiency by reducing
the need for the driver to brake and accelerate, which is achieved by maintaining
a constant speed and by avoiding sudden speed changes. For example, ACC can
automatically adjust the speed of the vehicle to maintain a safe distance from the
vehicle in front of it, which can help to reduce fuel consumption;
d. Providing information about the road environment: ADASs can provide drivers with
more information about the road environment, such as the speed of other vehicles, the
distance to the nearest object, traffic signs, and the presence of pedestrians or cyclists.
This information can help drivers to make better decisions about how to drive and
can help to reduce the risk of accidents;
e. Assisting drivers with difficult driving tasks: ADASs can assist drivers with difficult
driving tasks, such as parking, merging onto a highway, and driving in bad weather
conditions, thereby reducing driver workload and enabling safer driving;
f. Ensuring a comfortable and enjoyable driving experience: ADASs can provide a more
comfortable and enjoyable driving experience by reducing stress and fatigue that
drivers experience which can be achieved by automating some of the tasks involved in
driving, such as maintaining a constant speed and avoiding sudden changes in speed.
The ADAS algorithms are designed to achieve these objectives by using sensors,
such as cameras, radar, lidar, and now a combination of these, to collect data about the
road environment. The data thus obtained are processed by the algorithms as per their
design to identify and track objects, predict the future movement of objects, and warn the
driver of potential hazards. These ADAS algorithms are constantly being improved as
new technologies are being developed. Continuous and consistent advancements in these
technologies are making ADASs even more capable of improving road safety and reducing
drivers’ workloads.

1.2.3. The Challenges of ADASs


The essential functions of ADASs, namely object detection, recognition, and tracking, allow ADASs to identify and track objects in the vehicle’s surroundings, such as other vehicles, pedestrians, cyclists, and sometimes random objects and obstacles. Using this information, ADASs can prevent accidents, keep the vehicle in its lane, and provide other driver assistance features. However, there are various challenges associated with object detection, recognition, and tracking in ADASs, such as:
a. Varying environmental conditions: ADASs must be able to operate in a variety of
environmental conditions, including different lighting conditions like bright sunlight,
dark shadows, fog, daytime, nighttime, etc., different weather conditions such as
drizzle, rain, snow, and so on, along with various road conditions including dirt,
gravel, etc.;
b. Occlusion: objects on the road in real scenarios can often be occluded by other objects,
such as other vehicles, pedestrians, or trees, making it difficult for ADASs to detect
and track objects;
c. Deformation: objects on the road can often be deformed, such as when a vehicle is
turning or when a pedestrian is walking, causing difficulties for ADASs in detecting
and tracking objects;
d. Scale: objects on the road can vary greatly in size, from small pedestrians to large
trucks, inducing difficulties for ADASs in detecting and tracking objects of all sizes;
e. Multi-object tracking: ADASs must be able to track multiple objects simultaneously,
and this can be challenging as objects move and interact with each other in complex
ways in real-world scenarios;
f. Real-time performance: most importantly, ADASs must be able to detect, recognize,
and track objects in real time, which is essential for safety-critical applications, as
delays in detection or tracking can lead to accidents and make them unreliable.
Researchers are working on developing newer algorithms and improving the existing
algorithms and techniques to address these challenges. Due to this, ADASs are becoming
increasingly capable of detecting and tracking objects in a variety of challenging conditions.

1.2.4. The Essentials of ADASs


The above section discusses the challenges of different ADAS methods, whereas in this section, we discuss the numerous requirements of ADASs [14,15], which must be tackled before the aforementioned issues can be resolved. In other words, ADAS algorithms face numerous additional predicaments while working to overcome the challenges discussed in the previous section:
a. The need for accurate sensors: ADASs rely on a variety of sensors to detect and track
objects on the road. These sensors must be accurate and reliable to provide accurate
information to the ADAS. Nevertheless, sensors are usually affected by factors such as
weather, lighting, and the environment, causing difficulties for sensors in providing
accurate information, and thus leading to errors in the ADASs;
b. The need for reliable algorithms: ADASs also rely on a variety of algorithms to process
the data from the sensors and make decisions about how to respond to objects on
the road. These algorithms must be reliable to make accurate and timely decisions.
However, these algorithms can also be affected by factors such as the complexity of
the environment and the number of objects on the road. This makes it difficult for
algorithms to make accurate decisions, leading to errors in the ADAS;
c. The need for integration with other systems: ADASs must be integrated with different
systems in the vehicle, such as the braking system and the steering system. This
integration is necessary in order for the ADAS system to take action to avoid probable
accidents. Nonetheless, integration is complex and time-consuming, resulting in
deployment delays of ADASs;
d. The cost of ADASs: ADASs are expensive to develop and deploy, making it difficult
for some manufacturers to offer ADASs as standard features in their vehicles. As a
result, ADASs are often only available as optional features, which can make them less
accessible to all drivers;
e. The acceptance of ADASs by drivers: Some drivers may still be hesitant to adopt
ADASs because they worry about the technology or they do not trust the technology.
This will result in difficulties persuading drivers to opt for vehicles with ADASs.
Despite these challenges, ADASs have the potential to significantly improve road
safety. As the technology continues to improve, ADASs are likely to become more affordable
and more widely accepted by drivers. This will help to make roads safer for everyone.

1.3. ADAS Algorithms: Traditional vs. Deep Learning


There are two main types of algorithms used in ADASs: traditional algorithms and
DL algorithms. In this section, we discuss the advantages and disadvantages of traditional
and DL algorithms for ADASs and also some of the challenges involved in developing and
deploying ADASs.

1.3.1. Traditional Algorithms


Traditional methods for object detection, recognition, and tracking are typically the most common type of algorithms used in ADASs, based on hand-crafted, rule-based features and heuristics designed to capture the distinctive characteristics of different objects. For example, a feature for detecting vehicles might be the presence of four wheels and a windshield. This means that these algorithms use a set of pre-defined rules to determine
what objects are present in the environment and how to respond to them. For instance, a
traditional lane-keeping algorithm might use a rule that says, ‘If the vehicle is drifting out
of its lane, then turn the steering wheel in the opposite direction’ or ‘a rule might state that
if a vehicle is detected in the vehicle’s blind spot, then the driver should be warned’.
Traditional methods are less complex than DL algorithms, making them easier to
develop, and are very effective in certain cases, but they are difficult to generalize to new
objects or situations because they are limited by the rules that are hard-coded into them. If
a new object, obstacle, or hazard is not covered by a rule, then the algorithm may not be
able to detect it. Some of the basic traditional methods-based algorithms are:
a. Object detection: Traditional object detection algorithms typically use a two-step approach:
i. The region proposal step identifies potential regions in an image that may
contain objects, which is typically carried out by using a sliding window
approach, where a small window is moved across the image and features are
extracted from each window;
ii. The classification step classifies each region as an object or background. This is
typically carried out by using a machine learning (ML) algorithm, such as a
support vector machine (SVM) [16] or a random forest [17];
b. Object recognition: Traditional object recognition algorithms typically use a feature-
based approach:
i. The feature extraction step extracts features from an image that are relevant to
the object class, which is typically carried out by using hand-crafted features,
such as color histograms [18], edge features [19], or shape features [20];
ii. The classification step classifies the object class by using a ML algorithm, such
as a SVM [16] or random forest [17];
c. Object tracking: Traditional object-tracking algorithms typically use a Kalman filter [21] (a minimal sketch is given after this list):
i. The state estimation step estimates the state of the object, such as its position, velocity, and acceleration;
ii. The measurement update step updates the state estimate based on new measurements of the object.
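To illustrate the two Kalman-filter steps named in item (c), the following minimal sketch tracks a single object with a constant-velocity model: predict() is the state estimation step and update() is the measurement update step. The state layout, time step, and noise covariances are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

class ConstantVelocityKalman:
    """Track state x = [px, py, vx, vy] from noisy position measurements."""

    def __init__(self, dt: float = 0.1):
        self.x = np.zeros(4)                 # state estimate
        self.P = np.eye(4) * 10.0            # state covariance
        self.F = np.eye(4)                   # constant-velocity transition model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))            # we only measure position
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01            # process noise (assumed)
        self.R = np.eye(2) * 0.5             # measurement noise (assumed)

    def predict(self):
        """State estimation step: propagate position and velocity one time step."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        """Measurement update step: correct the estimate with a new detection."""
        z = np.asarray(z, dtype=float)
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

if __name__ == "__main__":
    kf = ConstantVelocityKalman(dt=0.1)
    for z in [(1.0, 0.9), (2.1, 2.0), (2.9, 3.1)]:   # noisy detections of one object
        kf.predict()
        print(kf.update(z)[:2])                      # filtered position estimate
```

In a real ADAS pipeline, the measurements z would come from a detector running on each frame, and one such filter would typically be maintained per tracked object.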
These traditional object detection, recognition, and tracking algorithms are effective for a variety of ADAS applications. However, they can be computationally expensive and may not be able to handle challenging conditions, such as occlusion or low lighting.
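As a rough sketch of the two-step traditional pipeline described in item (a), that is, region proposals from a sliding window followed by classification of hand-crafted features with an SVM, the example below uses HOG features from scikit-image and a linear SVM from scikit-learn. The window size, stride, HOG settings, and the toy training data are assumptions made purely for illustration; a real detector would be trained on labeled vehicle and non-vehicle patches.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def sliding_windows(image, size=64, stride=16):
    """Region proposal step: yield (x, y, patch) for every window position."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield x, y, image[y:y + size, x:x + size]

def detect(image, classifier, size=64, stride=16):
    """Classification step: score each window using a hand-crafted HOG feature."""
    detections = []
    for x, y, patch in sliding_windows(image, size, stride):
        feature = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2))
        if classifier.predict([feature])[0] == 1:      # 1 = assumed "vehicle" label
            detections.append((x, y, size, size))
    return detections

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder training data; a real system would use labeled image patches.
    train_patches = rng.random((20, 64, 64))
    train_labels = rng.integers(0, 2, size=20)
    feats = [hog(p, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2)) for p in train_patches]
    clf = LinearSVC().fit(feats, train_labels)
    print(len(detect(rng.random((128, 128)), clf)))
```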
In recent years, there has been a trend towards using DL algorithms for object detection,
recognition, and tracking in ADASs. DL algorithms have been shown to be more accurate
than traditional algorithms, and they can handle challenging conditions more effectively.

1.3.2. Deep Learning Algorithms


Inspired by the human brain, DL methods for object detection, recognition, and
tracking use artificial neural networks (ANNs) to learn the features that are important for
identifying different objects. They are composed of layers of interconnected nodes. Each
node performs a simple calculation, and the output of each node is used as the input to the
next node.
DL algorithms can learn to detect objects, obstacles, and hazards from large datasets
of labeled data usually collected using a variety of sensors. The algorithm is trained to
associate specific patterns in the data with specific objects or hazards. DL algorithms are
generally more complex than traditional algorithms, but they can achieve higher accuracy
as they are not limited by hand-crafted rules, they can learn to detect objects and hazards
not covered by any rules, and they are also able to handle challenging conditions, such
as occlusion or low lighting, more effectively. Some of the standard DL method-based
algorithms are discussed below:
a. Object detection: DL object detection algorithms commonly use a convolutional neural
network (CNN) to extract features from an image. The CNN is then trained on a
dataset of images that have been labeled with the objects that they contain. Once the
CNN is trained, it can be used to detect objects in new images;
b. Object recognition: DL object recognition algorithms also conventionally use a CNN
to extract features from an image. However, the CNN is trained on a dataset of images
that have been labeled with the class of each object. The trained CNN can be used to
recognize the class of objects in new images;
c. Object tracking: DL object tracking algorithms typically use a combination of CNNs
and Kalman filters [21]. The CNN is used to extract features from an image and the
Kalman filter is used to track the state of the object over time.
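As a hedged illustration of the CNN mentioned in items (a) and (b), the sketch below defines a small convolutional network in PyTorch that maps fixed-size image patches to class scores. The layer sizes, the 64 x 64 patch size, and the four-class output are arbitrary assumptions and do not correspond to any specific detector reviewed here.

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Tiny CNN: stacked conv/pool layers learn features, a linear head classifies."""

    def __init__(self, num_classes: int = 4):   # e.g. vehicle, pedestrian, sign, background
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 8 * 8, num_classes)   # assumes 64x64 input patches

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.head(x.flatten(start_dim=1))

if __name__ == "__main__":
    model = PatchClassifier()
    scores = model(torch.randn(2, 3, 64, 64))   # two random 64x64 RGB patches
    print(scores.shape)                          # torch.Size([2, 4])
```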

2. Sensors Used in Object Detection, Recognition, and Tracking Algorithms of ADASs


Several sensors can be used for object detection, recognition, and tracking in ADASs.
The most common sensors include cameras, radars, and lidars. In addition to these sensors,
some other sensors can also be used, such as:
a. Ultrasonic sensors: used to detect objects that are close to the vehicle, aiding in
preventing collisions with pedestrians or other vehicles;
b. Inertial measurement units (IMUs): employed to track the movement of the vehicle
using which the accuracy of object detection and tracking can be improved;
c. GPS sensors: used to determine the position of the vehicle and are utilized to track the
movement of the vehicle and to identify objects that are in the vehicle’s path;
d. Gyroscope sensors: used to track the orientation of the vehicle and employed to
improve the accuracy of object detection and tracking algorithms.
The choice of sensors for object detection, recognition, and tracking in ADASs depends
on the specific application. For instance, a system that is designed to detect pedestrians
may use a combination of cameras and radar, while a system that is designed to track the
movement of other vehicles may use a combination of radar and lidar.
The combination of multiple sensors is mostly used in more recent state-of-the-art
methods, as this improves the accuracy of object detection, recognition, and tracking algo-
rithms. Combining sensors exploits the strengths of each sensor and compensates for the weaknesses of the others.

2.1. Cameras, Radar, and Lidar


Cameras, radar, and lidar are the most common types of sensors used in ADASs.
While there are two main types of cameras—monocular cameras are the most common type
used in ADASs, which have a single lens and can only see in two dimensions, while stereo
cameras have two lenses and can see in three dimensions—there are no distinctive types of
radars and lidars. These sensors are used in ADASs in a variety of ways, including:
a. Object detection: the sensors are used to detect objects in the road environment such as
pedestrians, vehicles, cyclists, and traffic signs, and then warn the driver of potential
hazards or take corrective actions like braking or steering control using the gathered
information;
b. Object recognition: the sensors are used to recognize the class of an object, such as
a pedestrian, a vehicle, a cyclist, or a traffic sign. This information can be used to
provide the driver with more information about the hazard, such as the type of vehicle,
the type of traffic sign and the road condition ahead, or the speed of a pedestrian;
c. Object tracking: the sensors can be used to track the movement of an object over time,
which is then used to predict the future position of an object, which can be used to
warn the driver of potential collisions.
The advantages of cameras are their low cost, ease of installation, wide field of view
(FOV), and high resolution, but they are easily impacted by weather conditions, occlusion
of objects, and varying light conditions. On the other hand, both radars and lidars are
resistant to varying weather conditions such as rain, snow, fog, and so on. While radars are
occlusion-resistant and provide a longer range than cameras, they fail to provide as many
details as cameras and are more expensive than cameras. Compared to both cameras and
radars, lidars provide very accurate information about the distance and shape of objects,
even in difficult conditions, and possess 3D capabilities, enabling them to create a 3D map
of the road environment that makes it easier and more efficient to identify and track objects
that are occluded by other objects. Nonetheless, lidars are more expensive than cameras
and radars, and lidar systems are more complex, making them more challenging to install
and maintain. Cameras are used in almost all ADAS functions, while radars and lidars
are used in FCWS, LDWS, BSD, and ACC, with lidars having an additional application in
autonomous driving.
All the above features allow these versatile sensors to be used for a variety of object
detection, recognition, and tracking tasks in ADASs. However, some challenges need to be
addressed before they can be used effectively in all conditions. Hence, some researchers
have attempted to use a combination of these sensors, as discussed in the following section.

2.2. Sensor Fusion


Sensor fusion is the process of combining data from multiple sensors to create a more
complete and accurate picture of the world. This can be used to improve the performance
of object detection, recognition, and tracking algorithms in ADASs.
Numerous different sensor fusion techniques can be used for ADASs, namely:
a. Data-level fusion: a technique that combines data from different sensors at the data
level by averaging the data from different sensors, or by using more sophisticated
techniques such as Kalman filtering [21,22];
b. Feature-level fusion: combines features extracted from data from different sensors by
combining the features, or by using more sophisticated techniques such as Bayesian
fusion [23,24];
c. Decision-level fusion: combines decisions made by different sensors by taking the
majority vote, or by using more sophisticated techniques such as the Dempster–Shafer
theory [25–27].
The choice of sensor fusion technique is application-specific. A data-level fusion may
be a good choice for applications where accuracy is critical, whereas a decision-level fusion
may be a good choice for applications where speed is critical.
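A minimal numerical illustration of data-level fusion: two sensors report the same quantity, here an assumed range to a lead vehicle from a camera and a radar, and the fused estimate weights each report by the inverse of its assumed measurement variance, which is the single-step scalar form of the Kalman-style averaging mentioned in item (a). The sensor names and noise figures are illustrative assumptions.

```python
def fuse_measurements(z_camera: float, var_camera: float,
                      z_radar: float, var_radar: float) -> tuple:
    """Inverse-variance (minimum-variance) fusion of two range estimates.

    This is the scalar, single-step special case of a Kalman update:
    the less noisy sensor receives the larger weight.
    """
    w_camera = 1.0 / var_camera
    w_radar = 1.0 / var_radar
    fused = (w_camera * z_camera + w_radar * z_radar) / (w_camera + w_radar)
    fused_var = 1.0 / (w_camera + w_radar)
    return fused, fused_var

if __name__ == "__main__":
    # Assumed example: camera range is noisy (var 4.0 m^2), radar is tighter (var 0.5 m^2).
    print(fuse_measurements(z_camera=24.0, var_camera=4.0, z_radar=22.5, var_radar=0.5))
```

Decision-level fusion would instead combine the final labels produced by each sensor, for example by majority vote.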
The benefits of using sensor fusion for object detection, recognition, and tracking in
ADASs can be listed as [15,28–31]:
a. Improved accuracy: sensor fusion improves the accuracy of object detection, recogni-
tion, and tracking algorithms by combining the strengths of different sensors;
b. Improved robustness: sensor fusion also improves the robustness of object detection,
recognition, and tracking algorithms by making them less susceptible to noise and
other disturbances;
c. Reduced computational complexity: sensor fusion also reduces the computational
complexity of object detection, recognition, and tracking algorithms, as the data from
multiple sensors can be processed together, resulting in saved time and processing
power.
Overall, sensor fusion is a promising, powerful technique that has the potential to
make ADAS object detection, recognition, and tracking algorithms much safer and more
reliable. Although sensor fusion is advantageous, it has some challenges [15,32], such as:
a. Data compatibility: the data from different sensors must be compatible to be fused,
implying the data must be in the same format and have the same resolution;
b. Sensor calibration: the sensors must be calibrated to ensure that they are providing
accurate data, which can be challenging, especially if the sensors are in motion;
c. Computational complexity: Sensor fusion is computationally expensive, especially if
a large number of sensors are being fused. This can limit the use of sensor fusion in
real-time applications.
Despite these challenges, sensor fusion is emerging with greater potential to improve
the performance of ADAS object detection, recognition, and tracking algorithms. As sensor
technology continues to improve, a fusion of sensors will become even more powerful and
efficient, and it will likely become a standard feature in ADASs.
The following section discusses the most commonly fused sensors in ADASs.

2.2.1. Camera–Radar Fusion


Camera–radar fusion is a technique that combines data from cameras and radar sensors
to improve the performance of object detection, recognition, and tracking algorithms in
ADASs. As cameras are good at providing good image quality but are susceptible to
weather conditions, radar sensors compensate by seeing through weather conditions. Data-
level fusion and decision-level fusion are the two main approaches to camera–radar fusion.

2.2.2. Camera–Lidar Fusion


Camera–lidar fusion is a technique that combines data from cameras and lidar sensors
to improve the performance of object detection, recognition, and tracking algorithms in
ADASs. Cameras are good at providing detailed information about the appearance of
objects, while lidar sensors are good at providing information about the distance and shape
of objects. By combining data from these two sensors, it is feasible to create a complete and
accurate picture of the object, leading to improved accuracy in object detection and tracking.
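A common building block of camera–lidar fusion is projecting lidar points into the camera image so that depth can be attached to objects detected in the image. The sketch below uses a simple pinhole camera model; the intrinsic matrix, the identity extrinsic transform, and the sample points are assumptions for illustration and would come from calibration in practice.

```python
import numpy as np

def project_lidar_to_image(points_lidar: np.ndarray,
                           K: np.ndarray,
                           T_cam_from_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 lidar points into pixel coordinates (u, v, depth).

    K is the 3x3 camera intrinsic matrix; T_cam_from_lidar is a 4x4 rigid
    transform from the lidar frame to the camera frame (both assumed known
    from calibration). Points behind the camera are dropped.
    """
    n = points_lidar.shape[0]
    homogeneous = np.hstack([points_lidar, np.ones((n, 1))])        # N x 4
    cam = (T_cam_from_lidar @ homogeneous.T).T[:, :3]               # N x 3 in camera frame
    cam = cam[cam[:, 2] > 0]                                        # keep points in front of the camera
    pixels = (K @ cam.T).T
    pixels[:, :2] /= pixels[:, 2:3]                                 # perspective division
    return np.hstack([pixels[:, :2], cam[:, 2:3]])                  # (u, v, depth)

if __name__ == "__main__":
    K = np.array([[700.0, 0.0, 640.0],
                  [0.0, 700.0, 360.0],
                  [0.0, 0.0, 1.0]])          # assumed intrinsics
    T = np.eye(4)                            # assumed extrinsics (lidar frame == camera frame)
    pts = np.array([[0.5, -0.2, 10.0], [1.0, 0.1, 25.0], [0.0, 0.0, -3.0]])
    print(project_lidar_to_image(pts, K, T))
```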

2.2.3. Radar–Lidar Fusion


Radar–lidar fusion is a technique that combines the data from radar and lidar sensors,
improving the performance of ADAS algorithms. Radar sensors use radio waves to detect
objects at long distances, while lidar sensors use lasers to detect objects in detail. By fusing
the data from the two sensors, the system can obtain a more complete and accurate view of
the environment.

2.2.4. Lidar–Lidar Fusion


Lidar–lidar fusion is a technique that combines data from two or more lidar sensors,
improving the performance of object detection, recognition, and tracking algorithms in
ADASs. Lidar sensors are good at providing information about the distance and shape
of objects, but they can be limited in their ability to detect objects that are close to the
vehicle or that are occluded by other objects. By fusing data from multiple lidar sensors, it
is possible to create a complete and accurate picture of the environment, which can lead to
improved accuracy in object detection and tracking. The advantages and disadvantages of the various ADAS sensors discussed above are listed in Table 1.

Table 1. Summary of the advantages and disadvantages of each sensor and combinations used in ADAS applications.

Camera
Advantages: (i) relatively inexpensive; (ii) easy to use; (iii) high-resolution images.
Disadvantages: (i) affected by environmental factors (lighting, weather); (ii) difficult to interpret images in low-visibility conditions; (iii) can be fooled by glare and reflections; (iv) can only detect objects in the visible spectrum.

Radar
Advantages: (i) can detect objects at a longer range than cameras, even in poor visibility; (ii) less affected by weather conditions; (iii) can be used to estimate the speed of objects.
Disadvantages: (i) lower resolution than cameras; (ii) more expensive than cameras; (iii) can be complex to integrate into vehicles.

Lidar
Advantages: (i) not affected by environmental factors; (ii) accurate measurement of distance, speed, and shape of objects.
Disadvantages: (i) expensive; (ii) difficult to mount on vehicles; (iii) can produce sparse point clouds; (iv) can be limited in field of view (FOV).

Camera–Radar Fusion
Advantages: (i) combines the strengths of cameras and radar sensors; (ii) can be used in challenging weather conditions.
Disadvantages: (i) more expensive than using a single sensor; (ii) can be complex to implement.

Camera–Lidar Fusion
Advantages: (i) combines the strengths of cameras and lidar; (ii) can provide accurate 3D measurements of objects; (iii) robust object detection and tracking system; (iv) can be used in challenging weather conditions.
Disadvantages: (i) more expensive than a camera or lidar alone; (ii) can be computationally complex.

Radar–Lidar Fusion
Advantages: (i) combines the strengths of radar and lidar sensors; (ii) improves accuracy of object detection and tracking in challenging weather conditions.
Disadvantages: (i) more expensive than a camera or lidar alone; (ii) can be computationally complex.

Lidar–Lidar Fusion
Advantages: (i) combines data from multiple lidar sensors; (ii) can improve the accuracy of 3D mapping and object detection; (iii) more accurate and reliable object detection and tracking system.
Disadvantages: (i) more expensive than lidars alone; (ii) can be computationally complex.

3. Systematic Literature Review


The main objective of this review is to determine the latest trends and approaches implemented for different ADAS methods in autonomous vehicles and discuss their achievements. This paper also attempts to evaluate the foundations of the methods, their implementation, and their applications to furnish a state-of-the-art understanding for new researchers in the field of computer vision and autonomous vehicles.
The writing of this paper followed three stages: planning, conducting, and observing. The planning phase involved clarifying the research questions and review protocol, which comprised identifying the publications’ sources, keywords to search for, and selection criteria. The conducting phase involved analyzing, extracting, and synthesizing the literature collection. This included identifying the key themes and findings from the literature and drawing conclusions that address the research questions and objectives. The observing stage presented the review results, summarizing the key findings as well as any limitations or implications of the study.

3.1. Research Questions (RQs)


The main objective of this review is to determine the trend of the methods implemented
for different ADAS methods in the field of autonomous vehicles, as well as the achievements
of the latest techniques. Additionally, we aim to provide a valuable foundation for the
methods, challenges, and opportunities, thus providing state-of-the-art knowledge to
support new research in the field of computer vision and ADASs.
Two research questions (RQs) have been defined as follows:
1. What techniques have been implemented for different ADAS methods in an au-
tonomous vehicle?
2. What dataset was applied for the network training, validation, and testing?
A focused approach has been adopted while scanning the literature. First, each article
was reviewed to see if it answered the earlier questions. The information acquired was
then presented comprehensively to achieve the vision of this article.

3.2. Review Protocol


Below, we have listed the literature search sources, search terms, and inclusion and
exclusion selection criteria, as well as the technique of literature collection used for this
systematic literature review (SLR).

3.2.1. Search Sources


IEEE Xplore and MDPI were chosen as the databases from which the data were
extracted.

3.2.2. Search Terms


Different sets of search terms were used to investigate the various ADAS methods
presented in this research. The OR, AND, and NOT operators were used to select and
combine the most relevant and commonly used applicable phrases. The AND operator
combined individual search strings into a search query. The databases included IEEE
Xplore and MDPI. The search terms used for the respective different methods of ADASs
are listed in the respective sections of this paper.

3.2.3. Inclusion Criteria


The study covered all primary publications published in English that discussed the
different ADAS methods or any other task related to them discussed in this paper. There
were no constraints on subject categories or time frames for a broad search spectrum. The
selected articles were among the top most cited journal papers published across four years,
from 2019 to 2022.
In addition, the below parameters were also considered while selecting the papers:
a. Relevance of the research papers to the topic of the review paper covering the most
important aspects of the topic and providing a comprehensive overview of the current
state of knowledge;
b. The quality of the research papers should be high. They should be well written, well
argued, and well supported by implementation details and experimental results;
c. Coverage of the research papers should include a wide range of perspectives on the topic and not be limited to a single viewpoint or approach;
d. The methodology presented in the research papers should be sound: the research methods must be rigorous and provide clear evidence to support their conclusions;
e. The research papers should be well written and easy to understand in a clear and
concise style so that the information is accessible and understandable to a wide
audience;
f. The research papers should have had a significant impact on the field. They should
have been cited by other researchers and used to inform new research.

3.2.4. Exclusion Criteria


Articles written in languages other than English were not considered. The exclu-
sion criteria also included short papers, such as abstracts or expanded abstracts, earlier
published versions of the detailed works, and survey/review papers.

4. Discussion—Methodology
4.1. Vehicle Detection
Vehicle detection, one of the key components and a critical task of ADASs, is the process of identifying and locating vehicles in the surrounding scene using sensors such as cameras, radars, and lidars together with computer vision techniques. This information is
used to provide drivers with warnings about potential hazards, such as cars that are too
close or that are changing lanes and pedestrians or cyclists that might be in the vehicles’
way. It is a crucial function for many ADAS features, such as ACC, LDWS, FCWS, and BSD,
discussed in the later sections of the paper.

Vehicle detection is a challenging task, as vehicles vary in size, shape, and color,
affecting their appearance in images and videos. They can be seen from a variety of
different angles, which can also affect their appearance; furthermore, vehicle sizes can
be too small or too big, they could be partially or fully occluded by other objects in the
scene; there are different types of vehicles, each with a unique appearance, and the lighting
conditions and possible background clutter also affect the appearance of vehicles. All of
these factors make detection challenging.
Despite these challenges, the vehicle detection algorithm in ADASs has greatly evolved
and is still evolving, and there have been significant advances in vehicle detection over the
years. Early algorithms were based on relatively simple-to-implement image processing
techniques, such as edge detection and color segmentation, but they were not very accurate.
In the early 2000s, there was a shift towards using ML techniques that can learn from
data, making them more accurate than simple image processing techniques. Some of the
most common ML algorithms used for vehicle detection include support vector machines
(SVMs), random forests, and DL NNs.
Deep learning NNs are the most effective machine learning algorithms for vehicle detection, as they can learn complex features from data, which makes them very accurate. However, DL NNs are also more computationally expensive than other ML algorithms. In recent years, there has been a trend towards using sensor fusion for vehicle detection.
The vehicle detection algorithms in ADASs are still evolving. As sensor technology
continues to improve, and as ML algorithms become more powerful, vehicle detection
algorithms will become even more accurate and reliable.

Search Terms and Recent Trends in Vehicle Detection


‘Vehicle detection’, ‘vehicle tracking’, and ‘vehicle detection and tracking’ are three
prominent search terms which were used to investigate the topic. The ‘OR’ operator was
used to choose and combine the most relevant and regularly used applicable phrases;
that is, the search phrases ‘vehicle detection’, ‘vehicle tracking’, and ‘vehicle detection
and tracking’ were discovered. Figure 2 shows the complete search query for each of the databases. The databases include IEEE Xplore and MDPI.

Figure 2. Search queries for each of the databases for vehicle detection. The databases include IEEE Xplore and MDPI.

Since the evolution of vehicle detection has been rapid, considering the detection, recognition, and tracking of other vehicles, pedestrians, and objects, plenty of different methods have been proposed in the past few years. Some of the recent prominent state-of-the-art vehicle detection methods are discussed in the following sections.
Ref. [33] presents a scale-insensitive CNN, SINet, which is designed for rapid and accurate vehicle detection. SINet employs two lightweight techniques: context-aware RoI pooling and multi-branch decision networks. These preserve small-scale object information and enhance classification accuracy. Ref. [34] introduces an integrated approach to
monocular 3D vehicle detection and tracking. It utilizes a CNN for vehicle detection and
employs a Kalman filter-based tracker for temporal continuity. The method incorporates
multi-task learning, 3D proposal generation, and Kalman filter-based tracking. Combining
radar and vision sensors, ref. [35] proposes a novel distant vehicle detection approach.
Radar generates candidate bounding boxes for distant vehicles, which are classified using
vision-based methods, ensuring accurate detection and localization. Ref. [36] focuses on
multi-vehicle tracking, utilizing object detection and viewpoint estimation sensors. The
CNN detects vehicles, while viewpoint estimation enhances tracking accuracy. Ref. [37]
utilizes CNN with feature concatenation for urban vehicle detection, improving robustness
through layer-wise feature combination. Ref. [38] presents a robust vehicle detection and
counting method integrating CNN and optical flow, while [39] pioneers vehicle detec-
tion and classification via distributed fiber optic acoustic sensing. Ref. [40] introduces
a vehicle tracking and speed estimation method using roadside lidar, incorporating a
Kalman filter. Ref. [41] modifies Tiny-YOLOv3 for front vehicle detection with SPP-Net
enhancement, excelling in challenging conditions. Ref. [42] proposes an Extended Kalman
Filter (EKF) for vehicle tracking using radar and lidar data, while [43] enhances SSD for
accurate front vehicle detection. Ref. [44] improves Faster RCNN for oriented vehicle
detection in aerial images with feature amplification and oversampling. Ref. [45] employs
reinforcement learning with partial vehicle detection for efficient intelligent traffic signal
control. Ref. [46] presents a robust DL framework for vehicle detection in adverse weather
conditions. Ref. [47] adopts GAN-based image style transfer for nighttime vehicle detection,
while ref. [48] introduces MultEYE for real-time vehicle detection and tracking using UAV
imagery. Ref. [49] analyzes traffic patterns during COVID-19 using Planet remote-sensing
satellite images for vehicle detection. Ref. [50] proposes one-stage anchor-free 3D vehicle
detection from lidar, ref. [51] fuses RGB-infrared images for accurate vehicle detection
using uncertainty-aware learning. Ref. [52] optimizes YOLOv4 for improved vehicle de-
tection and classification. Ref. [53] introduces a real-time foveal classifier-based system
for nighttime vehicle detection. Ref. [54] combines YOLOv4 and SPP-Net for multi-scale
vehicle detection in varying weather. Ref. [55] efficiently detects moving vehicles with
a CNN-based method incorporating background subtraction. Ref. [56] refines YOLOv5
for vehicle detection in aerial infrared images, ensuring robustness against challenges like
occlusion and low contrast.
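None of the specific detectors above is reproduced here, but as a generic, hedged illustration of how a modern DL detector is typically applied to a road image, the sketch below runs a COCO-pretrained Faster R-CNN from torchvision on a single frame and keeps high-confidence detections. The choice of this particular model and the 0.5 score threshold are assumptions for illustration only.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def detect_road_objects(image_chw: torch.Tensor, score_threshold: float = 0.5):
    """Run a COCO-pretrained Faster R-CNN on one CxHxW float image in [0, 1]."""
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")   # assumes torchvision >= 0.13
    model.eval()
    with torch.no_grad():
        prediction = model([image_chw])[0]   # dict with 'boxes', 'labels', 'scores'
    keep = prediction["scores"] >= score_threshold
    return prediction["boxes"][keep], prediction["labels"][keep], prediction["scores"][keep]

if __name__ == "__main__":
    frame = torch.rand(3, 480, 640)          # stand-in for a dashcam frame
    boxes, labels, scores = detect_road_objects(frame)
    print(boxes.shape, labels.shape, scores.shape)
```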
Overall, the aforementioned papers represent a diverse set of approaches to vehicle
detection and tracking. Each paper has its strengths and weaknesses, and it is important
to consider the specific application when choosing a method. However, all of the papers
represent significant advances in the field of vehicle detection and tracking. The list of
reviewed papers on vehicle detection is summarized in Table 2.

Table 2. Chosen publications regarding vehicle detection, their source title, and their number of citations.

SI No. | Ref. | Year | Source Title | Citations
1 | [33] | 2019 | IEEE Transactions on Intelligent Transportation Systems | 165
2 | [34] | 2019 | IEEE/CVF International Conference on Computer Vision (ICCV) | 88
3 | [35] | 2019 | IEEE International Conference on Robotics and Automation (ICRA) | 79
4 | [36] | 2019 | MDPI Intelligent Sensors | 58
5 | [37] | 2019 | MDPI Intelligent Sensors | 42
6 | [38] | 2019 | MDPI Remote Sensors | 41
7 | [39] | 2020 | IEEE Transactions on Vehicular Technology | 47
8 | [40] | 2020 | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 44
9 | [41] | 2020 | IEEE Access | 38
10 | [42] | 2020 | MDPI Sensors | 56
11 | [43] | 2020 | MDPI Sensors | 27
12 | [44] | 2020 | MDPI Remote Sensing | 27
13 | [45] | 2021 | IEEE Transactions on Intelligent Transportation Systems | 52
14 | [46] | 2021 | IEEE Transactions on Intelligent Transportation Systems | 48
15 | [47] | 2021 | IEEE Transactions on Intelligent Transportation Systems | 47
16 | [48] | 2021 | MDPI Remote Sensing | 37
17 | [49] | 2021 | MDPI Remote Sensing | 30
18 | [50] | 2021 | MDPI Sensors | 11
19 | [51] | 2022 | IEEE Transactions on Circuits and Systems for Video Technology | 20
20 | [52] | 2022 | IEEE Access | 13
21 | [53] | 2022 | IEEE Transactions on Intelligent Transportation Systems | 12
22 | [54] | 2022 | MDPI Electronics | 21
23 | [55] | 2022 | MDPI Sensors | 10
24 | [56] | 2022 | MDPI Electronics | 6

4.2. Pedestrian Detection


Pedestrian detection is also a key component of ADASs. Its goal is to use sensors to identify and track pedestrians in the surrounding environment, warn the driver of potential collisions with pedestrians, and take evasive action, such as automatically applying the brakes, if necessary.
Pedestrian detection systems typically use a combination of sensors, such as cameras,
radar, and lidar. Cameras are often used to identify the shape and movement of pedestrians,
while radar and lidar can be used to determine the distance and speed of pedestrians.
Cameras can be susceptible to glare and shadows, whereas radar and lidars are less
susceptible to these problems.
Pedestrian detection systems can be used to warn drivers of potential collisions
in a variety of ways. Some systems simply alert the driver with a visual or audible
warning. Others can take more active measures, such as automatically braking the vehicle
or slightly steering it away from the pedestrian. However, pedestrian detection is more
challenging, as pedestrians are often smaller and more difficult to distinguish from other
objects in the environment. Thus, it is an important safety feature for ADASs, as it can
help to prevent accidents involving pedestrians. According to the National Highway
Traffic Safety Administration (NHTSA) [57], pedestrians are involved in about 17% of all
traffic fatalities in the United States. Pedestrian detection systems can help to reduce this
number by warning drivers of potential hazards and by automatically applying the brakes
in emergencies.

Search Terms and Recent Trends in Pedestrian Detection


‘Pedestrian detection’, ‘pedestrian tracking’, and ‘pedestrian detection and tracking’
are three prominent search terms which were used to investigate this topic. The ‘OR’
operator was used to choose and combine the most relevant and regularly used applicable
phrases; that is, the search phrases ‘pedestrian detection’, ‘pedestrian tracking’, and ‘pedes-
trian detection and tracking’ were discovered. Figure 3 shows the complete search query
for each of the databases. The databases include IEEE Xplore and MDPI.
Figure 3. Search queries for each of the databases for pedestrian detection. The databases include IEEE Xplore and MDPI.

Ref.[58]
Ref. [58]introduces
introducesaanovel
novel approach
approachto to pedestrian
pedestrian detection,
detection, emphasizing
emphasizinghigh-level
high-level
semanticfeatures
semantic featuresinstead
insteadofof traditional
traditional low-level
low-level features.
features. This
This method
method employs
employs context-
context-
aware RoI pooling and
aware andaamulti-branch
multi-branchdecision
decisionnetwork
network to to
preserve
preservesmall-scale object
small-scale objectde-
tails and enhance classification accuracy. The CNN
details and enhance classification accuracy. The CNN initially initially captures high-level semantic
high-level semantic
features from
features fromimages,
images, which
which arearethen
thenused
usedto totrain
trainaaclassifier
classifiertotodistinguish
distinguishpedestrians
pedestrians
fromother
from otherobjects.
objects. Ref.
Ref. [59]
[59] proposes
proposes an adaptive non-maximum
non-maximum suppression
suppression (NMS)
(NMS) tech-
niquetailored
nique tailoredforforrefining
refiningpedestrian
pedestriandetection
detectionin incrowded
crowdedscenarios.
scenarios.Conventional
ConventionalNMS NMS
algorithmsoften
algorithms ofteneliminate
eliminatevalid
validdetections
detectionsalong
alongwith
withduplicates
duplicatesin incrowded
crowdedscenes.
scenes.TheThe
new‘Adaptive
new ‘Adaptive NMS’NMS’ algorithm
algorithm dynamically
dynamically adjusts
adjusts the
the NMS
NMS threshold
threshold based
based onon crowd
crowd
density,enabling
density, enablingthetheretention
retentionof ofmore
morepedestrian
pedestriancandidates
candidatesin incongested
congestedareas.
areas.Ref.
Ref.[60]
[60]
introduces
introducesthe the‘Mask-Guided
‘Mask-GuidedAttention
Attention Network’
Network’ (MGAN)
(MGAN) for for detecting
detecting occluded pedes-
trians.
trians. Utilizing
Utilizing aa CNN,
CNN, MGAN
MGAN extracts
extracts features
features from
from both pedestrians and backgrounds.
Pedestrian
Pedestrian features
features guide
guide the
the network’s
network’s focus
focus towards
towards occluded
occluded regions,
regions, improving
improving the the
accuracy
accuracyof ofdetecting
detectingoccluded
occludedpedestrians.
pedestrians. Ref.
Ref. [61]
[61] presents
presents aa real-time
real-time method
method to to track
track
pedestrians by utilizing camera and lidar sensors in a moving vehicle. Combining sensor
features enables accurate pedestrian tracking. Features from the camera image, such as
silhouette, clothing, and gait, are extracted. Additionally, features like height, width, and
depth are obtained from the lidar point cloud. These details facilitate precise tracking of
pedestrians’ locations and poses over time. A Kalman filter enhances tracking performance
through sensor data fusion, offering better insights into pedestrian behavior in dynamic
environments. Ref. [62] proposes a computationally efficient single-template matching
technique for accurate pedestrian detection in lidar point clouds. The method creates a
pedestrian template from training data and uses it to identify pedestrians in new point
clouds, even under partial occlusion. Ref. [63] focuses on tracking pedestrian flow and
statistics using a monocular camera and a CNN–Kalman filter fusion. The CNN extracts
features from the camera image, which is followed by a Kalman filter for trajectory estima-
tion. This approach effectively tracks pedestrian flow and vital statistics, including count,
speed, and direction.
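To make the tracking step concrete, the following is a minimal sketch of a constant-velocity Kalman filter of the kind used in [61,63] to smooth per-frame pedestrian positions; the state layout, time step, and noise values are illustrative assumptions rather than settings taken from those papers.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2D constant-velocity Kalman filter for smoothing
    per-frame pedestrian positions (state: [x, y, vx, vy])."""

    def __init__(self, dt=0.1, process_var=1.0, meas_var=0.5):
        self.x = np.zeros(4)                      # state estimate
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # only position is measured
        self.Q = np.eye(4) * process_var          # process noise
        self.R = np.eye(2) * meas_var             # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """z: measured (x, y) of the pedestrian from the detector or lidar."""
        y = np.asarray(z, dtype=float) - self.H @ self.x      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Example: smooth noisy detections of a pedestrian walking diagonally.
kf = ConstantVelocityKalman(dt=0.1)
for t in range(20):
    detection = (t * 0.1 + np.random.randn() * 0.2,
                 t * 0.05 + np.random.randn() * 0.2)
    kf.predict()
    smoothed = kf.update(detection)
```

In a full pipeline, one such filter would be maintained per tracked pedestrian and associated with new detections at every frame.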
Ref. [64] addresses hazy weather pedestrian detection with deep learning. DL mod-
els are trained on hazy weather datasets and use architectural modifications to handle
challenging conditions. This approach achieves high pedestrian detection accuracy, even
in hazy weather. Ref. [65] introduces the ‘NMS by Representative Region’ algorithm to
refine pedestrian detection in crowded scenes. By employing representative regions, this
method enhances crowded scene handling by comparing these regions and removing
duplicate detections, resulting in reduced false positives. Ref. [66] proposes a graininess-
aware deep feature learning approach, equipping DL models to handle grainy images.
A DL model is trained using a graininess-aware loss function on a dataset containing
grainy and non-grainy pedestrian images. This model effectively detects pedestrians in
new images, even when they are grainy. Ref. [67] presents a DL framework for real-time
vehicle and pedestrian detection on rural roads, optimized for embedded GPUs. Modified
Faster R-CNN detects both vehicles and pedestrians simultaneously in rural road scenes.
A new rural road image dataset is developed for training the model. Ref. [68] addresses
infrared pedestrian detection at night using an attention-guided encoder–decoder CNN.
Attention mechanisms focus on relevant regions in infrared images, enhancing detection
accuracy in low-light conditions. Ref. [69] focuses on improved YOLOv3-based pedestrian
detection in complex scenarios, incorporating modifications to handle various challenges
like occlusions, lighting variations, and crowded environments.
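As a concrete illustration of the NMS variants discussed above (refs. [59,65]), the sketch below implements greedy NMS with a per-box suppression threshold that is relaxed in dense regions, which is the basic mechanism Adaptive NMS builds on; the density input and threshold rule here are simplified assumptions, not the papers’ exact formulations.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def adaptive_nms(boxes, scores, densities, base_thr=0.5):
    """Greedy NMS where the suppression threshold is raised in crowded
    regions: thr_i = max(base_thr, density_i). `densities` is assumed to be
    a per-box crowd-density score in [0, 1] from some auxiliary predictor."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        thr = max(base_thr, float(densities[i]))   # looser threshold in crowds
        overlaps = iou(boxes[i], boxes[rest])
        order = rest[overlaps <= thr]               # suppress only strong overlaps
    return keep

# Toy example: two heavily overlapping pedestrians in a crowded region are both kept.
boxes = np.array([[10, 10, 50, 100], [15, 12, 55, 102], [200, 20, 240, 110]], float)
scores = np.array([0.9, 0.85, 0.7])
densities = np.array([0.8, 0.8, 0.1])
print(adaptive_nms(boxes, scores, densities))   # -> [0, 1, 2]
```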
Ref. [70] introduces Ratio-and-Scale-Aware YOLO (RASYOLO), handling pedestrians
with varying sizes and occlusions through ratio-aware anchors and scale-aware feature
fusion. Ref. [71] introduces Track Management and Occlusion Handling (TMOH), manag-
ing occlusions and multiple-pedestrian tracking through track suspension and resumption.
Ref. [72] incorporates a Part-Aware Multi-Scale fully convolutional network (PAM-FCN)
to enhance pedestrian detection accuracy by considering pedestrian body part informa-
tion and addressing scale variation. Ref. [73] proposes Attention Fusion for One-Stage
Multispectral Pedestrian Detection (AFOS-MSPD), combining attention fusion and a one-
stage approach for multispectral pedestrian detection, improving efficiency and accuracy.
Ref. [74] utilizes multispectral images for Multispectral Pedestrian Detection (MSPD), im-
proving detection using a DNN designed for multispectral data. Ref. [75] presents Robust
Pedestrian Detection Based on Multi-Spectral Image Fusion and Convolutional Neural
Networks (RPOD-FCN), utilizing multi-spectral image fusion and a CNN-based model for
accurate detection.
Ref. [76] introduces Uncertainty-Guided Cross-Modal Learning for Robust Multispec-
tral Pedestrian Detection (UCM-RMPD), addressing multispectral detection challenges
using uncertainty-guided cross-modal learning. Ref. [77] focuses on multimodal pedestrian
detection for autonomous driving using a Spatio-Contextual Deep Network-Based Multi-
modal Pedestrian Detection (SCDN-PMD) approach. Ref. [78] proposes a Novel Approach
to Model-Based Pedestrian Tracking Using Automotive Radar (NMPT radar), utilizing
radar data for model-based pedestrian tracking. Ref. [79] adopts YOLOv4 Architecture
for Low-Latency Multispectral Pedestrian Detection in Autonomous Driving (AYOLOv4),
enhancing detection accuracy using multispectral images. Ref. [80] introduces modifica-
tions to [79] called AIR-YOLOv3, an improved network-pruned YOLOv3 for aerial infrared
pedestrian detection, enhancing robustness and efficiency. Ref. [81] presents YOLOv5-
AC, an attention mechanism-based lightweight YOLOv5 variant for efficient pedestrian
detection on embedded devices. The list of reviewed papers on pedestrian detection is
summarized in Table 3.

Table 3. Chosen publications regarding pedestrian detection, their source title, and their number of
citations.

SI No. Ref. Year Source Title Citations


IEEE/CVF Conference on Computer Vision and Pattern
1 [58] 2019 186
Recognition (CVPR)
IEEE/CVF Conference on Computer Vision and Pattern
2 [59] 2019 163
Recognition (CVPR)
2019 IEEE/CVF International Conference on Computer
3 [60] 2019 111
Vision (ICCV)
4 [61] 2019 MDPI Sensors 45
5 [62] 2019 MDPI Electronics 26
6 [63] 2019 MDPI Applied Sciences 15
7 [64] 2020 IEEE Transactions on Industrial Electronics 98
2020 IEEE/CVF Conference on Computer Vision and
8 [65] 2020 76
Pattern Recognition (CVPR)
9 [66] 2020 IEEE Transactions on Image Processing 42
10 [67] 2020 MDPI Electronics 49
11 [68] 2020 MDPI Applied Science 28
12 [69] 2020 MDPI Sensors 14
13 [70] 2021 IEEE Transactions on Image Processing 54
IEEE/CVF Conference on Computer Vision and Pattern
14 [71] 2021 45
Recognition (CVPR)
15 [72] 2021 IEEE Transactions on Intelligent Transportation Systems 27
16 [73] 2021 MDPI Sensors 21
17 [74] 2021 MDPI Sensors 19
18 [75] 2021 MDPI Electronics 15
IEEE Transactions on Circuits and Systems for Video
19 [76] 2022 15
Technology
20 [77] 2022 IEEE Transactions on Intelligent Transportation Systems 12
21 [78] 2022 IEEE Transactions on Intelligent Transportation Systems 10
22 [79] 2022 MDPI Sensors 20
23 [80] 2022 MDPI Applied Sciences 11
24 [81] 2022 MDPI Sensors 11

4.3. Traffic Signs Detection


Traffic Signs Detection and Recognition (TSR) is another key component of ADASs
that automatically detects and recognizes traffic signs on the road and provides information
to the driver regarding speed limits, upcoming turns, and so on. TSR systems typically
use cameras to capture images of traffic signs and then use computer vision algorithms to
identify and classify the signs.
TSR systems can be a valuable safety feature, as they can help to prevent accidents
caused by driver distraction or drowsiness. For example, TSR systems can alert drivers to
speed limit changes, stop signs, and yield signs. They can also help drivers to stay in their
lane and avoid crossing over into oncoming traffic. Although TSR can be challenging due
to the variety of traffic signs, the different fonts and styles used, and the presence of noise
and clutter, TSR systems are becoming increasingly common in new vehicles. The NHTSA
has mandated that all new cars sold in the United States come equipped with TSR systems
by 2023 [57].

4.3.1. Search Terms and Recent Trends in Traffic Signs Detection


‘Traffic sign detection’, ‘traffic sign recognition’, ‘traffic sign classification’, ‘traffic
sign detection and recognition’, and ‘traffic sign detection and recognition system’ are
some of the prominent search terms which were used to investigate this topic. The ‘OR’
operator was used to choose and combine the most relevant and regularly used applicable
phrases; that is, the search phrases ‘traffic sign detection’, ‘traffic sign recognition’, ‘traffic
sign classification’, ‘traffic sign detection and recognition’, and ‘traffic sign detection and
recognition system’ were discovered. Figure 4 shows the complete search query for each of
the databases. The databases include IEEE Xplore and MDPI.
Yuan et al. [82] introduce VSSA-NET, a novel architecture for traffic sign detection
(TSD), which employs a vertical spatial sequence attention network to improve accuracy
in complex scenes. VSSA-NET extracts features via CNN, followed by a vertical spatial
sequence attention module to emphasize vertical locations crucial for TSD. The detection
module outputs traffic sign bounding boxes. Li and Wang [83] present real-time traffic
sign recognition using efficient CNNs, addressing diverse lighting and environmental
conditions. MobileNet extracts features from input images, followed by SVM classification.
Liu et al. [84] propose multi-scale region-based CNN (MR-CNN) for recognizing small
traffic signs. MR-CNN extracts multi-scale features using CNN, generates proposals with
RPN, and uses Fast R-CNN for classification and bounding box outputs. Tian et al. [85]
introduce a multi-scale recurrent attention network for TSD. CNN extracts multi-scale
features, the recurrent attention module prioritizes scale, and the detection module outputs
bounding boxes for robust detection across scenarios. Cao et al. [86] present improved TSDR
for intelligent vehicles. CNN performs feature extraction, RPN generates region proposals,
and SVM classifies proposals, enhancing reliability in dynamic road environments. Shao
et al. [87] improve Faster R-CNN TSD with a second RoI and HPRPN. CNN performs
feature extraction, RPN generates region proposals, and the second RoI refines proposals,
enhancing accuracy in complex scenarios.

Figure 4. Search queries for each of the databases for traffic sign detection. The databases include
IEEE Xplore and MDPI.

Zhang et al. [88] propose cascaded R-CNN with multiscale attention for TSD. RPN
generates proposals, Fast R-CNN classifies, and multiscale attention improves detection
performance, particularly when there is an imbalanced data distribution. Tabernik and
Skočaj [89] explore the DL framework for large-scale TSDR. CNN performs feature extraction,
RPN generates region proposals, and Fast R-CNN classifies, exploring DL’s potential
in handling diverse real-world scenarios. Kamal et al. [90] introduce automatic TSDR
using SegU-Net and modified Tversky loss. SegU-Net segments traffic signs and modified
loss function enhances detection and recognition, handling appearance variations. Tai
et al. [91] propose a DL approach for TSR with spatial pyramid pooling and scale analysis.
CNN performs feature extraction, while spatial pyramid pooling captures context
and scales, enhancing recognition across scenarios. Dewi et al. [92] evaluate the spatial
pyramid pooling technique on CNN for TSR system robustness. Assessing pooling sizes
and strategies, they evaluate different CNN architectures for effective traffic sign recognition.
Nartey et al. [93] propose robust semi-supervised TSR with self-training and weakly
supervised learning. CNN performs feature extraction, self-training labels unlabeled data,
and weakly supervised learning classifies labeled data, enhancing accuracy using limited
labeled data.
Dewi et al. [94] leverage YOLOv4 with synthetic GAN-generated data for advanced
TSR. YOLOv4 with synthetic data from BigGAN achieves top performance, enhancing
detection on the GTSDB dataset. Wang et al. [95] improve YOLOv4-Tiny TSR with new
features and classification modules. New data augmentation improves the performance
on the GTSDB dataset, optimizing recognition while maintaining efficiency. Cao et al. [96]
present improved sparse R-CNN for TSD with a new RPN and loss function, enhancing
detection accuracy using advanced techniques within the sparse R-CNN framework.
Lopez-Montiel et al. [97] propose DL-based embedded system evaluation and synthetic data
generation for TSD. Methods to assess DL system performance and efficiency for real-time
TSD applications are developed. Zhou et al. [98] introduce a learning region-based attention
network for TSR. The attention module emphasizes important image regions, potentially
enhancing recognition accuracy. Koh et al. [99] evaluate senior adults’ TSR recognition
through EEG signals, utilizing EEG signals to gain unique insights into senior individuals’
traffic sign perception.
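To make the spatial pyramid pooling idea used in [91,92] concrete, the sketch below shows a minimal pooling layer that converts an arbitrary-size CNN feature map into a fixed-length descriptor for the sign classifier; the pyramid levels are an assumption chosen for illustration.

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """feature_map: (N, C, H, W) tensor from a CNN backbone.
    Max-pools it to 1x1, 2x2, and 4x4 grids and concatenates the results,
    giving a fixed-length descriptor regardless of the input resolution."""
    n, c = feature_map.shape[:2]
    pooled = [F.adaptive_max_pool2d(feature_map, output_size=lvl).view(n, -1)
              for lvl in levels]
    return torch.cat(pooled, dim=1)   # shape: (N, C * sum(l * l for l in levels))

# Example: a 512-channel feature map from a traffic-sign crop of any size.
features = torch.randn(8, 512, 13, 17)
descriptor = spatial_pyramid_pool(features)
print(descriptor.shape)   # torch.Size([8, 10752]) = 512 * (1 + 4 + 16)
```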
Ahmed et al. [100] present a weather-adaptive DL framework for robust TSR. A cas-
caded detector with a weather classifier improves TSD performance in adverse conditions,
enhancing road safety. Xie et al. [101] explore efficient federated learning in TSR with spike
NNs (SNNs). SNNs enable training on decentralized datasets, minimizing communication
overhead and resources. Min et al. [102] propose semantic scene understanding and struc-
tural location for TSR, leveraging scene context and structural information for accurate
traffic sign recognition. Gu and Si [103] introduce a lightweight real-time TSD integration
framework based on YOLOv4. Novel data augmentation and YOLOv4 optimization are
used for speed and accuracy, achieving real-time performance. Liu et al. [104] introduce
the M-YOLO TSD algorithm for complex scenarios. M-YOLO detects and classifies traffic
signs, addressing detection in intricate environments. Wang et al. [105] propose real-time
multi-scale TSD for driverless cars. The multi-scale approach detects traffic signs of various
sizes, enhancing performance in diverse scenarios. The list of reviewed papers on traffic
signs detection is summarized in Table 4.

Table 4. Chosen publications, source title, and the number of citations for traffic signs detection.

SI No. Ref. Year Source Title Citations


1 [82] 2019 IEEE Transactions on Image Processing 118
2 [83] 2019 IEEE Transactions on Intelligent Transportation Systems 96
3 [84] 2019 IEEE Access 53
4 [85] 2019 IEEE Transactions on Intelligent Transportation Systems 50
5 [86] 2019 MDPI Sensors 66
6 [87] 2019 MDPI Sensors 44
7 [88] 2020 IEEE Access 151
8 [89] 2020 IEEE Transactions on Intelligent Transportation Systems 131
9 [90] 2020 IEEE Transactions on Intelligent Transportation Systems 52
10 [91] 2020 MDPI Applied Sciences 46
11 [92] 2020 MDPI Electronics 38
12 [93] 2020 MDPI Sensors 16
13 [94] 2021 IEEE Access 63
14 [95] 2021 IEEE Access 30
15 [96] 2021 IEEE Access 19
16 [97] 2021 IEEE Access 16
17 [98] 2021 MDPI Sensors 25
18 [99] 2020 MDPI Sensors 3
19 [100] 2022 IEEE Transactions on Intelligent Transportation Systems 15
20 [101] 2022 IEEE Transactions on Vehicular Technology 11
21 [102] 2022 IEEE Transactions on Intelligent Transportation Systems 11
22 [103] 2022 MDPI Entropy 13
23 [104] 2022 MDPI Symmetry 8
24 [105] 2022 MDPI Sensors 7

4.4. Driver Monitoring System (DMS)


A driver monitoring system (DMS), also called a driver monitoring and assistance
system (DMAS), is a camera-based safety system used to assess the driver’s alertness and
attention. It monitors a driver’s behavior by detecting and tracking the driver’s face, eyes,
and head position and warns or alerts them when they become distracted or drowsy for
long enough to lose situational awareness or full attention to the task of driving. DMSs can
also use other sensors, such as radar or infrared sensors, to gather additional information
about the driver’s state.
DMSs are becoming increasingly common in vehicles and are used to monitor the
driver’s alertness and attention. This information is then used to prevent accidents and
save lives by warning the driver if they are starting to become drowsy or distracted. Some
of the latest DMSs can even predict if drivers are eating and drinking while driving.

4.4.1. Driver Monitoring System Methods


There are a variety of methods used in DMSs. One common approach is to use a
camera to monitor the driver’s face, while the other approach is to use a sensor fusion
Sensors 2024, 24, 249 20 of 51

approach, which combines data from multiple sensors, such as cameras, radar, and eye
tracking sensors.
DMSs can use a variety of sensors to monitor the driver, including:
a. Facial recognition. This is the most common type of sensor used in DMSs. Facial
recognition systems can track the driver’s face and identify signs of distraction or
drowsiness, such as eye closure, head tilt, and lack of facial expression.
b. A head pose sensor tracks the position of the driver’s head and can identify signs of
distraction or drowsiness, such as looking away from the road or nodding off.
c. An eye gaze sensor tracks the direction of the driver’s eye gaze and can identify signs
of distraction or drowsiness, such as looking at the phone or dashboard.
d. An eye blink rate sensor tracks the driver’s eye blink rate and can identify signs of
drowsiness, such as a decrease in the blink rate.
e. Speech recognition is used in DMSs to detect if the driver is talking on the phone or if
they are not paying attention to the road.
The above sensors are used in DMSs to detect a variety of driver behaviors, such as
(i) when a driver is distracted by looking away from the road, talking on the phone, or
using a mobile device; (ii) when a driver is drowsy, which can be determined by tracking
the driver’s eye movements and eyelid closure; (iii) when a driver is inattentive, which can
be determined by tracking the driver’s head position and eye gaze.
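One widely used heuristic that illustrates how the eye-closure and blink-rate cues listed above can be turned into a drowsiness warning is the eye aspect ratio (EAR) computed from facial landmarks; the sketch below is a generic illustration with assumed threshold and frame-count values, not a method taken from the papers reviewed in this section.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: 6 (x, y) landmarks around one eye, ordered as in the common
    68-point face-landmark layout. EAR drops towards 0 when the eye closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

class DrowsinessMonitor:
    """Raise an alert when the eyes stay closed for too many consecutive frames.
    The threshold (0.22) and frame count (45 ~ 1.5 s at 30 fps) are illustrative."""

    def __init__(self, ear_threshold=0.22, closed_frames_for_alert=45):
        self.ear_threshold = ear_threshold
        self.closed_frames_for_alert = closed_frames_for_alert
        self.closed_frames = 0

    def update(self, left_eye, right_eye):
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        if ear < self.ear_threshold:
            self.closed_frames += 1
        else:
            self.closed_frames = 0
        return self.closed_frames >= self.closed_frames_for_alert  # True -> warn driver
```

In practice, the landmarks would come from a face-landmark detector running on the in-cabin camera, and the EAR signal would be one input among several (head pose, gaze, facial expression) to the monitoring logic.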
When a DMS detects risky driver behavior, it can provide a variety of alerts to the
driver, including alerts displayed on the dashboard or windshield, referred to as visual
alerts; alerts played through the vehicle’s speakers, which are called audio alerts; and haptic
alerts, in which alerts are issued through vibrations of the steering wheel or the driver’s
seat. In some cases, the DMS may also take corrective action, such as applying the brakes
or turning off the engine.

4.4.2. Search Terms and Recent Trends in Driver Monitoring System Methods
‘Driver monitoring system’ and ‘driver monitoring and assistance system’ are the
two prominent search terms used to investigate this topic. The ‘OR’ operator was used to
choose and combine the most relevant and regularly used applicable phrases. That is, the
search phrases ‘driver monitoring system’ and ‘driver monitoring and assistance system’
were discovered. Figure 5 shows the complete search query for each of the databases. The
databases include IEEE Xplore and MDPI.

Figure 5. Search queries for each of the databases for the driver monitoring system. The databases
include IEEE Xplore and MDPI.

The papers [106–114] discuss a variety of approaches to DMSs. These include some
of the key methods like (i) the powerful technique employing DL, which is used to extract
features from images and videos. These are used to identify driver behaviors such as eye
closure, head pose, and facial expressions. (ii) A more general approach is using machine
learning, which can be used to learn patterns from data. These are used to identify driver
behaviors that are not easily captured using traditional methods, such as hand gestures
and body language, and (iii) a technique that combines data from multiple sensors, re-
ferred to as sensor fusion, to improve the accuracy of DMSs. For instance, a DMS could
combine data from a camera, an eye tracker, and a heart rate monitor to provide a more
comprehensive assessment of the driver’s state.
Y. Zhao et al. [106] propose a novel real-time DMS based on a deep CNN to monitor
drivers’ behavior and detect distractions. It uses video input from an in-car camera and
employs CNNs to analyze the driver’s facial expressions and head movements to assess
their attentiveness. It can detect eye closure, head pose, and facial expressions with high
accuracy. Ref. [107] works towards a DMS that uses machine learning to estimate driver
situational awareness using eye-tracking data. It aims to predict driver attention and
alertness to the road, enhancing road safety. Ref. [108] proposes a lightweight DMS based
on Multi-Task Mobilenets architecture, which efficiently monitors drivers’ behavior and
attention using low computational resources. It can even run on a simple smartphone,
making it suitable for real-time monitoring. Ref. [109] introduces an optimization algorithm
for DMSs using DL. This algorithm improves the accuracy of the DMS by reducing the
number of false positives and ensuring real-time performance.
Ref. [110] proposes a real-time DMS based on visual cues, leveraging facial expressions
and eye movements to assess driver distraction and inattention. It is able to detect driver
behaviors such as eye closure, head pose, and facial expressions using only a camera.
Ref. [111] proposes an intelligent DMS that uses a combination of sensors and ML. It is
capable of providing a comprehensive assessment of the driver’s state, including their
attention level, fatigue, and drowsiness, and provides timely alerts to improve safety.
Ref. [112] proposes a hybrid DMS combining Internet of Things (IoT) and ML techniques
for comprehensive driver monitoring. It collects data from multiple sensors and uses ML to
identify driver behaviors. Ref. [113] focuses on a distracted DMS that uses AI to detect and
prevent risky behaviors on the road. It detects distracted driving behaviors such as texting
and talking on the phone while driving. Ref. [114] proposes a DMS based on a distracted
driving decision algorithm which aims to assess and address potential distractions to
ensure safe driving practices. It predicts whether the driver is distracted or not.
These papers provide a good overview of the current state of the art in DMS and
contribute to the development of advanced DMS technologies, aiming to enhance driver
safety, detect distractions, and improve situational awareness on the roads. They employ
various techniques, including deep learning, IoT, and machine learning, to create efficient
and effective driver monitoring solutions. However, before DMSs can be widely deployed,
there are still some challenges that need to be addressed, such as:
a. Data collection: It is difficult to collect large datasets of driver behavior representative
of the real world, as it is difficult to monitor drivers naturally without disrupting their
driving experience.
b. Algorithm development: Since the driver behaviors can be subtle and vary from
person to person, it is challenging to develop algorithms that can accurately identify
driver behaviors in real time.
c. Cost: DMSs demand the use of specialized sensors and software, making them
expensive to implement and maintain.
Additionally, with the development and availability of new sensors, they could be
used to improve the accuracy and performance of DMSs; for example, radar sensors could
be used to track driver head movements and eye gaze. Besides, autonomous vehicles will
not need DMSs in the same way that human-driven vehicles do. However, DMSs could still
be used to monitor the state of the driver in autonomous vehicles and to provide feedback
to the driver if necessary. Despite these challenges, there is a lot of potential for DMSs to
improve road safety and the future of DMSs looks promising. As the technology continues
to develop, DMSs could become an essential safety feature in vehicles, both human-driven
and autonomous. The list of reviewed papers on driver monitoring system is summarized
in Table 5.
Sensors 2024, 24, 249 22 of 51

Table 5. Chosen publications, source title, and the number of citations referring to the driver
monitoring system.

SI No. Ref. Year Source Title Citations


IEEE International Symposium on Robotic and
1 [106] 2019 2
Sensors Environments
International Conference on Robot and Human
2 [107] 2019 1
Interactive Communication
3 [108] 2019 MDPI Sensors 28
International Conference on Artificial Intelligence in
4 [109] 2020 2
Information and Communication
6th International Conference on Interactive
5 [110] 2020 1
Digital Media
2nd International Conference on Communication,
6 [111] 2021 1
Computing and Industry 4.0
IEEE International Conference on Consumer Electronics
7 [112] 2021 -
and Computer Engineering
Interdisciplinary Research in Technology
8 [113] 2022 -
and Management
13th International Conference on Information and
9 [114] 2022 -
Communication Technology Convergence

4.5. Lane Departure Warning System


The Lane Departure Warning System (LDWS) is a type of ADAS that is designed to
warn drivers when they are unintentionally drifting out of their lane. LDWSs typically use
cameras, radar, lidar, or a combination of sensors to detect the lane markings on the road,
and then they use this information to monitor the driver’s position in the lane. If the driver
starts to drift out of the lane, the LDWS will sound an audible alert or vibrate the steering
wheel to warn the driver. These systems can be a valuable safety feature and are especially
helpful for drivers, as they can help to prevent accidents caused by driver drowsiness or
distraction and they can help to keep drivers alert and focused on the road.
LDWSs are becoming increasingly common in new vehicles. In fact, according to
NHTSA, lane departure crashes account for about 5% of all fatal crashes in the United
States and the NHTSA has mandated that all new vehicles sold in the United States be
equipped with LDWSs by 2022 [115].
LDWSs can be a valuable safety feature, but they are not perfect. They can sometimes
be fooled by objects that look like lane markings, such as shadows or road debris, and may
not be accurate when the road markings are faded or obscured. Additionally, LDWS can
only warn drivers; they cannot take corrective action on their own, which means they may
not be effective for drivers who are drowsy or distracted.
Despite these limitations, LDWS can be a valuable tool for reducing the number of
accidents, and are especially beneficial for long-distance driving, as they can help keep
drivers alert and focused. They can: (i) help to prevent accidents by alerting drivers to
unintentional lane departures, (ii) help drivers stay alert and focused on the road, (iii) be
especially helpful for drivers who are drowsy or distracted, (iv) help to keep drivers in their
lane, which can improve lane discipline and reduce the risk of sideswipe collisions, thus
improving the driver safety and comfort. Therefore, LDWSs are becoming increasingly
common in new vehicles, as they greatly reduce drivers’ stress and fatigue.
Overall, LDWSs are a valuable safety feature that can help to prevent accidents, though
they are not guaranteed to do so. It is important to remember that these systems are not
a substitute for safe driving practices. Drivers should always be alert and focused on the
road, aware of their surroundings and use safe driving practices at all times, even when
they are using an LDWS.

4.5.1. Search Terms and Recent Trends in LDWS
‘Lane departure warning’, ‘lane deflection warning’, ‘lane detection’, and ‘lane detection
and tracking’ are four prominent search terms used to investigate the topic. The ‘OR’
operator was used to choose and combine the most relevant and regularly used applicable
phrases. The search phrases ‘lane departure warning’, ‘lane deflection warning’, ‘lane
detection’, and ‘lane detection and tracking’ were discovered. Figure 6 shows the complete
search query for each of the databases. The databases include IEEE Xplore and MDPI.

Figure 6. Search queries for each of the databases for the lane departure warning system. The
databases include IEEE Xplore and MDPI.

Lane detection is a critical task in computer vision and autonomous driving systems.
These review papers explore various lane detection techniques proposed in recent research
papers. The reviewed papers cover diverse approaches, including lightweight CNNs,
sequential prediction networks, 3D lane detection, and algorithms for intelligent vehicles in
complex environments. Existing lane detection algorithms are not robust to challenging
road conditions such as shadows, rain, snow, occlusion, and varying illumination, or to
scenarios where lane markings are not visible; they are also limited in their ability to detect
multiple lanes and to accurately estimate the 3D position of the lanes.
This research review paper examines recent advancements in lane detection tech-
niques, focusing on the integration of DNNs and sensor fusion methodologies. The review
encompasses papers published between 2019 and 2022, exploring innovative approaches
to improve the robustness, accuracy, and performance of lane detection systems in various
challenging scenarios.
The reviewed papers present various innovative approaches for lane detection in
the context of autonomous driving systems. Lee et al. [116] introduce a self-attention
distillation method to improve the efficiency of lightweight lane detection CNNs without
compromising accuracy. FastDraw [117] addresses the long tail of lane detection using a
sequential prediction network to consider contextual information for better predictions.
3D-LaneNet [118] incorporates depth information from stereo cameras for end-to-end
3D multiple lane detection. Wang et al. [119] propose a data enhancement technique
called Light Conditions Style Transfer for lane detection in low-light conditions, improving
model robustness. Other methods explore techniques such as ridge detectors [120], LSTM
networks [121], and multitask attention networks [122] to enhance lane detection accuracy
in various challenging scenarios. Additionally, some papers integrate multiple sensor
data [123–126] or use specific sensors like radar [127] and light photometry systems [128] to
achieve more robust and accurate lane detection for autonomous vehicles. These research
contributions provide valuable insights into the development of advanced lane detection
systems for safer and more reliable autonomous driving applications.
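For contrast with these learned approaches, the sketch below shows the classical Canny-plus-Hough lane-marking baseline that such DNN methods are designed to outperform; the region-of-interest polygon and Hough parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_lane_segments(bgr_frame):
    """Classical lane-marking detector: grayscale -> Canny edges ->
    keep a trapezoidal road region -> probabilistic Hough transform.
    All thresholds below are illustrative, not tuned values."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (int(0.55 * w), int(0.6 * h)),
                         (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)               # keep only the road area
    masked = cv2.bitwise_and(edges, roi)

    # Each returned segment is [x1, y1, x2, y2] in pixel coordinates.
    segments = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=20)
    return [] if segments is None else segments.reshape(-1, 4)

# Usage: lanes = detect_lane_segments(cv2.imread("dashcam_frame.jpg"))
```

The papers below replace or augment steps of this pipeline (edge extraction, lane-pixel grouping, and geometric fitting) with learned components to cope with the difficult conditions listed above.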
In their recent research, Lee et al. [116] proposed a novel approach for learning
lightweight lane detection CNNs by applying self-attention distillation. FastDraw [117]
addressed the long tail of lane detection by using a sequential prediction network to better
predict lane markings in challenging conditions. Garnett et al. [118] presented 3D-LaneNet,
an end-to-end method incorporating depth information from stereo cameras for 3D mul-
tiple lane detection. Additionally, Cao et al. [123] tailored a lane detection algorithm for
intelligent vehicles in complex road conditions, enhancing real-world driving reliability.
Kuo et al. [129] optimized image sensor processing techniques for lane detection in vehicle
lane-keeping systems. Lu et al. [120] improved lane detection accuracy using a ridge
detector and regional G-RANSAC. Zou et al. [130] achieved robust lane detection from
continuous driving scenes using deep neural networks. Liu et al. [119] introduced Light
Conditions Style Transfer for lane detection in low-light conditions. Wang et al. [124]
used a map to enhance ego-lane detection in missing feature scenarios. Khan et al. [127]
utilized impulse radio ultra-wideband radar and metal lane reflectors for robust lane detec-
tion in adverse weather conditions. Yang et al. [121] employed long short-term memory
(LSTM) networks for lane position detection. Gao et al. [131] minimized false alarms in lane
departure warnings using an Extreme Learning Residual Network and ϵ-greedy LSTM.
Moreover, ref. [132] proposed a real-time attention-guided DNN-based lane detection
framework and CondLaneNet [133] used conditional convolution for top-to-down lane
detection. Dewangan and Sahu [134] analyzed driving behavior using vision-sensor-based
lane detection. Haris and Glowacz [135] utilized object feature distillation for lane line
detection. Lu et al. [136] combined semantic segmentation and optical flow estimation for
fast and robust lane detection. Suder et al. [128] designed low-complexity lane detection
methods for light photometry systems. Ko et al. [137] combined key points estimation and
point instance segmentation for lane detection. Zheng et al. [138] introduced CLRNet for
lane detection, while Wang et al. [122] proposed a multitask attention network (MAN).
Khan et al. [139] developed LLDNet, a lightweight lane detection approach for autonomous
cars. Chen and Xiang [125] incorporated pre-aligned spatial–temporal attention for lane
mark detection. Nie et al. [126] integrated a camera with dual light sensors to improve
lane-detection performance in autonomous vehicles. These studies collectively present
diverse and effective methodologies, contributing to the advancement of lane-detection
systems in autonomous driving and intelligent vehicle applications. The list of reviewed
papers on lane-departure warning system is summarized in Table 6.

Table 6. Chosen publications, source title, and the number of citations related to a lane-departure
warning system.

SI No. Ref. Year Source Title Cited by


1 [116] 2019 IEEE/CVF International Conference on 253
Computer Vision
2 [117] 2019 IEEE/CVF Conference on Computer Vision and 78
Pattern Recognition
3 [118] 2019 IEEE/CVF International Conference on Computer 57
Vision
4 [123] 2019 MDPI Sensors 34
5 [129] 2019 MDPI Sensors 16
6 [120] 2019 MDPI Sensors 12
7 [130] 2020 IEEE Transactions on Vehicular Technology 165
8 [119] 2020 IEEE Intelligent Vehicles Symposium (IV) 32
9 [124] 2020 IEEE Access 9
10 [127] 2020 MDPI Sensors 14
11 [121] 2020 MDPI Sensors 9
12 [131] 2020 MDPI Sensors 6
13 [132] 2021 IEEE/CVF Conference on Computer Vision and 60
Pattern Recognition
14 [133] 2021 IEEE/CVF International Conference on 44
Computer Vision
15 [134] 2021 IEEE Sensors Journal 40
Sensors 2024, 24, 249 25 of 51

Table 6. Cont.

SI No. Ref. Year Source Title Cited by


16 [135] 2021 MDPI Electronics 17
17 [136] 2021 MDPI Sensors 14
18 [128] 2021 MDPI Electronics 12
19 [137] 2022 IEEE Transactions on Intelligent Transportation Systems 54
20 [138] 2022 IEEE/CVF Conference on Computer Vision and 17
Pattern Recognition
21 [122] 2022 IEEE Transactions on Neural Networks and 15
Learning Systems
22 [139] 2022 MDPI Sensors 4
23 [125] 2022 MDPI Sensors 2
24 [126] 2022 MDPI Electronics -

4.6. Forward-Collision Warning System


A Forward-Collision Warning System (FCWS) is a type of ADAS that warns drivers
of potential collisions with other vehicles or objects in front of them. FCWSs typically
use radar, cameras, or lidar to track the distance and speed of vehicles in front of the
vehicle, and they alert the driver if the vehicle is getting too close to the vehicle in front.
When the system detects that a collision is imminent, it alerts the driver with a visual or
audible warning.
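The core decision most FCWSs share can be reduced to a time-to-collision (TTC) check on the lead vehicle’s range and closing speed; the sketch below is a generic illustration with assumed warning thresholds, not the logic of any specific system reviewed here.

```python
def time_to_collision(range_m, range_rate_mps):
    """TTC = range / closing speed. range_rate is d(range)/dt, so a
    negative value means the gap to the lead vehicle is shrinking."""
    if range_rate_mps >= 0:          # pulling away or holding distance
        return float("inf")
    return range_m / -range_rate_mps

def fcw_state(range_m, range_rate_mps, warn_ttc_s=2.7, brake_ttc_s=1.0):
    """Two-stage response: visual/audible warning first, then a request
    for automatic braking. Both thresholds are illustrative assumptions."""
    ttc = time_to_collision(range_m, range_rate_mps)
    if ttc <= brake_ttc_s:
        return "AUTO_BRAKE_REQUEST"
    if ttc <= warn_ttc_s:
        return "COLLISION_WARNING"
    return "NO_ACTION"

# Example: lead vehicle 30 m ahead, closing at 12 m/s -> TTC = 2.5 s -> warning.
print(fcw_state(30.0, -12.0))
```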
FCWSs can be an invaluable safety feature, as they can help prevent accidents caused
by driver distraction or drowsiness. According to the NHTSA, rear-end collisions account
for about 25% of all fatal crashes in the United States [140].
FCWSs are becoming increasingly common in new vehicles. The NHTSA has man-
dated that all new cars sold in the United States come equipped with FCWS systems
by 2022.
FCWSs: (i) help prevent accidents caused by driver distraction or drowsiness, (ii) help
drivers to brake sooner, which can reduce the severity of rear-end crashes and accidents,
(iii) help improve the driver awareness of the surrounding traffic, (iv) help to reduce driver
stress and fatigue.
Although FCWSs offer many advantages, they have limitations such as: (i) being less
effective in certain conditions, such as heavy rain or snow, (ii) being prone to false alarms,
which can lead to driver desensitization, (iii) are not a substitute for safe driving practices,
such as paying attention to the road and using turn signals.
Overall, FCWSs can be a valuable safety feature, but they are not guaranteed to prevent
accidents. Drivers should still be aware of their surroundings and use safe driving practices
at all times.

4.6.1. Search Terms and Recent Trends in FCWS


‘Forward collision warning’, ‘forward collision’, ‘pre-crash’, ‘collision mitigating’, and
‘forward crash’ are the prominent search terms used to investigate this topic. The ‘OR’
operator was used to choose and combine the most relevant and regularly used applicable
phrases. That is, the search phrases ‘forward collision warning’, ‘forward collision’, ‘pre-
crash’, ‘collision mitigating’, and ‘forward crash’ were discovered. Figure 7 shows the
complete search query for each of the databases. The databases include IEEE Xplore
and MDPI.
The papers listed discuss the development of FCWSs for autonomous vehicles in
recent years. Ref. [141] suggests an autonomous vehicle collision avoidance system that
employs predictive occupancy maps to estimate other vehicles’ future positions, enabling
collision-free motion planning. Ref. [142] introduces a forward collision prediction system
using online visual tracking to anticipate potential collisions based on other vehicles’ posi-
tions. Ref. [143] proposes an FCWS that combines driving intention recognition and V2V
communication to predict and warn about potential collisions with front vehicles. Ref. [144]
presents an FCWS for autonomous vehicles that deploys a CNN to detect and track nearby
vehicles. Ref. [145] introduces a real-time FCW technique involving detection and depth
estimation networks to identify nearby vehicles and estimate distances. Ref. [146] proposes
a vision-based FCWS merging camera and radar data for real-time multi-vehicle detection,
addressing challenging conditions like occlusions and lighting variations. Tang et al. [147]
introduce a monocular range estimation system using a single camera for precise FCWS,
especially in difficult scenarios. Lim et al. [148] suggest a smartphone-based FCWS for
motorcyclists utilizing phone sensors to predict collision risks. Farhat et al. [149] present a
cooperative FCWS using DL to predict collision likelihood in real time by considering data
from both vehicles’ sensors. Hong and Park [150] offer a lightweight FCWS for low-power
embedded systems, combining cameras and radar for real-time multi-vehicle detection.
Albarella et al. [151] and Lin et al. [152] propose V2X communication-based FCWS, with [151]
for electric vehicles and [152] targeting curve scenarios. Yu and Ai [153] suggest a hybrid
DL approach employing CNN and recurrent NN for robust FCWS predictions. Olou
et al. [154] introduce an efficient CNN model for accurate forward collision prediction,
even in challenging conditions. Pak [155] presents a hybrid filtering method that improves
radar-based FCWS by fusing data from multiple sensors, enhancing reliability.

Figure 7. Search queries for each of the databases for the forward-collision warning system. The
databases include IEEE Xplore and MDPI.

This compilation of research papers demonstrates the extensive efforts in the field
of forward-collision warning and avoidance systems, which are crucial for enhancing
vehicular safety. Lee and Kum [141] propose a ‘Collision Avoidance/Mitigation System’
incorporating predictive occupancy maps for autonomous vehicles. Manghat and
El-Sharkawy [142] present ‘Forward Collision Prediction with Online Visual Tracking’,
utilizing online visual tracking for collision prediction. Yang, Wan, and Qu [143] introduce
‘A Forward Collision Warning System Using Driving Intention Recognition’, integrating
driving intention recognition and V2V communication. Kumar, Shaw, Maitra, and Karmakar
[144] offer ‘FCW: A Forward Collision Warning System Using Convolutional Neural
Network’, deploying CNN for warning generation. Wang and Lin [145] present ‘A Real-
Time Forward Collision Warning Technique’, integrating detection and depth estimation
networks for real-time warnings. Lin, Dai, Wu, and Chen [146] introduce a ‘Driver Assistance
System with Forward Collision and Overtaking Detection’. Tang and Li [147] propose
‘End-to-End Monocular Range Estimation’ for collision warning. Lim et al. [148] created a
‘Forward Collision Warning System for Motorcyclists’ using smartphone sensors. Farhat,
Rhaiem, Faiedh, and Souani [149] present a ‘Cooperative Forward Collision Avoidance
System Based on Deep Learning’. Hong and Park [150] propose a ‘Lightweight Collabo-
ration of Detecting and Tracking Algorithm’ for embedded systems. Albarella et al. [151]
present a ‘Forward-Collision Warning System for Electric Vehicles’, validated both virtually
and in real environments. Liu et al. [152] focus on ‘Forward Collision on a Curve based on
V2X’ with a target selection method. Yu and Ai [153] present ‘Vehicle Forward Collision
Warning based upon Low-Frequency Video Data’ using hybrid deep learning. Olou, Ezin,
Dembele, and Cambier [154] propose ‘FCPNet: A Novel Model to Predict Forward Colli-
sion’ based on CNN. Pak [155] contributes ‘Hybrid Interacting Multiple Model Filtering’ to
improve radar-based warning reliability. Together, these papers collectively advance the
understanding and development of forward collision warning and avoidance systems. The
list of reviewed papers on forward-collision warning system is summarized in Table 7.

Table 7. Chosen publications, source title, and the number of citations related to forward-collision
warning systems.

SI No. Ref. Year Source Title Cited by


1 [141] 2019 IEEE Access 48
IEEE International Conference on Vehicular Electronics
2 [142] 2019 2
and Safety (ICVES)
3 [143] 2020 IEEE Access 31
IEEE International Conference on Electrical and
4 [144] 2020 2
Electronics Engineering (ICE3)
IEEE International Conference on Systems, Man, and
5 [145] 2020 -
Cybernetics (SMC)
6 [146] 2020 MDPI Sensors 26
7 [147] 2020 MDPI Sensors 4
8 [148] 2021 IEEE Journal of Intelligent and Connected Vehicles 1
IEEE International Conference on Developments in
9 [149] 2021 -
eSystems Engineering (DeSE)
IEEE Twelfth International Conference on Ubiquitous
10 [150] 2021 -
and Future Networks (ICUFN)
11 [151] 2021 MDPI Energies -
7th International Conference on Intelligent Informatics
12 [152] 2022 1
and Biomedical Science (ICIIBMS)
IEEE 25th International Conference on Intelligent
13 [153] 2022 -
Transportation Systems (ITSC)
22nd International Conference on Control, Automation
14 [154] 2022 -
and Systems (ICCAS)
15 [155] 2022 MDPI Sensors 3

4.7. Blind Spot Detection


Blind spot detection (BSD) is a type of ADAS that helps to prevent accidents by alerting
drivers to vehicles, pedestrians, or objects that are in their blind spots. Blind spots are the
areas around a vehicle that cannot be seen by the driver when looking in the rear-view
or side mirrors. These areas can be especially dangerous when changing lanes, merging
onto a highway, or while parking, and it is necessary to prevent accidents caused by lane
changes into the blind spot of other vehicles.
When a vehicle is detected in the blind spot, the system alerts the driver with a
visual or audible warning. Some systems will also illuminate a light in the side mirror to
indicate that there is a vehicle in the blind spot, while some systems also provide a graphic
representation of the vehicle in the blind spot on the dashboard.
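The final decision step that sits on top of whichever sensor is used can be illustrated with a simple geometric check of whether a tracked object lies in a zone alongside the ego vehicle; the zone dimensions and alert escalation below are illustrative assumptions, not values from the reviewed systems.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float        # longitudinal offset from ego rear axle, metres (forward +)
    y: float        # lateral offset, metres (left +)
    speed_mps: float

def in_blind_spot(obj, side="left",
                  zone_length=(-5.0, 1.0), zone_width=(1.0, 4.0)):
    """True if the object lies in the rectangular blind-spot zone on the
    requested side: roughly from 5 m behind to 1 m ahead of the rear axle
    and 1-4 m laterally. All dimensions are illustrative."""
    lateral = obj.y if side == "left" else -obj.y
    return (zone_length[0] <= obj.x <= zone_length[1] and
            zone_width[0] <= lateral <= zone_width[1])

def blind_spot_alert(objects, turn_signal_on):
    """Escalate from a mirror indicator to an audible warning when the
    driver signals a lane change into an occupied blind spot."""
    occupied = any(in_blind_spot(o, "left") or in_blind_spot(o, "right")
                   for o in objects)
    if occupied and turn_signal_on:
        return "AUDIBLE_WARNING"
    return "MIRROR_ICON" if occupied else "NO_ALERT"

# Example: a car overtaking 2 m behind and 2 m to the left, driver signalling.
print(blind_spot_alert([TrackedObject(-2.0, 2.0, 20.0)], turn_signal_on=True))
```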
BSD systems can be a valuable safety feature and are becoming increasingly common
in new vehicles, as they can help to prevent accidents caused by driver inattention or
drivers changing lanes into other vehicles. They help to reduce the severity of accidents
that do occur, thereby reducing drivers’ stress and fatigue and helping drivers to stay
alert and more aware of their surroundings. According to the NHTSA, blind spot crashes
account for about 2% of all fatal crashes in the United States [57], and the NHTSA has
mandated that all new cars sold in the United States come equipped with BSD systems
by 2022.
Although BSD has many advantages, it has certain limitations such as: (i) it is less
effective in certain conditions, such as heavy rain or snow, (ii) it is prone to false alarms,
Sensors 2024, 24, 249 28 of 51

which can lead to driver desensitization, (iii) it is not a substitute for safe driving practices,
such as using turn signals and checking blind spots before changing lanes.
Overall, BSD systems can be a valuable safety feature, but they are not a guarantee
against accidents. Drivers should still be aware of their surroundings and use safe driving
practices at all times.

4.7.1. Search Terms and Recent Trends in Blind Spot Detection


‘Blind spot’, ‘blind spot detection’, and ‘blind spot warning’ are the three prominent
search terms used to investigate this topic. The ‘OR’ operator was used to choose and
combine the most relevant and regularly used applicable phrases. That is, the search
phrases ‘blind spot’, ‘blind spot detection’, and ‘blind spot warning’ were discovered.
Figure 8 shows the complete search query for each of the databases. The databases include
IEEE Xplore and MDPI.

Figure 8. Search queries for each of the databases for blind spot detection. The databases include
IEEE Xplore and MDPI.

The papers mentioned discuss the development of blind-spot detection systems (BSDSs) for vehicles. BSDSs are designed to alert drivers to vehicles that are in their blind spots, where they cannot be seen in their mirrors.
Gale Bagi et al. [156] discuss a BSDS combining radar and cameras for accurate vehicle detection in blind spots. Radar detects vehicles and cameras identify them. Details about the sensors and system architecture are necessary for a comprehensive understanding.
Ref. [157] introduces a probabilistic BSDS estimating blind spot risks using vehicle speed, direction, and the driver’s blind spot angle. It offers nuanced insights into collision potential, enhancing safe driving.
Zhao et al. [158] propose a promising BSDS using a lightweight NN and cameras for real-time detection. This approach improves detection capabilities with a practical design.
Chang et al. [159] present an AI-based BSDS warning for motorcyclists using various sensors, proactively detecting blind spot vehicles and enhancing rider safety. Naik et al. [160] propose a lidar-based early BSDS, creating a 3D map to detect blind-spot vehicles in advance.
The authors of [161] describe a real-time two-wheeler BSDS using computer vision and ultrasonic sensors, confirming blind spot vehicles. Shete et al. [162] suggest a forklift-specific BSDS using ultrasonic sensors to detect blind spot vehicles and warn drivers. Schlegel et al. [163] propose an optimization-based planner for robots, considering blind spots and other vehicles to ensure safe navigation. Kundid et al. [164] introduce an ADAS algorithm creating a wider view to enhance driver awareness, mitigating blind spot issues. Sui et al. [165] propose an A-pillar blind spot display algorithm using cameras to show blind spot information on the A-pillar and side mirrors. Wang et al. [166] present a vision-based BSDS using depth cameras to identify blind spot vehicles in a 3D map. Zhou et al. [167] focus on high-speed pedestrians in blind spots, using cameras and radar to detect pedestrians and pre-detection to avoid collisions. Ref. [168] introduces a multi-sensor BSDS for micro e-mobility vehicles, using cameras, radar, ultrasonic sensors, and gesture recognition for better blind-spot awareness. Ref. [169] suggests a multi-deep CNN-based BSDS for commercial vehicles using cameras, effectively addressing blind-spot challenges.
Overall, these papers present a variety of promising methods for developing BSDS.
The systems proposed in these papers can detect vehicles in a variety of conditions, and
they can be used in a variety of vehicles. The collection of research papers explores a broad
spectrum of approaches to address blind spots in various domains, including robotics,
automotive applications, and micro e-mobility. The focus ranges from sensor technologies
such as cameras, lidar, and ultrasonic sensors to methodologies including AI, probabilistic
estimation, and computer vision, introducing innovative algorithms, technologies, and
architectures to enhance blind-spot detection, awareness, and collision prevention. The
studies emphasize real-time detection, early warning, and proactive risk prediction, all
contributing to enhance vehicular safety. The common thread among these studies is their
commitment to improving safety by addressing the visibility limitations posed by blind
spots. The list of reviewed papers on blind spot detection is summarized in Table 8.

Table 8. Chosen publications, source title, and the number of citations related to blind spot detection.

SI No. | Ref. | Year | Source Title | Number of Citations
1 | [156] | 2019 | 2019 International Conference on Control, Automation and Information Sciences (ICCAIS) | 3
2 | [157] | 2019 | IEEE Intelligent Transportation Systems Conference (ITSC) | 1
3 | [158] | 2019 | MDPI Electronics | 16
4 | [159] | 2020 | International Symposium on Computer, Consumer, and Control (IS3C) | 1
5 | [160] | 2020 | International Conference on Smart Electronics and Communication (ICOSEC) | -
6 | [161] | 2021 | 5th International Conference on Electronics, Communication and Aerospace Technology (ICECA) | 1
7 | [162] | 2021 | IEEE International Conference on Technology, Research, and Innovation for Betterment of Society (TRIBES) | -
8 | [163] | 2021 | European Conference on Mobile Robots (ECMR) | -
9 | [164] | 2021 | Zooming Innovation in Consumer Technologies Conference (ZINC) | -
10 | [165] | 2022 | IEEE 5th International Conference on Computer and Communication Engineering Technology (CCET) | -
11 | [166] | 2022 | IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech) | -
12 | [167] | 2022 | IEEE 25th International Conference on Intelligent Transportation Systems (ITSC) | -
13 | [168] | 2022 | MDPI Sensors | 2
14 | [169] | 2022 | MDPI Sensors | 1

4.8. Emergency Braking System


The Emergency Braking System (EBS), also referred to as automatic emergency braking
(AEB), is an ADAS that detects and tracks other vehicles in the vicinity, calculates the
risk of a collision, and automatically applies the brakes in the event of an imminent
collision to prevent or mitigate a collision. EBS helps to prevent accidents caused by the
driver’s inattention, drowsiness, or reaction time. EBSs can be a valuable safety feature,
typically using radar, camera, or laser sensors to detect vehicles or objects in front of the car.
According to the NHTSA [140], rear-end crashes account for about 25% of all fatal crashes
in the United States.
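Many AEB implementations decide when to warn and when to brake from a time-to-collision (TTC) estimate derived from the measured gap and closing speed. The sketch below illustrates this general idea in Python; the staged thresholds (2.5 s, 1.5 s, 0.8 s) are illustrative assumptions for this example and do not correspond to any of the systems reviewed here.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """TTC = gap / closing speed; treated as infinite when the gap is opening."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return gap_m / closing_speed_mps

def aeb_action(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> str:
    """Warn first, then brake progressively as TTC shrinks (assumed thresholds)."""
    ttc = time_to_collision(gap_m, ego_speed_mps - lead_speed_mps)
    if ttc < 0.8:
        return "full braking"
    if ttc < 1.5:
        return "partial braking"
    if ttc < 2.5:
        return "forward collision warning"
    return "no action"

# Ego at 20 m/s closing on a stopped obstacle 25 m ahead -> TTC = 1.25 s.
print(aeb_action(gap_m=25.0, ego_speed_mps=20.0, lead_speed_mps=0.0))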
EBSs are becoming increasingly common in new vehicles. In fact, the NHTSA has
mandated that all new cars sold in the United States come equipped with EBSs by 2022.
EBSs have numerous benefits, as they help to (i) prevent accidents caused by driver distraction or drowsiness, (ii) reduce the severity of accidents that do occur, and (iii) keep drivers alert and focused on the road.
With these benefits come certain limitations, as these systems are (i) less effective in certain conditions, such as heavy rain or snow, (ii) prone to false alarms, which can lead to driver desensitization, and (iii) not a substitute for safe driving practices, such as paying attention to the road and using turn signals.
Overall, EBSs can be a valuable safety feature, but they are not guaranteed to prevent accidents. Drivers should still be aware of their surroundings and use safe driving practices at all times.

4.8.1. Search Terms and Recent Trends in Emergency Braking Systems
‘Emergency
‘Emergencybraking
brakingsystem’,
system’,‘autonomous
‘autonomousemergency
emergencybraking’,
braking’,‘EBS’,
‘EBS’,and
and‘AEB’,
‘AEB’,are
are
the
the prominent search terms used to investigate this topic. The ’OR’ operator was usedto
prominent search terms used to investigate this topic. The ’OR’ operator was used to
choose
chooseandandcombine
combinethethemost
mostrelevant
relevantandandregularly
regularlyused
usedapplicable
applicablephrases.
phrases.That
Thatis,
is,the
the
search
searchphrases
phrases‘emergency
‘emergencybraking
brakingsystem’,
system’,‘autonomous
‘autonomousemergency
emergencybraking’,
braking’,‘EBS’,
‘EBS’,and
and
‘AEB’, were discovered. Figure 9 shows the complete search query for each of the
‘AEB’, were discovered. Figure 9 shows the complete search query for each of the data-databases.
The databases
bases. includeinclude
The databases IEEE Xplore
IEEE and MDPI.
Xplore and MDPI.

Figure 9. Search queries for each of the databases for the emergency braking system. The databases include IEEE Xplore and MDPI.

Flores et al. [170] propose a cooperative car-following and emergency braking system using radar, lidar, and cameras to detect and predict vehicle and pedestrian movements. It automatically applies the brakes to prevent collisions while also facilitating vehicle-to-vehicle communication. Shin et al. [171] introduce an adaptive AEB strategy utilizing radar and cameras to detect and calculate braking forces for front and rear vehicle collision avoidance. It considers speed, distance, and vehicle dynamics for effective collision prevention.
Yang et al. [172] have developed an AEB-P system with radar and cameras, using advanced control to determine braking forces for pedestrian collision avoidance, accounting for pedestrian speed, distance, and vehicle dynamics. Gao et al. [173] present a hardware-in-the-loop simulation platform for AEB system testing across various scenarios, ensuring reliability and effectiveness. Guo et al. [174] introduce a variable time headway AEB algorithm using predictive modeling, combining radar and cameras. It adapts time headway for braking by considering speed, distance, and vehicle dynamics.
Leyrer et al. [175] propose a simulation-based robust AEBS design using optimiza-
tion techniques to enhance system performance and reliability. Yu et al. [176] introduce
an AEBC system utilizing radar and cameras, applying control algorithms to prevent
collisions at intersections considering vehicle and pedestrian speed, distance, and dynam-
ics. Izquierdo et al. [177] explore using MEMS microphone arrays for AEBS, improving
pedestrian detection through audio cues in a variety of environments.
Jin et al. [178] present an adaptive AEBC strategy for driverless vehicles in campus
environments, utilizing radar and cameras to prevent collisions by considering vehicle
and pedestrian characteristics and dynamics. Mannam and Rajalakshmi [179] assess AEBS
scenarios for autonomous vehicles using radar and cameras, determining collision interven-
tions based on vehicle and pedestrian detection, speed, and distance. Guo et al. [180] study
AEBS control for commercial vehicles, considering driving conditions alongside radar and
camera-based detection and control algorithms to avoid collisions based on vehicle and
pedestrian dynamics.
These papers all represent significant advances in the field of AEB systems. They
propose new methods for detecting and tracking vehicles, pedestrians, and environmental
features. They also propose new control algorithms for determining the optimal braking
force to apply to avoid a collision. These advances have the potential to make AEB systems
more effective and reliable and to help prevent traffic accidents.
All the systems discussed were evaluated in a variety of traffic scenarios, and they
were shown to be able to significantly reduce the number of accidents. The reviewed papers
collectively explore a diverse range of topics within the realm of autonomous emergency
braking (AEB) systems for enhanced road safety.
These topics include cooperative car-following, pedestrian avoidance, collision avoid-
ance with rear vehicles, longitudinal active collision avoidance, hardware-in-the-loop simu-
lation, variable time headway control, environmental feature recognition, simulation-based
robust design, inevitable collision state-based control, innovative sensor utilization (MEMS
microphone array), adaptive strategies for specific scenarios, determination of AEB-relevant
scenarios, and specialized AEB algorithms for commercial vehicles. These contributions
highlight the multi-faceted nature of AEB research, highlighting advancements in simu-
lation, sensing, control strategies, and contextual optimization and emphasizing safety,
prediction, algorithm optimization, and system validation. As autonomous vehicles con-
tinue to evolve, these papers will collectively contribute to enhancing the effectiveness and
reliability of AEB systems, thereby advancing road safety in modern transportation and
ultimately promoting safer and more reliable autonomous driving experiences. The list of
reviewed papers on emergency braking system is summarized in Table 9.

4.9. Adaptive Cruise Control


Adaptive cruise control (ACC) is a driver assistance system that automatically adjusts
a vehicle’s speed when there are slow-moving vehicles ahead to maintain a safe following
distance. When the road ahead is clear, ACC automatically accelerates to the driver’s
pre-set speed.
ACC is a Level 1 ADAS feature, which means that it requires some driver input. The
driver still needs to be alert and ready to take over if necessary. However, ACC can help to
reduce driver fatigue and stress, and it can also help to prevent accidents.
ACC systems typically use a radar sensor to detect the speed and distance of vehicles
ahead. The sensor is mounted in the front of the vehicle, and it can typically detect vehicles
up to several hundred feet away. The sensor sends this information to a control unit, which
then calculates the appropriate speed for the vehicle to maintain a safe following distance.
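The speed command computed by such a control unit is often based on a constant time-gap policy. The following Python sketch is a minimal illustration of that idea, assuming a simple proportional law; the gains, the 1.8 s time gap, and the 5 m standstill margin are assumed values for this example, not parameters from any production ACC.

from typing import Optional

def acc_speed_command(ego_speed: float, set_speed: float,
                      gap: Optional[float], lead_speed: Optional[float],
                      time_gap: float = 1.8, k_gap: float = 0.3,
                      k_speed: float = 0.5) -> float:
    """Return a commanded speed (m/s) for a constant time-gap ACC policy.

    With no lead vehicle the driver's set speed is used; otherwise the command
    is adjusted so that the following gap approaches time_gap * ego_speed.
    """
    if gap is None or lead_speed is None:      # road ahead is clear
        return set_speed
    desired_gap = time_gap * ego_speed + 5.0   # assumed 5 m standstill margin
    gap_error = gap - desired_gap
    speed_error = lead_speed - ego_speed
    command = ego_speed + k_gap * gap_error + k_speed * speed_error
    return max(0.0, min(command, set_speed))   # never exceed the set speed

# Following a slower car 30 m ahead at 22 m/s while cruising at 27 m/s.
print(acc_speed_command(ego_speed=27.0, set_speed=30.0, gap=30.0, lead_speed=22.0))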

Table 9. Chosen publications, source title, and the number of citations related to the emergency
braking system.

SI No. | Ref. | Year | Source Title | Cited by
1 | [170] | 2019 | IEEE Transactions on Intelligent Transportation Systems | 31
2 | [171] | 2019 | IEEE Intelligent Transportation Systems Conference (ITSC) | 5
3 | [172] | 2019 | MDPI Sensors | 43
4 | [173] | 2019 | IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC) | 4
5 | [174] | 2019 | Chinese Automation Congress (CAC) | 4
6 | [175] | 2019 | IEEE Intelligent Vehicles Symposium (IV) | -
7 | [176] | 2020 | American Control Conference (ACC) | 2
8 | [177] | 2020 | MDPI Sensors | 2
9 | [178] | 2020 | International Conference on Advanced Mechatronic Systems (ICAMechS) | -
10 | [179] | 2020 | IEEE Global Conference on Computing, Power, and Communication Technologies (GlobConPT) | -
11 | [180] | 2020 | MDPI Machines | 4

ACC systems can be either speed-only or full-range systems. Speed-only systems only
adjust the vehicle’s speed, while full-range systems can also brake the vehicle to maintain a
safe following distance. Full-range systems are more advanced, and they are typically more
expensive. ACC systems can be set to a specific speed, or they can be set to follow the speed
of the vehicle ahead. ACC systems can also be set to a maximum following distance, and
the system will not allow the vehicle to get closer than the set distance to the vehicle ahead.
ACC systems are becoming increasingly common in vehicles, as they offer several
safety and convenience benefits such as reducing traffic congestion and improving fuel
efficiency. ACC systems can also help to prevent accidents by reducing the risk of rear-end
collisions. They are especially beneficial for long-distance driving, as they can help to
reduce driver fatigue. The benefits of ACC systems are as follows:
a. Reduced driver fatigue: ACC can help to reduce driver fatigue by taking over the
task of maintaining a safe following distance. This can be especially beneficial for
long-distance driving.
b. Increased safety: ACC can help prevent accidents by automatically adjusting the
vehicle’s speed to maintain a safe following distance.
c. Improved convenience: ACC can make driving more convenient by allowing the
driver to set a cruising speed and then relax.
d. Improved fuel efficiency: ACC systems can help to improve fuel efficiency by allowing
drivers to maintain a constant speed, which can reduce unnecessary acceleration
and braking.
Despite these benefits, ACC systems face numerous challenges, as they are (i) expen-
sive, especially in high-end vehicles, (ii) complex to install and calibrate, which can increase
the cost of ownership, and (iii) unreliable in poor weather conditions, such as rain or snow.
Overall, ACC systems are a valuable safety feature that can help to prevent accidents
and make driving more convenient. However, they are not without their challenges, such
as cost and complexity. As ACC systems become more affordable and reliable, they are
likely to become more widespread in vehicles.

Search Terms and Recent Trends in Adaptive Cruise Control


‘Adaptive cruise control’, ‘ACC’, ‘autonomous cruise control’, and ‘intelligent cruise
control’ are the prominent search terms used to investigate this topic. The ‘OR’ operator
was used to choose and combine the most relevant and regularly used applicable phrases.
That is, the search phrases ‘adaptive cruise control’, ‘ACC’, ‘autonomous cruise control’,
and ‘intelligent cruise control’ were discovered. Figure 10 shows the complete search query
for each of the databases. The databases include IEEE Xplore and MDPI.

Figure 10. Search queries for each of the databases for the adaptive cruise control system. The databases include IEEE Xplore and MDPI.

G. Li and D. Görges [181] propose an innovative approach combining ecological ACC and energy management for HEVs using heuristic dynamic programming. The algorithm optimizes speed profiles, considering traffic conditions, state of charge, and driver preferences for fuel efficiency and comfort. S. Cheng et al. [182] discuss a multiple-objective ACC with dynamic velocity obstacle (DYC) prediction, optimizing speed, acceleration, safety, comfort, and fuel efficiency by forecasting surrounding vehicle trajectories. J. Lunze [183] introduces an ACC strategy ensuring collision avoidance through predictive control, using a combination of predictive control and MPC to optimize vehicle speed profiles. Woo, H. et al. [184] enhance ACC safety and efficiency through operation characteristic estimation and trajectory prediction. Their work adjusts speed and acceleration considering vehicles’ dynamics and surroundings.
Zhang, S. and Zhuan, X. [185] developed an ACC for BEVs that accounts for weight changes. Weight adjustments based on battery discharge and passenger load are used to ensure safe and comfortable driving. C. Zhai et al. [186] present an ecological CACC strategy for HDVs with time delays using distributed algorithms for platoon coordination, achieving fuel efficiency and ecological benefits. Li and Görges [187] designed an ecological ACC for step-gear transmissions using reinforcement learning. It optimizes fuel efficiency while maintaining safety through learned intelligent control strategies. Jia, Jibrin, and Görges [188] propose an energy-optimal ACC for EVs using linear and nonlinear MPC techniques, minimizing energy consumption based on dynamic driving and traffic conditions. Nie and Farzaneh [189] focus on eco-driving ACC with an MPC algorithm for reduced fuel consumption and emissions while ensuring safety and comfort. Guo, Ge, Sun, and Qiao [190] introduce an MPC-based ACC with relaxed constraints to enhance fuel efficiency while considering speed limits and safety distances for driving comfort.
Liu, Wang, Hua, and Wang [191] analyze CACC safety with communication delays using MPC and fuzzy logic to ensure stable and effective CACC operation under real-world communication conditions. Lin et al. [192] compare DRL and MPC for ACC, suggesting a hybrid approach for improved fuel efficiency, comfort, and stability. Gunter et al. [193] investigate the string stability of commercial ACC systems, highlighting potential collision risks in platooning situations and recommending improvements. Sawant et al. [194] present a robust CACC control algorithm using MPC and fuzzy logic to ensure safe operation even with limited data on preceding vehicle acceleration. Yang, Wang, and Yan [195] optimize ACC through a combination of MPC and ADRC, enhancing fuel efficiency and robustness to disturbances. Anselma [196] proposes a powertrain-oriented ACC considering fuel efficiency and passenger comfort using MPC and powertrain modeling.
sidering fuel efficiency and passenger comfort using MPC and powertrain modeling.
Chen [197] designed an ACC tailored to cut-in scenarios using MPC for fuel effi-
ciency optimization during lane changes. Hu and Wang [198] introduce a trust-based
ACC with individualization using a CBF approach, allowing vehicles to have personalized
safety requirements. Yan et al. [199] hybridized DDPG and CACC for optimized traffic
flow, leveraging learning-based and cooperative techniques. Zhang et al. [200] created
a human-lead-platooning CACC to integrate human-driven vehicles into platoons. The
author of [201] presents a resilient CACC using ML to enhance robustness and adaptability
to uncertainties and disruptions. Kamal et al. [202] propose an ACC with look-ahead
anticipation for freeway driving, adjusting control inputs based on predicted traffic con-
ditions. Li et al. [203] leverage variable compass operator pigeon-inspired optimization
(VCPO-PIO) for ACC control input optimization. Petri et al. [204] address ACC for EVs
with FOC, considering unique characteristics and energy management needs. The list of
reviewed papers on adaptive cruise control is summarized in Table 10.

Table 10. Chosen publications, source title, and the number of citations related to adaptive cruise
control.

SI No. | Ref. | Year | Source Title | Number of Citations
1 | [181] | 2019 | IEEE Transactions on Intelligent Transportation Systems | 57
2 | [182] | 2019 | IEEE Transactions on Vehicular Technology | 54
3 | [183] | 2019 | IEEE Transactions on Intelligent Transportation Systems | 39
4 | [184] | 2019 | MDPI Applied Sciences | 9
5 | [185] | 2019 | MDPI Symmetry | 9
6 | [186] | 2020 | IEEE Access | 39
7 | [187] | 2020 | IEEE Transactions on Intelligent Transportation Systems | 29
8 | [188] | 2020 | IEEE Transactions on Vehicular Technology | 25
9 | [189] | 2020 | MDPI Applied Sciences | 29
10 | [190] | 2020 | MDPI Applied Sciences | 12
11 | [191] | 2020 | MDPI Sustainability | 11
12 | [192] | 2021 | IEEE Transactions on Intelligent Vehicles | 69
13 | [193] | 2021 | IEEE Transactions on Intelligent Transportation Systems | 68
14 | [194] | 2021 | IEEE Transactions on Intelligent Transportation Systems | 31
15 | [195] | 2021 | MDPI Actuators | 16
16 | [196] | 2021 | MDPI Energies | 13
17 | [197] | 2021 | MDPI Applied Sciences | 12
18 | [198] | 2022 | IEEE Transactions on Intelligent Transportation Systems | 12
19 | [199] | 2022 | IEEE Transactions on Automation Science and Engineering | 10
20 | [200] | 2022 | IEEE Transactions on Intelligent Transportation Systems | 8
21 | [201] | 2022 | IEEE Transactions on Intelligent Transportation Systems | 8
22 | [202] | 2022 | MDPI Applied Sciences | 5
23 | [203] | 2022 | MDPI Electronics | 1
24 | [204] | 2022 | MDPI Applied Sciences | 1

4.10. Around-View Monitoring (AVM)


Around-View Monitoring (AVM) is an ADAS that uses multiple cameras to provide a
360-degree view of the vehicle’s surroundings. This helps drivers to see more of what is
around them, which can improve safety and make it easier to park. It is especially helpful
in tight spaces or when backing up.
AVM systems typically use four cameras, one mounted on each side of the vehicle and
one in the rear. The cameras are connected to a central computer, which stitches the images
together to create a panoramic view of the vehicle’s surroundings. This view is displayed
on a screen in the vehicle’s cabin, giving the driver a bird’s-eye view of what is around
them and preventing blind spots. Thus, AVM systems are a valuable safety feature and
can be used for a variety of purposes, including parking, backing up, maneuvering in tight
spaces, monitoring blind spots, and overall enhancing safety by giving drivers a better
view of their surroundings and preventing accidents, especially in low-visibility conditions.
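The stitching step described above typically warps each calibrated camera view onto a common ground-plane grid before combining them. The Python sketch below, using OpenCV and NumPy, illustrates the general idea under simplifying assumptions: the per-camera homographies are placeholders that a real system would derive from intrinsic and extrinsic calibration, and overlap blending and masking are omitted.

import cv2
import numpy as np

def birds_eye_patch(frame: np.ndarray, homography: np.ndarray,
                    out_size=(400, 400)) -> np.ndarray:
    """Warp one camera frame onto the common ground-plane (bird's-eye) grid."""
    return cv2.warpPerspective(frame, homography, out_size)

def stitch_top_view(frames: dict, homographies: dict) -> np.ndarray:
    """Naively combine the warped patches; real AVM systems blend overlaps and
    mask each camera's valid region, which is omitted in this sketch."""
    canvas = np.zeros((400, 400, 3), dtype=np.uint8)
    for name, frame in frames.items():
        patch = birds_eye_patch(frame, homographies[name])
        canvas = np.maximum(canvas, patch)   # keep the brighter pixel per location
    return canvas

In practice, the four homographies are computed once during factory or service calibration (for example, from markers on the ground around the vehicle), which is why the calibration papers discussed below focus on making that step automatic and rig-free.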


The challenges of AVM in ADAS are their high cost and complexity of installation.
The ADAS features with which AVM are often combined include blind-spot detection, lane departure warning system, a forward collision warning system, and parking assistance systems. Overall, these features can work together to provide drivers with a more comprehensive view of their surroundings, help them avoid accidents, and make it easier to park.

4.10.1. Search Terms and Recent Trends in Around-View Monitoring
‘Around view monitoring’, ‘AVM’, and ‘surround view monitoring’ are the prominent search terms used to investigate this topic. The ‘OR’ operator was used to choose and combine the most relevant and regularly used applicable phrases. That is, the search phrases ‘around view monitoring’, ‘AVM’, and ‘surround view monitoring’ were discovered. Figure 11 shows the complete search query for each of the databases. The databases include IEEE Xplore and MDPI.
Figure 11. Search queries for each of the databases for around view monitoring. The databases include IEEE Xplore and MDPI.

Ref. [205] introduces a novel method by integrating semantic segmentation with AVM for lane-level localization. Utilizing visual data and semantic information, a DL model segments lanes and localizes the vehicle, enhancing navigation precision and safety. Refs. [206,207] integrate motion estimation into an AVM for ADAS. The author of [206] employs a Kalman filter to estimate motion, improving AVM image accuracy by up to 20%. The author of [207] focuses on homogeneous surfaces, achieving 90% accuracy with image registration and optical flow. Ref. [208] discusses AVM/lidar sensor fusion for parking-based SLAM. The fusion creates a map for SLAM and parking detection, with an improved loop closure accuracy of 95%.
Ref. [209] proposes AVM-based parking space detection using image processing and machine learning, providing an effective solution. Ref. [210] presents automatic AVM camera calibration using image processing and machine learning, streamlining the process without a physical calibration rig. Ref. [211] enhances AVM image quality via synthetic image learning for deblurring, addressing blurriness and distortion. Ref. [212] introduces AVM calibration using unaligned square boards, simplifying the process and increasing accuracy without a physical rig. Ref. [213] proposes an AVM-based automatic parking system using parking line detection, offering an accurate and efficient solution. Ref. [214] suggests a DL-based approach to detect parking and collision risk areas in autonomous parking scenarios, improving accuracy and collision assessment.
The papers discussed above provide a good overview of the current state-of-the-art approaches using AVM systems for lane-level localization, motion estimation, parking space detection, and collision risk area detection and improving the performance of AVM systems. The methods proposed in these papers have the potential to significantly improve the safety and efficiency of AVM systems, which in turn improves driving and parking efficiencies, and they are likely to become increasingly common in the future.
These research papers collectively introduce innovative ap-
proaches ranging from semantic segmentation for lane-level localization to motion estima-
tion techniques for enhancing monitoring accuracy, and collectively focus on crucial aspects
such as automatic calibration, image-quality enhancement, parking-line detection, and
collision-risk assessment. Additionally, by employing advanced techniques like supervised
deblurring and DL, the integration of sensor fusion, such as AVM and lidar, significantly
improves AVM systems’ reliability, accuracy, and safety, offering promising outcomes for
applications like autonomous parking. The synthesis of these diverse techniques showcases
the recent advancements and growing potential of AVM in improving vehicle navigation,
parking, and overall safety, thus revolutionizing vehicle navigation, parking, and overall
driving experiences. The list of reviewed papers on around view monitoring is summarized
in Table 11.

Table 11. Chosen publications, source title, and the number of citations related to around-view
monitoring.

SI No. | Ref. | Year | Source Title | Cited by
1 | [205] | 2019 | IEEE Sensors Journal | 18
2 | [206] | 2019 | 7th International Conference on Mechatronics Engineering (ICOM) | -
3 | [207] | 2019 | 7th International Conference on Mechatronics Engineering (ICOM) | -
4 | [208] | 2019 | MDPI Sensors | 10
5 | [209] | 2019 | MDPI Applied Sciences | 9
6 | [210] | 2020 | IEEE Access | 3
7 | [211] | 2021 | 17th International Conference on Machine Vision and Applications (MVA) | 1
8 | [212] | 2021 | MDPI Sensors | 2
9 | [213] | 2021 | MDPI Applied Sciences | 1
10 | [214] | 2022 | MDPI Sensors | 1

5. Discussion

Datasets
The input data are the most important factor for the ADAS functionalities discussed in
this paper. The preparation of the dataset is essential for the DL approaches, particularly in
the training phase. The quality of the dataset preparation in the network model determines
how well the autonomous car can manage its behavior and make decisions.
A review of journal articles, conference papers, and book chapters found that many
studies used self-collected data or collected data online. Some researchers compiled their
own dataset for training and then compared it to a publicly available benchmark dataset.
Others only used self-collected data for training and validation. Still, others relied only on
publicly available datasets for training and validation.
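For the publicly available benchmarks, dataset preparation usually amounts to parsing the provided annotation files into the format a training pipeline expects. As a minimal, hedged illustration, the following Python sketch parses KITTI-style object labels (one object per line, with the 2D bounding box in fields 5–8); the example path and the downstream use are hypothetical.

from pathlib import Path

def load_kitti_labels(label_file: str):
    """Parse one KITTI object-label file into (class, 2D box) tuples.

    Each line holds: type, truncation, occlusion, alpha, 2D bbox (left, top,
    right, bottom), 3D dimensions, 3D location, and rotation_y.
    """
    objects = []
    for line in Path(label_file).read_text().splitlines():
        fields = line.split()
        if not fields or fields[0] == "DontCare":
            continue
        cls = fields[0]                      # e.g. 'Car', 'Pedestrian', 'Cyclist'
        bbox = tuple(float(v) for v in fields[4:8])
        objects.append((cls, bbox))
    return objects

# Hypothetical usage: count pedestrians in one training frame.
# labels = load_kitti_labels("training/label_2/000042.txt")
# print(sum(1 for cls, _ in labels if cls == "Pedestrian"))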
The choice of dataset preparation method depends on the specific research and the
availability of resources. Self-collected data can be more representative of the specific
environment in which the autonomous car will be operating, but it can be more time-
consuming and expensive to collect. Publicly available datasets are more convenient to use,
but they may not be as representative of the specific environment. Table 12 lists various
public datasets used for different state-of-the-art methods discussed in Sections 4.1–4.10.

Table 12. Datasets employed by the references chosen in this review paper.

SI No. | Name | Categories | No. of Objects | Papers Used
1 | KITTI Vision Benchmark Suite [215,216] | Vehicles, pedestrians, cyclists, and road objects | Over 70,000 images & 30,000 Lidar scans | [33–35,37,41,43,46,50,58,59,65,77,78,118,121,124,126,129,133,141,145,151,154,157,170,208]
2 | Argoverse [217] | Vehicles, pedestrians, cyclists, traffic lights, road objects, and more | Over 1M | [34]
3 | nuScenes [218] | Vehicles, pedestrians, cyclists, traffic signs, lights, road markings, and more | Over 1.4M | [35,142,146,147,150,153,163,165]
4 | GRAM [38] | Vehicles, pedestrians, cyclists | Around 1M | [38]
5 | GRAM-RTM [36] | Vehicles, pedestrians, cyclists, traffic signs, lights, road markings, and more | - | [36]
6 | UA-DETRAC [36,219,220] | Car, bus, van, and others | 8550 | [37]
7 | CDNet [221] | Cars, pedestrians, animals, buildings, trees, traffic signs, background scenes, and more | 93,702 | [38]
8 | VEDAI [222] | Car, bus, truck, motorcycle, bicycle, pedestrian, traffic light, signs, buildings, vegetation, background | 33,360 | [44]
9 | DAWN [223] | Person, car, bus, truck, motorcycle, bicycle, pedestrian, traffic light, signs, trailer, pole, buildings, vegetation, sky, ground, and unknown | 275,350 | [46,54]
10 | MS-COCO [224] | Car, person, bicycle, motorcycle, bus, truck, train, stop sign, fire hydrant, traffic light | Over 2M | [46,55,105]
11 | OSM [225] | No fixed categories | - | [49]
12 | DroneVehicle [226] | Car, truck, bus, van, freight car | 24,358 | [51]
13 | Highway Dataset [227] | Vehicles, pedestrians, bicycles, traffic signs, construction, and other objects | 42,000 | [33,55]
14 | Space Cup Competition [228] | - | - | [228]
15 | CityPersons pedestrian detection benchmark [229] | Pedestrians | 3475 | [60,70]
16 | PETS2009 [230] | People, bicycles, motorcycles, cars, vans, trucks, and other vehicles | 4005 | [71]
17 | CalTech Lanes Dataset [231] | People, bicycles, motorcycles, cars, vans, airplanes, faces, Frisbee, trucks, and more | 30,607 | [72,131]
18 | Multispectral pedestrian detection [232] | Pedestrians | 86,152 | [73–76,79]
19 | Aerial Infrared Pedestrian Detection Benchmark [80] | Pedestrians | Over 100K | [80]
20 | GTSRB [233] | Traffic signs | 51,839 | [82–89,93,98]
21 | BTSC [234] | Traffic signs | 3740 | [93]
22 | LISA [235] | Traffic signs | 6160 | [97,169]
23 | ITSRB & ITSDB [98] | Traffic signs | 500 | [98]
24 | Cure-TSD [236] | Traffic signs | 1080 | [100]
25 | Tsinghua-Tencent 100K [237] | Traffic signs | 100,000 | [102]
26 | CCTSDB [238] | Traffic signs | 7717 | [104]
27 | HRRSD [239] | Traffic signs | 58,290 | [104]
28 | CuLane [240] | Lane marking, traffic signs, dazzle lights, and more | 10,2448 | [116,117,119,122,124,128,132,134,135,137]
29 | TUSimple [241] | Vehicles, lane markings, traffic signs, pedestrians, cyclists, and more | 12,224 | [116,119,122–126,130,132,133,137,138]
30 | BDD100K [242] | Pedestrians, riders, cars, trucks, buses, traffic signs, and more | 1,407,782 | [116]
31 | Udacity Machine Learning Nanodegree Project Dataset [243] | Vehicles, lane markings, traffic signs, pedestrians, cyclists, and more | 242,999 | [139,144]
32 | LLAMAS Dataset [244] | Car, bus, truck, motorcycle, bicycle, pedestrian, traffic lights and signs, yield light, and more | 1300 | [122]
33 | Cracks and Potholes in Road Images Dataset [245] | Cracks and potholes | 3235 | [139]
34 | Waymo Open Dataset [246] | Vehicles, pedestrians, cyclists, and signs | 5,447,059 | [148]
35 | ETH Pedestrian Dataset [247] | Pedestrians, cyclists, cars, and van | 61,764 | [170]

Besides employing publicly available, free-to-use open-source datasets, the most recent
state-of-the-art work uses a self-collected dataset and proposes datasets suitable for their
proposed works and makes their proposed dataset available for other researchers. For
instance, ref. [40] manually constructed a dataset containing 316 vehicle clusters and 224
non-vehicle clusters, ref. [47] used datasets generated from the transformed results that
demonstrate significant improvement, and ref. [62] initially generated a template of a
pedestrian from a training dataset. The template was then used to match pedestrians in
the lidar point cloud. The authors of the paper evaluated their method based on a dataset
of lidar point clouds. Additionally, ref. [63] was evaluated using their dataset and [67]
was evaluated using a dataset of images captured in hazy weather, ref. [66] was trained
and tested on a dataset of images captured in different weather conditions, ref. [67] was
trained on a dataset of images from rural roads, ref. [68] was trained on infrared images
captured during nighttime, and ref. [69] was trained on a dataset of images collected from
different scenarios, including urban roads, highways, and intersections. If a public dataset
is unavailable and the target is specific to a country, as was the case for [91], in which a
public dataset for Taiwan was not available, the author evaluated their method based on a
locally built dataset [248]. On the other hand, many publications do not mention exactly
which dataset was used, instead highlighting that ‘the proposed method was evaluated on
a publicly available dataset’ [94–96].
In addition to the state-of-the-art methods discussed in the above sections, some of
the other notable publications are:
The paper [249] provides a comprehensive overview of the advancements and tech-
niques in object detection facilitated by DL methodologies. The authors survey the state-of-
the-art approaches up to the time of publication in 2019, and discuss various DL architec-
tures and algorithms used for object detection, including two-stage detectors, one-stage
detectors, anchor-based and anchor-free methods, RetinaNet, and FPNs, along with method-
ologies handling small objects, occlusions, and cluttered backgrounds. Additionally, they
present some promising research directions for future work, such as multi-task learning,
attention mechanisms, weakly supervised learning, and domain adaptation. Their paper
also explores the architectural evolution of DL models for object detection,
discussing the transition from traditional methods to the emergence of region-based and
anchor-based detectors, as well as the introduction of feature pyramid networks. The


review also covers commonly used datasets for object detection, highlighting their signifi-
cance in benchmarking algorithms, and discusses the evaluation metrics used to assess the
performance of object detection models.
The paper [250] serves as a thorough survey of driving monitoring and assistance
systems (DMAS), covering a wide range of technologies and methodologies such as driver
monitoring systems (DMS), advanced driver assistance systems (ADAS), autonomous
emergency braking (AEB), lane-departure warning systems (LDWS), adaptive cruise control
(ACC), and blind spot monitoring (BSM). It explores various aspects of systems designed
to monitor driver behavior and provide assistance, contributing to the understanding of
advancements in the field of intelligent transportation systems. The comprehensive nature
of the survey suggests an in-depth examination of existing technologies, challenges, and
potential future directions for driving monitoring and assistance systems.
The paper [251] proposes a novel approach to 3D object detection utilizing monocular
images. The key focus is on the use of a Proposal Generation Network tailored for 3D
object detection, which integrates depth information derived from monocular images to
generate proposals efficiently, contributing to improve the overall accuracy and efficiency
of 3D object detection. The paper addresses the challenge of 3D object detection using only
monocular images, which is a significant contribution, as many real-world applications
rely on single-camera setups.
The paper [252] presents an innovative one-stage approach to monocular 3D object
detection, streamlining the detection pipeline and potentially improving real-time perfor-
mance compared to traditional two-stage approaches, emphasizing the use of discrete depth
and orientation representations that suggest a departure from continuous representations,
potentially leading to more interpretable and efficient models of the detection process.
The paper [253] explores the integration of AI techniques for object detection and
distance measurement in which the algorithms are employed to identify and locate objects
in images or videos. Once the objects have been detected, the model estimates their distance
from the camera using various techniques, such as depth estimation networks, monocular
depth estimation, and stereo depth estimation. This AI-based approach to object detection
and distance measurement has the potential to revolutionize various fields. It offers high
accuracy, real-time performance, and low cost, making it a promising solution for a wide
range of applications.
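A common baseline for camera-only distance measurement is the pinhole-camera similar-triangles relation between an object's known real-world height and the height of its detected bounding box. The sketch below illustrates that baseline in Python as an assumption-laden example; it is not the specific method of [253], and the assumed object height and focal length are placeholders.

def distance_from_height(box_height_px: float, real_height_m: float,
                         focal_length_px: float) -> float:
    """Pinhole-camera estimate: distance = f * H_real / h_image."""
    if box_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    return focal_length_px * real_height_m / box_height_px

# A detected car (assumed ~1.5 m tall) spanning 60 px with a 1000 px focal
# length is estimated to be roughly 25 m away.
print(distance_from_height(box_height_px=60.0, real_height_m=1.5,
                           focal_length_px=1000.0))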

6. Conclusions and Future Trends


Various ADASs discussed in the previous section have the potential to revolutionize
the way we drive. By improving road safety, reducing driver workload, and providing a
more comfortable and enjoyable driving experience, ADASs can make our roads safer and
our journeys more enjoyable.
These DL algorithms are still under development, but they have the potential to
revolutionize the way ADASs are designed and implemented. As these algorithms become
more powerful and efficient, they will become more widely used in ADASs. Some of the
advantages of using deep learning for object detection, recognition, and tracking in ADAS
are as follows:
a. Accuracy: Deep learning algorithms have been shown to be more accurate than
traditional algorithms, especially in challenging conditions.
b. Speed: Deep learning algorithms can be very fast, which is important for real-time
applications.
c. Scalability: Deep learning algorithms can be scaled to handle large datasets and
complex tasks.
d. Robustness: Deep learning algorithms are relatively robust to noise and other distur-
bances.
These advantages are accompanied by some challenges of using DL for object detection,
recognition, and tracking in ADAS:

a. Data requirements: Deep learning algorithms require large datasets of labeled data to
train. This can be a challenge to obtain, especially for rare or unusual objects.
b. Computational requirements: Deep learning algorithms can be computationally ex-
pensive, which can limit their use in real-time applications.
c. Interpretability: Deep learning algorithms are often difficult to interpret, which can
make it difficult to understand why they make certain decisions.
Researchers are working on developing newer algorithms and improving the existing
algorithms and techniques to address these challenges. As a result, ADASs are becoming
increasingly capable of detecting and tracking objects in a variety of challenging conditions.
ADASs are still under development, but they have the potential to revolutionize the
way we drive. By making our roads safer and more efficient, ADASs can help to create a
better future for transportation.
ADASs are not without their drawbacks. They can be expensive, and they can some-
times malfunction. Additionally, drivers may become too reliant on ADASs and become
less attentive to their driving.
Overall, ADASs offer numerous potential benefits for safety and convenience. How-
ever, it is important to be aware of the drawbacks and to use these systems responsibly.
Ongoing advancements and research are focused on overcoming the existing drawbacks,
and these efforts point toward the following future trends of ADAS.
a. Multi-sensor fusion: ADASs are increasingly using multiple sensors, such as cameras,
radar, and lidar, to improve the accuracy and reliability of object detection. Multi-
sensor fusion can help to overcome the limitations of individual sensors, such as
occlusion and poor weather conditions.
b. Deep learning: DL is rapidly becoming the dominant approach for object detection,
recognition, and tracking in ADAS. Deep learning algorithms are very effective at
learning the features that are important for identifying different objects.
c. Real-time performance: ADASs must be able to detect, recognize, and track objects
in real time. This is essential for safety-critical applications, as delays in detection or
tracking can lead to accidents.
d. Robustness to challenging conditions: ADASs must be able to operate in a variety
of challenging conditions, such as different lighting conditions, weather conditions,
and road conditions. Researchers are working on developing new algorithms and
techniques to improve the robustness of ADASs to challenging conditions.
e. Integration with other ADAS features: ADASs are seeing increased integration with
other ADAS features, such as collision avoidance, lane departure warning, and adap-
tive cruise control. This integration can help to improve the overall safety of vehicles.
These are just some of the future trends in object detection, recognition, and tracking
for ADAS. As research in this area continues, ADASs are becoming increasingly capable of
detecting and tracking objects in a variety of challenging conditions. This will help to make
vehicles safer and more reliable.
Some additional trends that are worth mentioning are as follows:
a. The use of synthetic data: Synthetic data are being used increasingly often to train
object detection, recognition, and tracking algorithms. Synthetic data are generated
by computer simulations, and they can be used to create training datasets that are
more diverse and challenging than the real-world datasets. This might enhance
the efficiency of the neural networks, as they can be trained with a combination of
real-world datasets supplemented with the synthetic datasets.
b. The use of edge computing: Edge computing is a distributed computing paradigm that
brings computation and storage closer to the edge of the network. Edge computing
can be used to improve the performance and efficiency of ADASs by performing object
detection, recognition, and tracking locally on the vehicle, implying that the greater the
storage available on ADAS-equipped vehicles, the better the performance of the ADASs.

c. The use of 5G: 5G is the next generation of cellular network technology. 5G will offer
much higher bandwidth and lower latency than 4G, which will make it possible to
stream high-definition video from cameras to cloud-based servers for object detection,
recognition, and tracking. Thus, a better cellular network will aid in the continuous
training of the NNs and greatly improve the performance with newer data from real
environments.
These are just some of the future trends that are likely to shape the development of
object detection, recognition, and tracking for ADAS in the years to come.

Author Contributions: Conceptualization, V.M.S. and J.-I.G.; methodology, V.M.S. and J.-I.G.; valida-
tion, V.M.S. and J.-I.G.; formal analysis, V.M.S.; investigation, V.M.S.; resources, V.M.S. and J.-I.G.; data
curation, V.M.S.; writing—original draft preparation, V.M.S.; writing—review and editing, V.M.S.;
visualization, V.M.S.; supervision, J.-I.G.; project administration, J.-I.G.; funding acquisition, J.-I.G.
All authors have read and agreed to the published version of the manuscript.
Funding: This work is supported by the National Science and Technology Council (NSTC), Tai-
wan R.O.C. projects with grants 112-2218-E-A49-027-, 112-2218-E-002-042-, 111-2622-8-A49-023-,
111-2221-E-A49-126-MY3, 111-2634-F-A49-013-, and 110-2221-E-A49-145-MY3, and by the Satellite
Communications and AIoT Research Center/The Co-operation Platform of the Industry-Academia
Innovation School, National Yang Ming Chiao Tung University (NYCU), Taiwan R.OC. projects with
grants 111UC2N006 and 112UC2N006.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: No data were used in this article other than the state-of-the-art
publications listed in the ‘References’ section.
Acknowledgments: We extend our sincere thanks to the National Yang Ming Chiao Tung University
(NYCU), Taiwan R.O.C., National Science and Technology Council (NSTC), Taiwan R.O.C., and the
Satellite Communications and AIoT Research Center/The Co-operation Platform of the Industry-
Academia Innovation School, National Yang Ming Chiao Tung University (NYCU), Taiwan R.O.C. for
their valuable support. We extend our heartfelt thanks to all the members and staff of the Intelligent
Vision System Laboratory (iVSL), National Yang Ming Chiao Tung University, Taiwan R.O.C.
Conflicts of Interest: Author Jiun-In Guo was employed by the company eNeural Technologies Inc.
All the authors declare that the research was conducted in the absence of any commercial or financial
relationships that could be construed as a potential conflict of interest.

References
1. Dewesoft. What Is ADAS? Dewesoft Blog. 8 March 2022. Available online: https://dewesoft.com/blog/what-is-adas (accessed
on 12 March 2022).
2. FEV Consulting. Forbes Honors FEV Consulting as One of the World’s Best Management Consulting Firms. FEV Media Center. 20
July 2022. Available online: https://www.fev.com/en/media-center/press/press-releases/news-article/article/forbes-honors-
fev-consulting-as-one-of-the-worlds-best-management-consulting-firms-2022.html (accessed on 17 March 2022).
3. Insurance Institute for Highway Safety. Effectiveness of advanced driver assistance systems in preventing fatal crashes. Traffic Inj.
Prev. 2019, 20, 849–858.
4. Traffic Safety Facts: 2021 Data. Available online: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813001 (accessed
on 1 October 2022).
5. Palat, B.; Delhomme, P.; Saint Pierre, G. Numerosity heuristic in route choice based on the presence of traffic lights. Transp. Res.
Part F Traffic Psychol. Behav. 2014, 22, 104–112. [CrossRef]
6. Papadimitriou, E.; Lassarre, S.; Yannis, G. Introducing human factors in pedestrian crossing behaviour models. Transp. Res. Part F
Traffic Psychol. Behav. 2016, 36, 69–82. [CrossRef]
7. King, E.; Bourdeau, E.; Zheng, X.; Pilla, F. A combined assessment of air and noise pollution on the High Line, New York City.
Transp. Res. Part D Transp. Environ. 2016, 42, 91–103. [CrossRef]
8. Woodburn, A. An analysis of rail freight operational efficiency and mode share in the British port-hinterland container market.
Transp. Res. Part D Transp. Environ. 2017, 51, 190–202. [CrossRef]
9. Haybatollahi, M.; Czepkiewicz, M.; Laatikainen, T.; Kyttä, M. Neighbourhood preferences, active travel behaviour, and built
environment: An exploratory study. Transp. Res. Part F Traffic Psychol. Behav. 2015, 29, 57–69. [CrossRef]
10. Honda Worldwide. Honda Motor Co. Advanced Brake Introduced for Motorcycles by Honda ahead of Others. Available online:
https://web.archive.org/web/20160310200739/http://world.honda.com/motorcycle-technology/brake/p2.html (accessed on
30 November 2022).
11. American Honda. Combined Braking System (CBS). 9 December 2013. Available online: https://web.archive.org/web/20180710
010624/http://powersports.honda.com/experience/articles/090111c08139be28.aspx (accessed on 16 September 2022).
12. Blancher, A.; Zuby, D. Interview: Into the Future with ADAS and Vehicle Autonomy. Visualize, Verisk. 8 March 2023. Available
online: https://www.verisk.com/insurance/visualize/interview-into-the-future-with-adas-and-vehicle-autonomy/ (accessed
on 16 September 2022).
13. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review.
Sensors 2021, 21, 2140. [CrossRef] [PubMed]
14. Continental, A.G. ADAS Challenges and Solutions. 2022. Available online: https://conf.laas.fr/WORCS13/Slides/WORCS-13_2
013-SergeBoverie.pdf (accessed on 8 March 2023).
15. Blanco, S. Advanced Driver-Assistance Systems. What the Heck Are They Anyway? Forbes. 26 May 2022. Available online:
https://www.forbes.com/wheels/advice/advanced-driver-assistance-systems-what-are-they/ (accessed on 20 May 2023).
16. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [CrossRef]
17. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [CrossRef]
18. Swain, M.J.; Ballard, D.H. Color indexing. Int. J. Comput. Vis. 1991, 7, 11–32. [CrossRef]
19. Sobel, I.; Feldman, G. A 3 × 3 Isotropic Gradient Operator for Edge Detection; Presented at the Stanford Artificial Project; Stanford
University: Stanford, CA, USA, 1968.
20. Belongie, S.; Malik, J.; Puzicha, J. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach.
Intell. 2002, 24, 509–522. [CrossRef]
21. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [CrossRef]
22. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006.
23. Wu, J.K.; Wong, Y.F. Bayesian Approach for Data Fusion in Sensor Networks. In Proceedings of the 2006 9th International
Conference on Information Fusion, Florence, Italy, 10–13 July 2006; pp. 1–5. [CrossRef]
24. Sun, Y.-Q.; Tian, J.-W.; Liu, J. Target Recognition using Bayesian Data Fusion Method. In Proceedings of the 2006 International
Conference on Machine Learning and Cybernetics, Dalian, China, 13–16 August 2006; pp. 3288–3292. [CrossRef]
25. Le Hegarat-Mascle, S.L.; Bloch, I.; Vidal-Madjar, D. Application of Dempster-Shafer evidence theory to unsupervised classification
in multisource remote sensing. IEEE Trans. Geosci. Remote Sens. 1997, 35, 1018–1031. [CrossRef]
26. Chen, C.; Jafari, R.; Kehtarnavaz, N. Improving Human Action Recognition Using Fusion of Depth Camera and Inertial Sensors.
IEEE Trans. Hum. Mach. Syst. 2015, 45, 51–61. [CrossRef]
27. Ding, B.; Wen, G.; Huang, X.; Ma, C.; Yang, X. Target Recognition in Synthetic Aperture Radar Images via Matching of Attributed
Scattering Centers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3334–3347. [CrossRef]
28. Gu, J.; Lind, A.; Chhetri, T.R.; Bellone, M.; Sell, R. End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous
Vehicles. Sensors 2023, 23, 6783. [CrossRef] [PubMed]
29. RGBSI. What Is Sensor Fusion for Autonomous Driving Systems?—Part 1. RGBSI Blog. 15 February 2023. Available online:
https://blog.rgbsi.com/sensor-fusion-autonomous-driving-systems-part-1 (accessed on 30 April 2023).
30. Sasken. Sensor Fusion Paving the Way for Autonomous Vehicles. Sasken Blog. 22 February 2023. Available online: https:
//blog.sasken.com/sensor-fusion-paving-the-way-for-autonomous-vehicles (accessed on 18 May 2023).
31. Haider, A.; Pigniczki, M.; Köhler, M.H.; Fink, M.; Schardt, M.; Cichy, Y.; Zeh, T.; Haas, L.; Poguntke, T.; Jakobi, M.; et al.
Development of High-Fidelity Automotive LiDAR Sensor Model with Standardized Interfaces. Sensors 2022, 22, 7556. [CrossRef]
32. Waymo. The Waymo Driver Handbook: Teaching an Autonomous Vehicle How to Perceive and Understand the World around It.
Waymo Blog. 11 October 2021. Available online: https://waymo.com/blog/2021/10/the-waymo-driver-handbook-perception.
html (accessed on 18 May 2023).
33. Hu, X.; Xu, X.; Xiao, Y.; Chen, H.; He, S.; Qin, J.; Heng, P.-A. SINet: A Scale-Insensitive Convolutional Neural Network for Fast
Vehicle Detection. IEEE Trans. Intell. Transp. Syst. 2019, 20, 1010–1019. [CrossRef]
34. Hu, X.; Xu, X.; Xiao, Y.; Chen, H.; He, S.; Qin, J.; Heng, P.-A. Joint Monocular 3D Vehicle Detection and Tracking. In Proceedings
of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November
2019; pp. 5389–5398. [CrossRef]
35. Chadwick, S.; Maddern, W.; Newman, P. Distant Vehicle Detection Using Radar and Vision. In Proceedings of the 2019
International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 8311–8317. [CrossRef]
36. López-Sastre, R.J.; Herranz-Perdiguero, C.; Guerrero-Gómez-Olmedo, R.; Oñoro-Rubio, D.; Maldonado-Bascón, S. Boosting
Multi-Vehicle Tracking with a Joint Object Detection and Viewpoint Estimation Sensor. Sensors 2019, 19, 4062. [CrossRef]
37. Zhang, F.; Li, C.; Yang, F. Vehicle Detection in Urban Traffic Surveillance Images Based on Convolutional Neural Networks with
Feature Concatenation. Sensors 2019, 19, 594. [CrossRef]
38. Gomaa, A.; Abdelwahab, M.M.; Abo-Zahhad, M.; Minematsu, T.; Taniguchi, R.-I. Robust Vehicle Detection and Counting
Algorithm Employing a Convolution Neural Network and Optical Flow. Sensors 2019, 19, 4588. [CrossRef] [PubMed]
39. Liu, H.; Ma, J.; Xu, T.; Yan, W.; Ma, L.; Zhang, X. Vehicle Detection and Classification Using Distributed Fiber Optic Acoustic
Sensing. IEEE Trans. Veh. Technol. 2020, 69, 1363–1374. [CrossRef]
40. Zhang, J.; Xiao, W.; Coifman, B.; Mills, J.P. Vehicle Tracking and Speed Estimation From Roadside Lidar. IEEE J. Sel. Top. Appl.
Earth Obs. Remote Sens. 2020, 13, 5597–5608. [CrossRef]
41. Wang, X.; Wang, S.; Cao, J.; Wang, Y. Data-Driven Based Tiny-YOLOv3 Method for Front Vehicle Detection Inducing SPP-Net.
IEEE Access 2020, 8, 110227–110236. [CrossRef]
42. Kim, T.; Park, T.-H. Extended Kalman Filter (EKF) Design for Vehicle Position Tracking Using Reliability Function of Radar and
Lidar. Sensors 2020, 20, 4126. [CrossRef] [PubMed]
43. Cao, J.; Song, C.; Song, S.; Peng, S.; Wang, D.; Shao, Y.; Xiao, F. Front Vehicle Detection Algorithm for Smart Car Based on
Improved SSD Model. Sensors 2020, 20, 4646. [CrossRef] [PubMed]
44. Mo, N.; Yan, L. Improved Faster RCNN Based on Feature Amplification and Oversampling Data Augmentation for Oriented
Vehicle Detection in Aerial Images. Remote Sens. 2020, 12, 2558. [CrossRef]
45. Zhang, R.; Ishikawa, A.; Wang, W.; Striner, B.; Tonguz, O.K. Using Reinforcement Learning with Partial Vehicle Detection for
Intelligent Traffic Signal Control. IEEE Trans. Intell. Transp. Syst. 2021, 22, 404–415. [CrossRef]
46. Hassaballah, M.; Kenk, M.A.; Muhammad, K.; Minaee, S. Vehicle Detection and Tracking in Adverse Weather Using a Deep
Learning Framework. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4230–4242. [CrossRef]
47. Lin, C.-T.; Huang, S.-W.; Wu, Y.-Y.; Lai, S.-H. GAN-Based Day-to-Night Image Style Transfer for Nighttime Vehicle Detection.
IEEE Trans. Intell. Transp. Syst. 2021, 22, 951–963. [CrossRef]
48. Balamuralidhar, N.; Tilon, S.; Nex, F. MultEYE: Monitoring System for Real-Time Vehicle Detection, Tracking and Speed Estimation
from UAV Imagery on Edge-Computing Platforms. Remote Sens. 2021, 13, 573. [CrossRef]
49. Chen, Y.; Qin, R.; Zhang, G.; Albanwan, H. Spatial-Temporal Analysis of Traffic Patterns during the COVID-19 Epidemic by
Vehicle Detection Using Planet Remote-Sensing Satellite Images. Remote Sens. 2021, 13, 208. [CrossRef]
50. Li, H.; Zhao, S.; Zhao, W.; Zhang, L.; Shen, J. One-Stage Anchor-Free 3D Vehicle Detection from LiDAR Sensors. Sensors 2021, 21,
2651. [CrossRef] [PubMed]
51. Sun, Y.; Cao, B.; Zhu, P.; Hu, Q. Drone-Based RGB-Infrared Cross-Modality Vehicle Detection Via Uncertainty-Aware Learning.
IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6700–6713. [CrossRef]
52. Zhao, J.; Hao, S.; Dai, C.; Zhang, H.; Zhao, L.; Ji, Z.; Ganchev, I. Improved Vision-Based Vehicle Detection and Classification by
Optimized YOLOv4. IEEE Access 2022, 10, 8590–8603. [CrossRef]
53. Bell, A.; Mantecon, T.; Diaz, C.; Del-Blanco, C.R.; Jaureguizar, F.; Garcia, N. A Novel System for Nighttime Vehicle Detection
Based on Foveal Classifiers with Real-Time Performance. IEEE Trans. Intell. Transp. Syst. 2022, 23, 5421–5433. [CrossRef]
54. Humayun, M.; Ashfaq, F.; Jhanjhi, N.Z.; Alsadun, M.K. Traffic Management: Multi-Scale Vehicle Detection in Varying Weather
Conditions Using YOLOv4 and Spatial Pyramid Pooling Network. Electronics 2022, 11, 2748. [CrossRef]
55. Charouh, Z.; Ezzouhri, A.; Ghogho, M.; Guennoun, Z. A Resource-Efficient CNN-Based Method for Moving Vehicle Detection.
Sensors 2022, 22, 1193. [CrossRef]
56. Fan, Y.; Qiu, Q.; Hou, S.; Li, Y.; Xie, J.; Qin, M.; Chu, F. Application of Improved YOLOv5 in Aerial Photographing Infrared
Vehicle Detection. Electronics 2022, 11, 2344. [CrossRef]
57. National Highway Traffic Safety Administration. Traffic Safety Facts 2021 Data: Pedestrians. [Fact Sheet]; 27 June 2023. Available
online: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813450 (accessed on 2 May 2023).
58. Liu, W.; Liao, S.; Ren, W.; Hu, W.; Yu, Y. High-Level Semantic Feature Detection: A New Perspective for Pedestrian Detection. In
Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA,
16–20 June 2019; pp. 5182–5191. [CrossRef]
59. Liu, S.; Huang, D.; Wang, Y. Adaptive NMS: Refining Pedestrian Detection in a Crowd. In Proceedings of the 2019 IEEE/CVF
Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 6452–6461.
[CrossRef]
60. Pang, Y.; Xie, J.; Khan, M.H.; Anwer, R.M.; Khan, F.S.; Shao, L. Mask-Guided Attention Network for Occluded Pedestrian
Detection. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea,
27 October–2 November 2019; pp. 4966–4974. [CrossRef]
61. Dimitrievski, M.; Veelaert, P.; Philips, W. Behavioral Pedestrian Tracking Using a Camera and LiDAR Sensors on a Moving
Vehicle. Sensors 2019, 19, 391. [CrossRef]
62. Liu, K.; Wang, W.; Wang, J. Pedestrian Detection with Lidar Point Clouds Based on Single Template Matching. Electronics 2019,
8, 780. [CrossRef]
63. He, M.; Luo, H.; Hui, B.; Chang, Z. Pedestrian Flow Tracking and Statistics of Monocular Camera Based on Convolutional Neural
Network and Kalman Filter. Appl. Sci. 2019, 9, 1624. [CrossRef]
64. Li, G.; Yang, Y.; Qu, X. Deep Learning Approaches on Pedestrian Detection in Hazy Weather. IEEE Trans. Ind. Electron. 2020, 67,
8889–8899. [CrossRef]
65. Huang, X.; Ge, Z.; Jie, Z.; Yoshie, O. NMS by Representative Region: Towards Crowded Pedestrian Detection by Proposal Pairing.
In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19
June 2020; pp. 10747–10756. [CrossRef]
66. Lin, C.; Lu, J.; Wang, G.; Zhou, J. Graininess-Aware Deep Feature Learning for Robust Pedestrian Detection. IEEE Trans. Image
Process. 2020, 29, 3820–3834. [CrossRef]
67. Barba-Guaman, L.; Eugenio Naranjo, J.; Ortiz, A. Deep Learning Framework for Vehicle and Pedestrian Detection in Rural Roads
on an Embedded GPU. Electronics 2020, 9, 589. [CrossRef]
68. Chen, Y.; Shin, H. Pedestrian Detection at Night in Infrared Images Using an Attention-Guided Encoder-Decoder Convolutional
Neural Network. Appl. Sci. 2020, 10, 809. [CrossRef]
69. Cao, J.; Song, C.; Peng, S.; Song, S.; Zhang, X.; Shao, Y.; Xiao, F. Pedestrian Detection Algorithm for Intelligent Vehicles in Complex
Scenarios. Sensors 2020, 20, 3646. [CrossRef]
70. Hsu, W.-Y.; Lin, W.-Y. Ratio-and-Scale-Aware YOLO for Pedestrian Detection. IEEE Trans. Image Process. 2021, 30, 934–947.
[CrossRef]
71. Stadler, D.; Beyerer, J. Improving Multiple Pedestrian Tracking by Track Management and Occlusion Handling. In Proceedings of
the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp.
10953–10962. [CrossRef]
72. Yang, P.; Zhang, G.; Wang, L.; Xu, L.; Deng, Q.; Yang, M.-H. A Part-Aware Multi-Scale Fully Convolutional Network for Pedestrian
Detection. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1125–1137. [CrossRef]
73. Cao, Z.; Yang, H.; Zhao, J.; Guo, S.; Li, L. Attention Fusion for One-Stage Multispectral Pedestrian Detection. Sensors 2021, 21, 4184.
[CrossRef]
74. Nataprawira, J.; Gu, Y.; Goncharenko, I.; Kamijo, S. Pedestrian Detection Using Multispectral Images and a Deep Neural Network.
Sensors 2021, 21, 2536. [CrossRef] [PubMed]
75. Chen, X.; Liu, L.; Tan, X. Robust Pedestrian Detection Based on Multi-Spectral Image Fusion and Convolutional Neural Networks.
Electronics 2022, 11, 1. [CrossRef]
76. Kim, J.U.; Park, S.; Ro, Y.M. Uncertainty-Guided Cross-Modal Learning for Robust Multispectral Pedestrian Detection. IEEE
Trans. Circuits Syst. Video Technol. 2022, 32, 1510–1523. [CrossRef]
77. Dasgupta, K.; Das, A.; Das, S.; Bhattacharya, U.; Yogamani, S. Spatio-Contextual Deep Network-Based Multimodal Pedestrian
Detection for Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15940–15950. [CrossRef]
78. Held, P.; Steinhauser, D.; Koch, A.; Brandmeier, T.; Schwarz, U.T. A Novel Approach for Model-Based Pedestrian Tracking Using
Automotive Radar. IEEE Trans. Intell. Transp. Syst. 2022, 23, 7082–7095. [CrossRef]
79. Roszyk, K.; Nowicki, M.R.; Skrzypczyński, P. Adopting the YOLOv4 Architecture for Low-Latency Multispectral Pedestrian
Detection in Autonomous Driving. Sensors 2022, 22, 1082. [CrossRef]
80. Shao, Y.; Zhang, X.; Chu, H.; Zhang, X.; Zhang, D.; Rao, Y. AIR-YOLOv3: Aerial Infrared Pedestrian Detection via an Improved
YOLOv3 with Network Pruning. Appl. Sci. 2022, 12, 3627. [CrossRef]
81. Lv, H.; Yan, H.; Liu, K.; Zhou, Z.; Jing, J. YOLOv5-AC: Attention Mechanism-Based Lightweight YOLOv5 for Track Pedestrian
Detection. Sensors 2022, 22, 5903. [CrossRef]
82. Yuan, Y.; Xiong, Z.; Wang, Q. VSSA-NET: Vertical Spatial Sequence Attention Network for Traffic Sign Detection. IEEE Trans.
Image Process. 2019, 28, 3423–3434. [CrossRef]
83. Li, J.; Wang, Z. Real-Time Traffic Sign Recognition Based on Efficient CNNs in the Wild. IEEE Trans. Intell. Transp. Syst. 2019, 20,
975–984. [CrossRef]
84. Liu, Z.; Du, J.; Tian, F.; Wen, J. MR-CNN: A Multi-Scale Region-Based Convolutional Neural Network for Small Traffic Sign
Recognition. IEEE Access 2019, 7, 57120–57128. [CrossRef]
85. Tian, Y.; Gelernter, J.; Wang, X.; Li, J.; Yu, Y. Traffic Sign Detection Using a Multi-Scale Recurrent Attention Network. IEEE Trans.
Intell. Transp. Syst. 2019, 20, 4466–4475. [CrossRef]
86. Cao, J.; Song, C.; Peng, S.; Xiao, F.; Song, S. Improved Traffic Sign Detection and Recognition Algorithm for Intelligent Vehicles.
Sensors 2019, 19, 4021. [CrossRef] [PubMed]
87. Shao, F.; Wang, X.; Meng, F.; Zhu, J.; Wang, D.; Dai, J. Improved Faster R-CNN Traffic Sign Detection Based on a Second Region of
Interest and Highly Possible Regions Proposal Network. Sensors 2019, 19, 2288. [CrossRef] [PubMed]
88. Zhang, J.; Xie, Z.; Sun, J.; Zou, X.; Wang, J. A Cascaded R-CNN with Multiscale Attention and Imbalanced Samples for Traffic
Sign Detection. IEEE Access 2020, 8, 29742–29754. [CrossRef]
89. Tabernik, D.; Skočaj, D. Deep Learning for Large-Scale Traffic-Sign Detection and Recognition. IEEE Trans. Intell. Transp. Syst.
2020, 21, 1427–1440. [CrossRef]
90. Kamal, U.; Tonmoy, T.I.; Das, S.; Hasan, M.K. Automatic Traffic Sign Detection and Recognition Using SegU-Net and a Modified
Tversky Loss Function with L1-Constraint. IEEE Trans. Intell. Transp. Syst. 2020, 21, 1467–1479. [CrossRef]
91. Tai, S.-K.; Dewi, C.; Chen, R.-C.; Liu, Y.-T.; Jiang, X.; Yu, H. Deep Learning for Traffic Sign Recognition Based on Spatial Pyramid
Pooling with Scale Analysis. Appl. Sci. 2020, 10, 6997. [CrossRef]
92. Dewi, C.; Chen, R.-C.; Tai, S.-K. Evaluation of Robust Spatial Pyramid Pooling Based on Convolutional Neural Network for
Traffic Sign Recognition System. Electronics 2020, 9, 889. [CrossRef]
93. Nartey, O.T.; Yang, G.; Asare, S.K.; Wu, J.; Frempong, L.N. Robust Semi-Supervised Traffic Sign Recognition via Self-Training and
Weakly-Supervised Learning. Sensors 2020, 20, 2684. [CrossRef]
94. Dewi, C.; Chen, R.-C.; Liu, Y.-T.; Jiang, X.; Hartomo, K.D. Yolo V4 for Advanced Traffic Sign Recognition with Synthetic Training
Data Generated by Various GAN. IEEE Access 2021, 9, 97228–97242. [CrossRef]
95. Wang, L.; Zhou, K.; Chu, A.; Wang, G.; Wang, L. An Improved Light-Weight Traffic Sign Recognition Algorithm Based on
YOLOv4-Tiny. IEEE Access 2021, 9, 124963–124971. [CrossRef]
96. Cao, J.; Zhang, J.; Jin, X. A Traffic-Sign Detection Algorithm Based on Improved Sparse R-CNN. IEEE Access 2021, 9, 22774–122788.
[CrossRef]
97. Lopez-Montiel, M.; Orozco-Rosas, U.; Sánchez-Adame, M.; Picos, K.; Ross, O.H.M. Evaluation Method of Deep Learning-Based
Embedded Systems for Traffic Sign Detection. IEEE Access 2021, 9, 101217–101238. [CrossRef]
98. Zhou, K.; Zhan, Y.; Fu, D. Learning Region-Based Attention Network for Traffic Sign Recognition. Sensors 2021, 21, 686. [CrossRef]
99. Koh, D.-W.; Kwon, J.-K.; Lee, S.-G. Traffic Sign Recognition Evaluation for Senior Adults Using EEG Signals. Sensors 2021, 21, 4607.
[CrossRef] [PubMed]
100. Ahmed, S.; Kamal, U.; Hasan, M.K. DFR-TSD: A Deep Learning Based Framework for Robust Traffic Sign Detection Under
Challenging Weather Conditions. IEEE Trans. Intell. Transp. Syst. 2022, 23, 5150–5162. [CrossRef]
101. Xie, K.; Zhang, Z.; Li, B.; Kang, J.; Niyato, D.; Xie, S.; Wu, Y. Efficient Federated Learning with Spike Neural Networks for Traffic
Sign Recognition. IEEE Trans. Veh. Technol. 2022, 71, 9980–9999. [CrossRef]
102. Min, W.; Liu, R.; He, D.; Han, Q.; Wei, Q.; Wang, Q. Traffic Sign Recognition Based on Semantic Scene Understanding and
Structural Traffic Sign Location. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15794–15807. [CrossRef]
103. Gu, Y.; Si, B. A Novel Lightweight Real-Time Traffic Sign Detection Integration Framework Based on YOLOv4. Entropy 2022,
24, 487. [CrossRef]
104. Liu, Y.; Shi, G.; Li, Y.; Zhao, Z. M-YOLO: Traffic Sign Detection Algorithm Applicable to Complex Scenarios. Symmetry 2022,
14, 952. [CrossRef]
105. Wang, X.; Guo, J.; Yi, J.; Song, Y.; Xu, J.; Yan, W.; Fu, X. Real-Time and Efficient Multi-Scale Traffic Sign Detection Method for
Driverless Cars. Sensors 2022, 22, 6930. [CrossRef] [PubMed]
106. Zhao, Y.; Mammeri, A.; Boukerche, A. A Novel Real-time Driver Monitoring System Based on Deep Convolutional Neural
Network. In Proceedings of the 2019 IEEE International Symposium on Robotic and Sensors Environments (ROSE), Ottawa, ON,
Canada, 17–18 June 2019; pp. 1–7. [CrossRef]
107. Hijaz, A.; Louie, W.-Y.G.; Mansour, I. Towards a Driver Monitoring System for Estimating Driver Situational Awareness. In
Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New
Delhi, India, 14–18 October 2019; pp. 1–6. [CrossRef]
108. Kim, W.; Jung, W.-S.; Choi, H.K. Lightweight Driver Monitoring System Based on Multi-Task Mobilenets. Sensors 2019, 19, 3200.
[CrossRef] [PubMed]
109. Yoo, M.W.; Han, D.S. Optimization Algorithm for Driver Monitoring System using Deep Learning Approach. In Proceedings of
the 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Fukuoka, Japan, 19–21
February 2020; pp. 043–046. [CrossRef]
110. Pondit, A.; Dey, A.; Das, A. Real-time Driver Monitoring System Based on Visual Cues. In Proceedings of the 2020 6th International
Conference on Interactive Digital Media (ICIDM), Bandung, Indonesia, 14–15 December 2020; pp. 1–6. [CrossRef]
111. Supraja, P.; Revati, P.; Ram, K.S.; Jyotsna, C. An Intelligent Driver Monitoring System. In Proceedings of the 2021 2nd International
Conference on Communication, Computing and Industry 4.0 (C2I4), Bangalore, India, 16–17 December 2021; pp. 1–5. [CrossRef]
112. Zhu, L.; Xiao, Y.; Li, X. Hybrid driver monitoring system based on Internet of Things and machine learning. In Proceedings of the
2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 15–17
January 2021; pp. 635–638. [CrossRef]
113. Darapaneni, N.; Parikh, B.; Paduri, A.R.; Kumar, S.; Beedkar, T.; Narayanan, A.; Tripathi, N.; Khoche, T. Distracted Driver
Monitoring System Using AI. In Proceedings of the 2022 Interdisciplinary Research in Technology and Management (IRTM),
Kolkata, India, 24–26 February 2022; pp. 1–8. [CrossRef]
114. Jeon, S.; Lee, S.; Lee, E.; Shin, J. Driver Monitoring System based on Distracted Driving Decision Algorithm. In Proceedings of the
2022 13th International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of
Korea, 19–21 October 2022; pp. 2280–2283. [CrossRef]
115. National Highway Traffic Safety Administration. NHTSA Orders Crash Reporting for Vehicles Equipped with Advanced Driver
Assistance Systems. 31 May 2023. Available online: https://www.nhtsa.gov/press-releases/nhtsa-orders-crash-reporting-
vehicles-equipped-advanced-driver-assistance-systems (accessed on 24 June 2023).
116. Hou, Y.; Ma, Z.; Liu, C.; Loy, C.C. Learning Lightweight Lane Detection CNNs by Self Attention Distillation. In Proceedings of
the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November
2019; pp. 1013–1021. [CrossRef]
117. Philion, J. FastDraw: Addressing the Long Tail of Lane Detection by Adapting a Sequential Prediction Network. In Proceedings
of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019;
pp. 11574–11583. [CrossRef]
118. Garnett, N.; Cohen, R.; Pe, T.; Lahav, R.; Levi, D. 3D-LaneNet: End-to-End 3D Multiple Lane Detection. In Proceedings of the
2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019;
pp. 2921–2930. [CrossRef]
119. Liu, T.; Chen, Z.; Yang, Y.; Wu, Z.; Li, H. Lane Detection in Low-light Conditions Using an Efficient Data Enhancement: Light
Conditions Style Transfer. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19
October–13 November 2020; pp. 1394–1399. [CrossRef]
120. Lu, Z.; Xu, Y.; Shan, X.; Liu, L.; Wang, X.; Shen, J. A Lane Detection Method Based on a Ridge Detector and Regional G-RANSAC.
Sensors 2019, 19, 4028. [CrossRef] [PubMed]
121. Yang, W.; Zhang, X.; Lei, Q.; Shen, D.; Xiao, P.; Huang, Y. Lane Position Detection Based on Long Short-Term Memory (LSTM).
Sensors 2020, 20, 3115. [CrossRef] [PubMed]
122. Wang, Q.; Han, T.; Qin, Z.; Gao, J.; Li, X. Multitask Attention Network for Lane Detection and Fitting. IEEE Trans. Neural Netw.
Learn. Syst. 2022, 33, 1066–1078. [CrossRef] [PubMed]
123. Cao, J.; Song, C.; Song, S.; Xiao, F.; Peng, S. Lane Detection Algorithm for Intelligent Vehicles in Complex Road Conditions and
Dynamic Environments. Sensors 2019, 19, 3166. [CrossRef]
124. Wang, X.; Qian, Y.; Wang, C.; Yang, M. Map-Enhanced Ego-Lane Detection in the Missing Feature Scenarios. IEEE Access 2020, 8,
107958–107968. [CrossRef]
125. Chen, Y.; Xiang, Z. Lane Mark Detection with Pre-Aligned Spatial-Temporal Attention. Sensors 2022, 22, 794. [CrossRef]
126. Lee, Y.; Park, M.-k.; Park, M. Improving Lane Detection Performance for Autonomous Vehicle Integrating Camera with Dual
Light Sensors. Electronics 2022, 11, 1474. [CrossRef]
127. Kim, D.-H. Lane Detection Method with Impulse Radio Ultra-Wideband Radar and Metal Lane Reflectors. Sensors 2020, 20, 324.
[CrossRef] [PubMed]
128. Suder, J.; Podbucki, K.; Marciniak, T.; Dąbrowski, A. Low Complexity Lane Detection Methods for Light Photometry System.
Electronics 2021, 10, 1665. [CrossRef]
129. Kuo, C.Y.; Lu, Y.R.; Yang, S.M. On the Image Sensor Processing for Lane Detection and Control in Vehicle Lane Keeping Systems.
Sensors 2019, 19, 1665. [CrossRef] [PubMed]
130. Zou, Q.; Jiang, H.; Dai, Q.; Yue, Y.; Chen, L.; Wang, Q. Robust Lane Detection From Continuous Driving Scenes Using Deep
Neural Networks. IEEE Trans. Veh. Technol. 2020, 69, 41–54. [CrossRef]
131. Gao, Q.; Yin, H.; Zhang, W. Lane Departure Warning Mechanism of Limited False Alarm Rate Using Extreme Learning Residual
Network and ϵ-Greedy LSTM. Sensors 2020, 20, 644. [CrossRef]
132. Tabelini, L.; Berriel, R.; Paixão, T.M.; Badue, C.; De Souza, A.F.; Oliveira-Santos, T. Keep your Eyes on the Lane: Real-time
Attention-guided Lane Detection. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 294–302. [CrossRef]
133. Liu, L.; Chen, X.; Zhu, S.; Tan, P. CondLaneNet: A Top-to-down Lane Detection Framework Based on Conditional Convolution. In
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October
2021; pp. 3753–3762. [CrossRef]
134. Dewangan, D.K.; Sahu, S.P. Driving Behavior Analysis of Intelligent Vehicle System for Lane Detection Using Vision-Sensor. IEEE
Sens. J. 2021, 21, 6367–6375. [CrossRef]
135. Haris, M.; Glowacz, A. Lane Line Detection Based on Object Feature Distillation. Electronics 2021, 10, 1102. [CrossRef]
136. Lu, S.; Luo, Z.; Gao, F.; Liu, M.; Chang, K.; Piao, C. A Fast and Robust Lane Detection Method Based on Semantic Segmentation
and Optical Flow Estimation. Sensors 2021, 21, 400. [CrossRef]
137. Ko, Y.; Lee, Y.; Azam, S.; Munir, F.; Jeon, M.; Pedrycz, W. Key Points Estimation and Point Instance Segmentation Approach for
Lane Detection. IEEE Trans. Intell. Transp. Syst. 2022, 23, 8949–8958. [CrossRef]
138. Zheng, T.; Huang, Y.; Liu, Y.; Tang, W.; Yang, Z.; Cai, D.; He, X. CLRNet: Cross-Layer Refinement Network for Lane Detection. In
Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA,
18–24 June 2022; pp. 888–897. [CrossRef]
139. Khan, M.A.-M.; Haque, M.F.; Hasan, K.R.; Alajmani, S.H.; Baz, M.; Masud, M.; Nahid, A.-A. LLDNet: A Lightweight Lane
Detection Approach for Autonomous Cars Using Deep Learning. Sensors 2022, 22, 5595. [CrossRef]
140. National Highway Traffic Safety Administration. Traffic Safety Facts 2020 Data: Crashes. 20 September 2021. Available online:
https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812801 (accessed on 19 July 2023).
141. Lee, K.; Kum, D. Collision Avoidance/Mitigation System: Motion Planning of Autonomous Vehicle via Predictive Occupancy
Map. IEEE Access 2019, 7, 52846–52857. [CrossRef]
142. Manghat, S.K.; El-Sharkawy, M. Forward Collision Prediction with Online Visual Tracking. In Proceedings of the 2019 IEEE
International Conference on Vehicular Electronics and Safety (ICVES), Cairo, Egypt, 4–6 September 2019; pp. 1–5. [CrossRef]
143. Yang, W.; Wan, B.; Qu, X. A Forward Collision Warning System Using Driving Intention Recognition of the Front Vehicle and
V2V Communication. IEEE Access 2020, 8, 11268–11278. [CrossRef]
144. Kumar, S.; Shaw, V.; Maitra, J.; Karmakar, R. FCW: A Forward Collision Warning System Using Convolutional Neural Network.
In Proceedings of the 2020 International Conference on Electrical and Electronics Engineering (ICE3), Gorakhpur, India, 14–15
February 2020; pp. 1–5. [CrossRef]
145. Wang, H.-M.; Lin, H.-Y. A Real-Time Forward Collision Warning Technique Incorporating Detection and Depth Estimation
Networks. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON,
Canada, 11–14 October 2020; pp. 1966–1971. [CrossRef]
146. Lin, H.-Y.; Dai, J.-M.; Wu, L.-T.; Chen, L.-Q. A Vision-Based Driver Assistance System with Forward Collision and Overtaking
Detection. Sensors 2020, 20, 5139. [CrossRef] [PubMed]
147. Tang, J.; Li, J. End-to-End Monocular Range Estimation for Forward Collision Warning. Sensors 2020, 20, 5941. [CrossRef]
[PubMed]
148. Lim, Q.; Lim, Y.; Muhammad, H.; Tan, D.W.M.; Tan, U.-X. Forward collision warning system for motorcyclist using smartphone
sensors based on time-to-collision and trajectory prediction. J. Intell. Connect. Veh. 2021, 4, 93–103. [CrossRef]
149. Farhat, W.; Rhaiem, O.B.; Faiedh, H.; Souani, C. Cooperative Forward Collision Avoidance System Based on Deep Learning. In
Proceedings of the 2021 14th International Conference on Developments in eSystems Engineering (DeSE), Sharjah, United Arab
Emirates, 7–10 December 2021; pp. 515–519. [CrossRef]
150. Hong, S.; Park, D. Lightweight Collaboration of Detecting and Tracking Algorithm in Low-Power Embedded Systems for Forward
Collision Warning. In Proceedings of the 2021 Twelfth International Conference on Ubiquitous and Future Networks (ICUFN),
Jeju Island, Republic of Korea, 17–20 August 2021; pp. 159–162. [CrossRef]
151. Albarella, N.; Masuccio, F.; Novella, L.; Tufo, M.; Fiengo, G. A Forward-Collision Warning System for Electric Vehicles: Experi-
mental Validation in Virtual and Real Environment. Energies 2021, 14, 4872. [CrossRef]
152. Liu, Y.; Wang, X.; Zhang, Y.; Wang, Y. An effective target selection method for forward collision on a curve based on V2X. In
Proceedings of the 2022 7th International Conference on Intelligent Informatics and Biomedical Science (ICIIBMS), Nara, Japan,
24–26 November 2022; pp. 110–114. [CrossRef]
153. Yu, R.; Ai, H. Vehicle Forward Collision Warning based upon Low-Frequency Video Data: A hybrid Deep Learning Modeling
Approach. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau,
China, 8–12 October 2022; pp. 59–64. [CrossRef]
154. Olou, H.B.; Ezin, E.C.; Dembele, J.M.; Cambier, C. FCPNet: A novel model to predict forward collision based upon CNN. In
Proceedings of the 2022 22nd International Conference on Control, Automation, and Systems (ICCAS), Jeju, Republic of Korea, 27
November–1 December 2022; pp. 1327–1332. [CrossRef]
155. Pak, J.M. Hybrid Interacting Multiple Model Filtering for Improving the Reliability of Radar-Based Forward Collision Warning
Systems. Sensors 2022, 22, 875. [CrossRef]
156. Bagi, S.S.G.; Garakani, H.G.; Moshiri, B.; Khoshnevisan, M. Sensing Structure for Blind Spot Detection System in Vehicles. In
Proceedings of the 2019 International Conference on Control, Automation and Information Sciences (ICCAIS), Chengdu, China,
24–27 October 2019; pp. 1–6. [CrossRef]
157. Sugiura, T.; Watanabe, T. Probable Multi-hypothesis Blind Spot Estimation for Driving Risk Prediction. In Proceedings of the
2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 4295–4302.
[CrossRef]
158. Zhao, Y.; Bai, L.; Lyu, Y.; Huang, X. Camera-Based Blind Spot Detection with a General Purpose Lightweight Neural Network.
Electronics 2019, 8, 233. [CrossRef]
159. Chang, I.-C.; Chen, W.-R.; Kuo, X.-M.; Song, Y.-J.; Liao, P.-H.; Kuo, C. An Artificial Intelligence-based Proactive Blind Spot
Warning System for Motorcycles. In Proceedings of the 2020 International Symposium on Computer, Consumer and Control
(IS3C), Taichung City, Taiwan, 13–16 November 2020; pp. 404–407. [CrossRef]
160. Naik, A.; Naveen, G.V.V.S.; Satardhan, J.; Chavan, A. LiEBiD—A LIDAR based Early Blind Spot Detection and Warning System for
Traditional Steering Mechanism. In Proceedings of the 2020 International Conference on Smart Electronics and Communication
(ICOSEC), Trichy, India, 10–12 September 2020; pp. 604–609. [CrossRef]
161. Singh, N.; Ji, G. Computer vision assisted, real-time blind spot detection based collision warning system for two-wheelers.
In Proceedings of the 2021 5th International Conference on Electronics, Communication and Aerospace Technology (ICECA),
Coimbatore, India, 2–4 December 2021; pp. 1179–1184. [CrossRef]
162. Shete, R.G.; Kakade, S.K.; Dhanvijay, M. A Blind-spot Assistance for Forklift using Ultrasonic Sensor. In Proceedings of the 2021
IEEE International Conference on Technology, Research, and Innovation for Betterment of Society (TRIBES), Raipur, India, 17–19
December 2021; pp. 1–4. [CrossRef]
163. Schlegel, K.; Weissig, P.; Protzel, P. A blind-spot-aware optimization-based planner for safe robot navigation. In Proceedings of
the 2021 European Conference on Mobile Robots (ECMR), Bonn, Germany, 31 August–3 September 2021; pp. 1–8. [CrossRef]
164. Kundid, J.; Vranješ, M.; Lukač, Ž.; Popović, M. ADAS algorithm for creating a wider view of the environment with a blind spot
display for the driver. In Proceedings of the 2021 Zooming Innovation in Consumer Technologies Conference (ZINC), Novi Sad,
Serbia, 26–27 May 2021; pp. 219–224. [CrossRef]
165. Sui, S.; Li, T.; Chen, S. A-pillar Blind Spot Display Algorithm Based on Line of Sight. In Proceedings of the 2022 IEEE 5th
International Conference on Computer and Communication Engineering Technology (CCET), Beijing, China, 19–21 August 2022;
pp. 100–105. [CrossRef]
166. Wang, Z.; Jin, Q.; Wu, B. Design of a Vision Blind Spot Detection System Based on Depth Camera. In Proceedings of the 2022 IEEE
Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on
Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech),
Falerna, Italy, 12–15 September 2022; pp. 1–5. [CrossRef]
167. Zhou, J.; Hirano, M.; Yamakawa, Y. High-Speed Recognition of Pedestrians out of Blind Spot with Pre-detection of Potentially
Dangerous Regions. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC),
Macau, China, 8–12 October 2022; pp. 945–950. [CrossRef]
168. Seo, H.; Kim, H.; Lee, K.; Lee, K. Multi-Sensor-Based Blind-Spot Reduction Technology and a Data-Logging Method Using a
Gesture Recognition Algorithm Based on Micro E-Mobility in an IoT Environment. Sensors 2022, 22, 1081. [CrossRef]
169. Muzammel, M.; Yusoff, M.Z.; Saad, M.N.M.; Sheikh, F.; Awais, M.A. Blind-Spot Collision Detection System for Commercial
Vehicles Using Multi Deep CNN Architecture. Sensors 2022, 22, 6088. [CrossRef]
170. Flores, C.; Merdrignac, P.; de Charette, R.; Navas, F.; Milanés, V.; Nashashibi, F. A Cooperative Car-Following/Emergency Braking
System with Prediction-Based Pedestrian Avoidance Capabilities. IEEE Trans. Intell. Transp. Syst. 2019, 20, 1837–1846. [CrossRef]
171. Shin, S.-G.; Ahn, D.-R.; Baek, Y.-S.; Lee, H.-K. Adaptive AEB Control Strategy for Collision Avoidance Including Rear Vehicles. In
Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019;
pp. 2872–2878. [CrossRef]
172. Yang, W.; Zhang, X.; Lei, Q.; Cheng, X. Research on Longitudinal Active Collision Avoidance of Autonomous Emergency Braking
Pedestrian System (AEB-P). Sensors 2019, 19, 4671. [CrossRef] [PubMed]
173. Gao, Y.; Xu, Z.; Zhao, X.; Wang, G.; Yuan, Q. Hardware-in-the-Loop Simulation Platform for Autonomous Vehicle AEB Prototyping
and Validation. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC),
Rhodes, Greece, 20–23 September 2020; pp. 1–6. [CrossRef]
174. Guo, L.; Ge, P.; Sun, D. Variable Time Headway Autonomous Emergency Braking Control Algorithm Based on Model Predictive
Control. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 1794–1798.
[CrossRef]
175. Leyrer, M.L.; Stöckle, C.; Herrmann, S.; Dirndorfer, T.; Utschick, W. An Efficient Approach to Simulation-Based Robust Function
and Sensor Design Applied to an Automatic Emergency Braking System. In Proceedings of the 2020 IEEE Intelligent Vehicles
Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 617–622. [CrossRef]
176. Yu, L.; Wang, R.; Lu, Z. Autonomous Emergency Braking Control Based on Inevitable Collision State for Multiple Collision
Scenarios at Intersection. In Proceedings of the 2021 American Control Conference (ACC), New Orleans, LA, USA, 25–28 May
2021; pp. 148–153. [CrossRef]
177. Izquierdo, A.; Val, L.D.; Villacorta, J.J. Feasibility of Using a MEMS Microphone Array for Pedestrian Detection in an Autonomous
Emergency Braking System. Sensors 2021, 21, 4162. [CrossRef] [PubMed]
178. Jin, X.; Zhang, J.; Wu, Y.; Gao, J. Adaptive AEB control strategy for driverless vehicles in campus scenario. In Proceedings of the
2022 International Conference on Advanced Mechatronic Systems (ICAMechS), Toyama, Japan, 17–20 December 2022; pp. 47–52.
[CrossRef]
179. Mannam, N.P.B.; Rajalakshmi, P. Determination of ADAS AEB Car to Car and Car to Pedestrian Scenarios for Autonomous Vehi-
cles. In Proceedings of the 2022 IEEE Global Conference on Computing, Power and Communication Technologies (GlobConPT),
New Delhi, India, 23–25 September 2022; pp. 1–7. [CrossRef]
180. Guo, J.; Wang, Y.; Yin, X.; Liu, P.; Hou, Z.; Zhao, D. Study on the Control Algorithm of Automatic Emergency Braking System
(AEBS) for Commercial Vehicle Based on Identification of Driving Condition. Machines 2022, 10, 895. [CrossRef]
181. Li, G.; Görges, D. Ecological Adaptive Cruise Control and Energy Management Strategy for Hybrid Electric Vehicles Based on
Heuristic Dynamic Programming. IEEE Trans. Intell. Transp. Syst. 2019, 20, 3526–3535. [CrossRef]
182. Cheng, S.; Li, L.; Mei, M.-M.; Nie, Y.-L.; Zhao, L. Multiple-Objective Adaptive Cruise Control System Integrated with DYC. IEEE
Trans. Veh. Technol. 2019, 68, 4550–4559. [CrossRef]
183. Lunze, J. Adaptive Cruise Control with Guaranteed Collision Avoidance. IEEE Trans. Intell. Transp. Syst. 2019, 20, 1897–1907.
[CrossRef]
184. Woo, H.; Madokoro, H.; Sato, K.; Tamura, Y.; Yamashita, A.; Asama, H. Advanced Adaptive Cruise Control Based on Operation
Characteristic Estimation and Trajectory Prediction. Appl. Sci. 2019, 9, 4875. [CrossRef]
185. Zhang, S.; Zhuan, X. Study on Adaptive Cruise Control Strategy for Battery Electric Vehicle Considering Weight Adjustment.
Symmetry 2019, 11, 1516. [CrossRef]
186. Zhai, C.; Chen, X.; Yan, C.; Liu, Y.; Li, H. Ecological Cooperative Adaptive Cruise Control for a Heterogeneous Platoon of
Heavy-Duty Vehicles with Time Delays. IEEE Access 2020, 8, 146208–146219. [CrossRef]
187. Li, G.; Görges, D. Ecological Adaptive Cruise Control for Vehicles with Step-Gear Transmission Based on Reinforcement Learning.
IEEE Trans. Intell. Transp. Syst. 2020, 21, 4895–4905. [CrossRef]
188. Jia, Y.; Jibrin, R.; Görges, D. Energy-Optimal Adaptive Cruise Control for Electric Vehicles Based on Linear and Nonlinear Model
Predictive Control. IEEE Trans. Veh. Technol. 2020, 69, 14173–14187. [CrossRef]
189. Nie, Z.; Farzaneh, H. Adaptive Cruise Control for Eco-Driving Based on Model Predictive Control Algorithm. Appl. Sci. 2020,
10, 5271. [CrossRef]
190. Guo, L.; Ge, P.; Sun, D.; Qiao, Y. Adaptive Cruise Control Based on Model Predictive Control with Constraints Softening. Appl.
Sci. 2020, 10, 1635. [CrossRef]
191. Liu, Y.; Wang, W.; Hua, X.; Wang, S. Safety Analysis of a Modified Cooperative Adaptive Cruise Control Algorithm Accounting
for Communication Delay. Sustainability 2020, 12, 7568. [CrossRef]
192. Lin, Y.; McPhee, J.; Azad, N.L. Comparison of Deep Reinforcement Learning and Model Predictive Control for Adaptive Cruise
Control. IEEE Trans. Intell. Veh. 2021, 6, 221–231. [CrossRef]
193. Gunter, G.; Gloudemans, D.; Stern, R.E.; McQuade, S.; Bhadani, R.; Bunting, M.; Monache, M.L.D.; Lysecky, R.; Seibold, B.;
Sprinkle, J.; et al. Are Commercially Implemented Adaptive Cruise Control Systems String Stable? IEEE Trans. Intell. Transp. Syst.
2021, 22, 6992–7003. [CrossRef]
194. Sawant, J.; Chaskar, U.; Ginoya, D. Robust Control of Cooperative Adaptive Cruise Control in the Absence of Information About
Preceding Vehicle Acceleration. IEEE Trans. Intell. Transp. Syst. 2021, 22, 5589–5598. [CrossRef]
195. Yang, Z.; Wang, Z.; Yan, M. An Optimization Design of Adaptive Cruise Control System Based on MPC and ADRC. Actuators
2021, 10, 110. [CrossRef]
196. Anselma, P.G. Optimization-Driven Powertrain-Oriented Adaptive Cruise Control to Improve Energy Saving and Passenger
Comfort. Energies 2021, 14, 2897. [CrossRef]
197. Chen, C.; Guo, J.; Guo, C.; Chen, C.; Zhang, Y.; Wang, J. Adaptive Cruise Control for Cut-In Scenarios Based on Model Predictive
Control Algorithm. Appl. Sci. 2021, 11, 5293. [CrossRef]
198. Hu, C.; Wang, J. Trust-Based and Individualizable Adaptive Cruise Control Using Control Barrier Function Approach with
Prescribed Performance. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6974–6984. [CrossRef]
199. Yan, R.; Jiang, R.; Jia, B.; Huang, J.; Yang, D. Hybrid Car-Following Strategy Based on Deep Deterministic Policy Gradient and
Cooperative Adaptive Cruise Control. IEEE Trans. Autom. Sci. Eng. 2022, 19, 2816–2824. [CrossRef]
200. Zhang, Y.; Wu, Z.; Zhang, Y.; Shang, Z.; Wang, P.; Zou, Q.; Zhang, X.; Hu, J. Human-Lead-Platooning Cooperative Adaptive
Cruise Control. IEEE Trans. Intell. Transp. Syst. 2022, 23, 18253–18272. [CrossRef]
201. Boddupalli, S.; Rao, A.S.; Ray, S. Resilient Cooperative Adaptive Cruise Control for Autonomous Vehicles Using Machine
Learning. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15655–15672. [CrossRef]
202. Kamal, M.A.S.; Hashikura, K.; Hayakawa, T.; Yamada, K.; Imura, J.-i. Adaptive Cruise Control with Look-Ahead Anticipation for
Driving on Freeways. Appl. Sci. 2022, 12, 929. [CrossRef]
203. Li, Z.; Deng, Y.; Sun, S. Adaptive Cruise Predictive Control Based on Variable Compass Operator Pigeon-Inspired Optimization.
Electronics 2022, 11, 1377. [CrossRef]
204. Petri, A.-M.; Petreuș, D.M. Adaptive Cruise Control in Electric Vehicles with Field-Oriented Control. Appl. Sci. 2022, 12, 7094.
[CrossRef]
205. Deng, L.; Yang, M.; Hu, B.; Li, T.; Li, H.; Wang, C. Semantic Segmentation-Based Lane-Level Localization Using Around View
Monitoring System. IEEE Sens. J. 2019, 19, 10077–10086. [CrossRef]
206. Rasdi, M.H.F.B.; Hashim, N.N.W.B.N.; Hanizam, S. Around View Monitoring System with Motion Estimation in ADAS Applica-
tion. In Proceedings of the 2019 7th International Conference on Mechatronics Engineering (ICOM), Putrajaya, Malaysia, 30–31
October 2019; pp. 1–5. [CrossRef]
207. Hanizam, S.; Hashim, N.N.W.N.; Abidin, Z.Z.; Zaki, H.F.M.; Rahman, H.A.; Mahamud, N.H. Motion Estimation on Homoge-
nous Surface for Around View Monitoring System. In Proceedings of the 2019 7th International Conference on Mechatronics
Engineering (ICOM), Putrajaya, Malaysia, 30–31 October 2019; pp. 1–6. [CrossRef]
208. Im, G.; Kim, M.; Park, J. Parking Line Based SLAM Approach Using AVM/LiDAR Sensor Fusion for Rapid and Accurate Loop
Closing and Parking Space Detection. Sensors 2019, 19, 4811. [CrossRef]
209. Hsu, C.-M.; Chen, J.-Y. Around View Monitoring-Based Vacant Parking Space Detection and Analysis. Appl. Sci. 2019, 9, 3403.
[CrossRef]
210. Lee, Y.H.; Kim, W.-Y. An Automatic Calibration Method for AVM Cameras. IEEE Access 2020, 8, 192073–192086. [CrossRef]
211. Akita, K.; Hayama, M.; Kyutoku, H.; Ukita, N. AVM Image Quality Enhancement by Synthetic Image Learning for Supervised
Deblurring. In Proceedings of the 2021 17th International Conference on Machine Vision and Applications (MVA), Aichi, Japan,
25–27 July 2021; pp. 1–5. [CrossRef]
212. Lee, J.H.; Lee, D.-W. A Novel AVM Calibration Method Using Unaligned Square Calibration Boards. Sensors 2021, 21, 2265.
[CrossRef] [PubMed]
213. Lee, Y.; Park, M. Around-View-Monitoring-Based Automatic Parking System Using Parking Line Detection. Appl. Sci. 2021,
11, 11905. [CrossRef]
214. Lee, S.; Lee, D.; Kee, S.-C. Deep-Learning-Based Parking Area and Collision Risk Area Detection Using AVM in Autonomous
Parking Situation. Sensors 2022, 22, 1986. [CrossRef]
215. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of
the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361.
[CrossRef]
216. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237.
[CrossRef]
217. Chang, M.-F.; Ramanan, D.; Hays, J.; Lambert, J.; Sangkloy, P.; Singh, J.; Bak, S.; Hartnett, A.; Wang, D.; Carr, P.; et al. Argoverse:
3D Tracking and Forecasting with Rich Maps. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8740–8749. [CrossRef]
218. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A
Multimodal Dataset for Autonomous Driving. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11618–11628. [CrossRef]
219. Lyu, S.; Chang, M.-C.; Du, D.; Wen, L.; Qi, H.; Li, Y.; Wei, Y.; Ke, L.; Hu, T.; Del Coco, M.; et al. UA-DETRAC 2017: Report of
AVSS2017 & IWT4S Challenge on Advanced Traffic Monitoring. In Proceedings of the 2017 14th IEEE International Conference
on Advanced Video and Signal Based Surveillance (AVSS), Lecce, Italy, 29 August–1 September 2017; pp. 1–7. [CrossRef]
220. Wen, L.; Du, D.; Cai, Z.; Lei, Z.; Chang, M.C.; Qi, H.; Lim, J.; Yang, M.H.; Lyu, S. UA-DETRAC: A New Benchmark and Protocol
for Multi-Object Detection and Tracking. Comput. Vis. Image Underst. 2020, 193, 102907. [CrossRef]
221. Goyette, N.; Jodoin, P.-M.; Porikli, F.; Konrad, J.; Ishwar, P. Changedetection.net: A new change detection benchmark dataset. In
Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence,
RI, USA, 16–21 June 2012; pp. 1–8. [CrossRef]
222. Razakarivony, S.; Jurie, F. Vehicle detection in aerial imagery: A small target detection benchmark. J. Vis. Commun. Image Represent.
2015, 26, 2289–2302. [CrossRef]
223. Kenk, M.A.; Hassaballah, M. DAWN: Vehicle detection in adverse weather nature. arXiv 2020, arXiv:2008.05402.
224. Lin, T.Y.; Maire, M.; Belongie, S.; Bourdev, L.; Girshick, R.; Hays, J.; Perona, P.; Zitnick, C.L.; Dollár, P. Microsoft COCO: Common
Objects in Context. In Computer Vision—ECCV 2014 Lecture Notes in Computer Science; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars,
T., Eds.; Springer: Cham, Switzerland, 2014; Volume 8693. [CrossRef]
225. OpenStreetMap contributors. OpenStreetMap Database [PostgreSQL Via API]; OpenStreetMap Foundation: Cambridge, UK, 2023.
226. Li, J.; Sun, W. Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning. arXiv 2020,
arXiv:2003.02437.
227. Song, H.; Liang, H.; Li, H.; Dai, Z.; Yun, X. Vision-based vehicle detection and counting system using deep learning in highway
scenes. Eur. Transp. Res. Rev. 2019, 11, 51. [CrossRef]
228. The Third “Aerospace Cup” National Innovation and Creativity Competition Preliminary Round, Proposition 2, Track 2, Optical
Target Recognition, Preliminary Data Set. Available online: https://www.atrdata.cn/#/customer/match/2cdfe76d-de6c-48f1
-abf9-6e8b7ace1ab8/bd3aac0b-4742-438d-abca-b9a84ca76cb3?questionType=model (accessed on 15 March 2023).
229. Zhang, S.; Benenson, R.; Schiele, B. CityPersons: A Diverse Dataset for Pedestrian Detection. In Proceedings of the 2017 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4457–4465. [CrossRef]
230. Ferryman, J.; Shahrokni, A. PETS2009: Dataset and challenge. In Proceedings of the 2009 Twelfth IEEE International Workshop
on Performance Evaluation of Tracking and Surveillance, Snowbird, UT, USA, 7–12 December 2009; pp. 1–6. [CrossRef]
231. Dollar, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian Detection: An Evaluation of the State of the Art. IEEE Trans. Pattern Anal.
Mach. Intell. 2012, 34, 743–761. [CrossRef] [PubMed]
232. Hwang, S.; Park, J.; Kim, N.; Choi, Y.; Kweon, I.S. Multispectral pedestrian detection: Benchmark dataset and baseline. In
Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June
2015; pp. 1037–1045. [CrossRef]
233. Houben, S.; Stallkamp, J.; Salmen, J.; Schlipsing, M.; Igel, C. Detection of traffic signs in real-world images: The German traffic
sign detection benchmark. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX,
USA, 4–9 August 2013; pp. 1–8.
234. Mathias, M.; Timofte, R.; Benenson, R.; Van Gool, L. Traffic sign recognition—How far are we from the solution? In Proceedings
of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–8.
235. Sivaraman, S.; Trivedi, M.M. A general active-learning framework for on-road vehicle recognition and tracking. IEEE Trans. Intell.
Transp. Syst. 2010, 11, 267–276. [CrossRef]
236. Temel, D.; Kwon, G.; Prabhushankar, M.; AlRegib, G. CURE-TSD: Challenging unreal and real environments for traffic sign
recognition. In Proceedings of the NeurIPS Workshop on Machine Learning for Intelligent Transportation Systems, Long Beach,
CA, USA, 4–9 December 2017; pp. 1–6.
237. Zhu, Z.; Liang, D.; Zhang, S.; Huang, X.; Li, B.; Hu, S. Traffic-Sign Detection and Classification in the Wild. In Proceedings of the
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2110–2118.
[CrossRef]
238. Zhang, J.; Zou, X.; Kuang, L.D.; Wang, J.; Sherratt, R.S.; Yu, X. CCTSDB 2021: A more comprehensive traffic sign detection
benchmark. Hum.-Centric Comput. Inf. Sci. 2022, 12, 23. [CrossRef]
239. Bai, C.; Wu, K.; Wang, D.; Yan, M. A Small Object Detection Research Based on Dynamic Convolution Neural Network. Available
online: https://assets.researchsquare.com/files/rs-1116930/v1_covered.pdf?c=1639594752 (accessed on 14 August 2023).
240. Pan, X.; Shi, J.; Luo, P.; Wang, X.; Tang, X. Spatial as deep: Spatial CNN for traffic scene understanding. In Proceedings of the
AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [CrossRef]
241. TuSimple Benchmark. Available online: https://github.com/TuSimple/tusimple-benchmark (accessed on 1 January 2021).
242. Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. BDD100K: A Diverse Driving Dataset
for Heterogeneous Multitask Learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2633–2642. [CrossRef]
243. Mvirgo. Mvirgo/MLND-Capstone: Lane Detection with Deep Learning—My Capstone Project for Udacity’s ML Nanodegree.
GitHub. Available online: https://github.com/mvirgo/MLND-Capstone (accessed on 12 July 2022).
244. Bosch Automated Driving, Unsupervised Llamas Lane Marker Dataset. 2020. Available online: https://unsupervised-llamas.
com/llamas/ (accessed on 2 April 2023).
245. Passos, B.T.; Cassaniga, M.; Fernandes, A.M.R.; Medeiros, K.B.; Comunello, E. Cracks and Potholes in Road Images. Mendeley
Data, V4. 2020. Available online: https://data.mendeley.com/datasets/t576ydh9v8/4 (accessed on 13 August 2023).
246. Waymo LLC. Waymo Open Dataset. Available online: https://waymo.com/open (accessed on 29 July 2023).
247. Ess, A.; Leibe, B.; Van Gool, L. Depth and Appearance for Mobile Scene Analysis. In Proceedings of the 2007 IEEE 11th
International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–20 October 2007; pp. 1–8. [CrossRef]
248. Yen-Zhang, H. Building Traffic Signs Opens the Dataset in Taiwan and Verifies It by Convolutional Neural Network. Ph.D. Thesis,
National Taichung University of Science and Technology, Taichung, Taiwan, 2018.
249. Zhao, Z.-Q.; Zheng, P.; Xu, S.-T.; Wu, X. Object Detection with Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst.
2019, 30, 3212–3232. [CrossRef]
250. Khan, M.Q.; Lee, S. A Comprehensive Survey of Driving Monitoring and Assistance Systems. Sensors 2019, 19, 2574. [CrossRef]
251. Haq, Q.M.U.; Haq, M.A.; Ruan, S.-J.; Liang, P.-J.; Gao, D.-Q. 3D Object Detection Based on Proposal Generation Network Utilizing
Monocular Images. IEEE Consum. Electron. Mag. 2022, 11, 47–53. [CrossRef]
252. Haq, M.A.; Ruan, S.-J.; Shao, M.-E.; Haq, Q.M.U.; Liang, P.-J.; Gao, D.-Q. One Stage Monocular 3D Object Detection Utilizing
Discrete Depth and Orientation Representation. IEEE Trans. Intell. Transp. Syst. 2022, 23, 21630–21640. [CrossRef]
253. Faisal, M.M.; Mohammed, M.S.; Abduljabar, A.M.; Abdulhussain, S.H.; Mahmmod, B.M.; Khan, W.; Hussain, A. Object Detection
and Distance Measurement Using AI. In Proceedings of the 2021 14th International Conference on Developments in eSystems
Engineering (DeSE), Sharjah, United Arab Emirates, 7–10 December 2021; pp. 559–565. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
