Review
Object Detection, Recognition, and Tracking Algorithms for
ADASs—A Study on Recent Trends
Vinay Malligere Shivanna 1, * and Jiun-In Guo 1,2,3
1 Department of Electrical Engineering, Institute of Electronics, National Yang-Ming Chiao Tung University,
Hsinchu City 30010, Taiwan; jiguo@nycu.edu.tw
2 Pervasive Artificial Intelligence Research (PAIR) Labs, National Yang Ming Chiao Tung University,
Hsinchu City 30010, Taiwan
3 eNeural Technologies Inc., Hsinchu City 30010, Taiwan
* Correspondence: vinay.ms23@gmail.com
Abstract: Advanced driver assistance systems (ADASs) are becoming increasingly common in modern-day vehicles, as they not only improve safety and reduce accidents but also aid in smoother and easier driving. ADASs rely on a variety of sensors such as cameras, radars, lidars, and a combination of sensors, to perceive their surroundings and identify and track objects on the road. The key components of ADASs are object detection, recognition, and tracking algorithms that allow vehicles to identify and track other objects on the road, such as other vehicles, pedestrians, cyclists, obstacles, traffic signs, traffic lights, etc. This information is then used to warn the driver of potential hazards or used by the ADAS itself to take corrective actions to avoid an accident. This paper provides a review of prominent state-of-the-art object detection, recognition, and tracking algorithms used in different functionalities of ADASs. The paper begins by introducing the history and fundamentals of ADASs followed by reviewing recent trends in various ADAS algorithms and their functionalities, along with the datasets employed. The paper concludes by discussing the future of object detection, recognition, and tracking algorithms for ADASs. The paper also discusses the need for more research on object detection, recognition, and tracking in challenging environments, such as those with low visibility or high traffic density.

Keywords: object detection; object tracking; advanced driver assistance system (ADAS); deep learning
numerous ADASs, with new functionalities being introduced every other day and becoming increasingly prevalent in modern vehicles, as they offer a variety of safety features that aid in preventing accidents, relying on the aforementioned variety of sensors that have made the ADAS a potential system with which to significantly reduce the number of traffic accidents and fatalities. A study by the Insurance Institute for Highway Safety [3] found that different uses of ADASs can reduce the risk of a fatal crash by up to 20–25%. Therefore, ADASs are becoming increasingly common in cars. In 2021, 33% of new cars sold in the United States had ADAS features. This number is expected to grow to 50% by 2030, as ADASs are expected to play a major role in the future of transportation [4]. By helping to prevent accidents and collisions, reducing drivers' fatigue and stress [5,6], improving fuel efficiency [7,8], making parking easier and more convenient [9], and thereby providing peace of mind to drivers and passengers [5,6], ADASs can save lives and make our roads safer.

Additionally, various features of ADASs, as shown in Figure 1, are a crucial part of the development of autonomous driving; in other words, self-driving cars, as autonomous vehicles, rely on the performance and efficiency of ADASs to detect objects and conditions in their surroundings in real-world scenarios. Self-driving cars use a combination of ADASs and artificial intelligence to drive themselves. Therefore, ADASs are continuing to play an important role in the development of autonomous driving as the technology matures.
a. Image processing is the process of manipulating digital images to improve their quality
or extract useful information from them. Image processing techniques are commonly
used in ADASs for object detection, recognition, and tracking tasks;
b. Object detection is the task of identifying and locating objects in a scene, such as
vehicles, pedestrians, traffic signs, and other objects that could pose a hazard to
the driver;
c. Object tracking involves following the movement of vehicles, pedestrians, and other
objects over time to predict their future trajectories;
d. Image segmentation is the task of dividing an image into different regions, each of
which corresponds to a different object or part of an object such as the bumper, hood,
and wheels and other objects such as pedestrians, traffic signs, lanes, forward objects,
and so on;
e. Feature extraction is the extraction of features like shape, size, color, and so on from an
image or a video; these features are used to identify objects or track their movements.
f. Classification is the task of assigning a label such as vehicles, pedestrians, traffic signs,
or others to an object or several images to categorize the objects;
g. Recognition is the task of identifying an object or a region in an image by its name or
other attributes.
automatically adjust the speed of the vehicle to maintain a safe distance from the
vehicle in front of it, which can help to reduce fuel consumption;
d. Providing information about the road environment: ADASs can provide drivers with
more information about the road environment, such as the speed of other vehicles, the
distance to the nearest object, traffic signs, and the presence of pedestrians or cyclists.
This information can help drivers to make better decisions about how to drive and
can help to reduce the risk of accidents;
e. Assisting drivers with difficult driving tasks: ADASs can assist drivers with difficult
driving tasks, such as parking, merging onto a highway, and driving in bad weather
conditions, thereby reducing driver workload and enabling safer driving;
f. Ensuring a comfortable and enjoyable driving experience: ADASs can provide a more
comfortable and enjoyable driving experience by reducing stress and fatigue that
drivers experience which can be achieved by automating some of the tasks involved in
driving, such as maintaining a constant speed and avoiding sudden changes in speed.
The ADAS algorithms are designed to achieve these objectives by using sensors,
such as cameras, radar, lidar, and now a combination of these, to collect data about the
road environment. The data thus obtained are processed by the algorithms as per their
design to identify and track objects, predict the future movement of objects, and warn the
driver of potential hazards. These ADAS algorithms are constantly being improved as
new technologies are being developed. Continuous and consistent advancements in these
technologies are making ADASs even more capable of improving road safety and reducing
drivers’ workloads.
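As a rough illustration of the detection step in the pipeline described above, the following minimal sketch runs a generic COCO-pretrained detector from torchvision on a single image and keeps only traffic-relevant classes. This is only an illustrative baseline, not one of the ADAS-specific models reviewed later; the image path and the 0.5 confidence threshold are placeholder assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO class indices relevant to driving scenes (torchvision's COCO mapping).
ROAD_CLASSES = {1: "person", 2: "bicycle", 3: "car", 4: "motorcycle",
                6: "bus", 8: "truck", 10: "traffic light", 13: "stop sign"}

def detect_road_objects(image_path: str, score_threshold: float = 0.5):
    """Run a generic pretrained detector and keep only traffic-relevant classes."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]  # dict with boxes, labels, scores for one image
    results = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        cls = ROAD_CLASSES.get(int(label))
        if cls is not None and float(score) >= score_threshold:
            results.append((cls, float(score), [round(v, 1) for v in box.tolist()]))
    return results

if __name__ == "__main__":
    # "dashcam_frame.jpg" is a hypothetical input file.
    for cls, score, box in detect_road_objects("dashcam_frame.jpg"):
        print(f"{cls:13s} score={score:.2f} box={box}")
```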
and maintain. Cameras are used in almost all ADAS functions, while radars and lidars
are used in FCWS, LDWS, BSD, and ACC, with lidars having an additional application in
autonomous driving.
All the above features allow these versatile sensors to be used for a variety of object
detection, recognition, and tracking tasks in ADASs. However, some challenges need to be
addressed before they can be used effectively in all conditions. Hence, some researchers
have attempted to use a combination of these sensors, as discussed in the following section.
Table 1. Summary of the advantages and disadvantages of each sensor and combinations used in
ADAS applications.
Table 1. Cont.
Camera–Radar Fusion. Advantages: (i) combines the strengths of cameras and lidar sensors; (ii) can be used in challenging weather conditions. Disadvantages: (i) more expensive than using a single sensor; (ii) can be complex to implement.
4. Discussion—Methodology
4.1. Vehicle Detection
Vehicle detection, one of the key components and a critical task of ADASs, is the
process of identifying and locating vehicles in the surrounding scenes using sensors such
as cameras, radars, and lidar employing computer vision techniques. This information is
used to provide drivers with warnings about potential hazards, such as cars that are too
close or that are changing lanes and pedestrians or cyclists that might be in the vehicles’
way. It is a crucial function for many ADAS features, such as ACC, LDWS, FCWS, and BSD,
discussed in the later sections of the paper.
Vehicle detection is a challenging task, as vehicles vary in size, shape, and color, which affects their appearance in images and videos. They can be seen from a variety of different angles, which also affects their appearance; furthermore, vehicles can appear very small or very large in the frame, and they can be partially or fully occluded by other objects in the scene. There are different types of vehicles, each with a unique appearance, and the lighting conditions and possible background clutter also affect the appearance of vehicles. All of these factors make detection challenging.
Despite these challenges, the vehicle detection algorithm in ADASs has greatly evolved
and is still evolving, and there have been significant advances in vehicle detection over the
years. Early algorithms were based on relatively simple-to-implement image processing
techniques, such as edge detection and color segmentation, but they were not very accurate.
In the early 2000s, there was a shift towards using ML techniques that can learn from
data, making them more accurate than simple image processing techniques. Some of the
most common ML algorithms used for vehicle detection include support vector machines
(SVMs), random forests, and DL NNs.
Deep learning NNs are currently the most effective machine learning algorithms for vehicle detection, as they can learn complex features from data, which makes them very accurate. However, DL NNs are also more computationally expensive than other ML algorithms. In recent years, there has been a trend towards using sensor fusion for vehicle detection.
The vehicle detection algorithms in ADASs are still evolving. As sensor technology
continues to improve, and as ML algorithms become more powerful, vehicle detection
algorithms will become even more accurate and reliable.
Figure 2. Search queries for each of the databases for vehicle detection. The databases include IEEE Xplore and MDPI.
Since the evolution of vehicle detection has been rapid, considering the detection, recognition, and tracking of other vehicles, pedestrians, and objects, plenty of different methods have been proposed in the past few years. Some of the recent prominent state-of-the-art vehicle detection methods are discussed in the following sections.
Ref. [33] presents a scale-insensitive CNN, SINet, which is designed for rapid and accurate vehicle detection. SINet employs two lightweight techniques: context-aware RoI pooling and multi-branch decision networks. These preserve small-scale object information and enhance classification accuracy. Ref. [34] introduces an integrated approach to monocular 3D vehicle detection and tracking. It utilizes a CNN for vehicle detection and employs a Kalman filter-based tracker for temporal continuity. The method incorporates multi-task learning, 3D proposal generation, and Kalman filter-based tracking. Combining radar and vision sensors, ref. [35] proposes a novel distant vehicle detection approach.
Radar generates candidate bounding boxes for distant vehicles, which are classified using
vision-based methods, ensuring accurate detection and localization. Ref. [36] focuses on
multi-vehicle tracking, utilizing object detection and viewpoint estimation sensors. The
CNN detects vehicles, while viewpoint estimation enhances tracking accuracy. Ref. [37]
utilizes CNN with feature concatenation for urban vehicle detection, improving robustness
through layer-wise feature combination. Ref. [38] presents a robust vehicle detection and
counting method integrating CNN and optical flow, while [39] pioneers vehicle detec-
tion and classification via distributed fiber optic acoustic sensing. Ref. [40] introduces
a vehicle tracking and speed estimation method using roadside lidar, incorporating a
Kalman filter. Ref. [41] modifies Tiny-YOLOv3 for front vehicle detection with SPP-Net
enhancement, excelling in challenging conditions. Ref. [42] proposes an Extended Kalman
Filter (EKF) for vehicle tracking using radar and lidar data, while [43] enhances SSD for
accurate front vehicle detection. Ref. [44] improves Faster RCNN for oriented vehicle
detection in aerial images with feature amplification and oversampling. Ref. [45] employs
reinforcement learning with partial vehicle detection for efficient intelligent traffic signal
control. Ref. [46] presents a robust DL framework for vehicle detection in adverse weather
conditions. Ref. [47] adopts GAN-based image style transfer for nighttime vehicle detection,
while ref. [48] introduces MultEYE for real-time vehicle detection and tracking using UAV
imagery. Ref. [49] analyzes traffic patterns during COVID-19 using Planet remote-sensing
satellite images for vehicle detection. Ref. [50] proposes one-stage anchor-free 3D vehicle
detection from lidar, ref. [51] fuses RGB-infrared images for accurate vehicle detection
using uncertainty-aware learning. Ref. [52] optimizes YOLOv4 for improved vehicle de-
tection and classification. Ref. [53] introduces a real-time foveal classifier-based system
for nighttime vehicle detection. Ref. [54] combines YOLOv4 and SPP-Net for multi-scale
vehicle detection in varying weather. Ref. [55] efficiently detects moving vehicles with
a CNN-based method incorporating background subtraction. Ref. [56] refines YOLOv5
for vehicle detection in aerial infrared images, ensuring robustness against challenges like
occlusion and low contrast.
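Several of the trackers above, such as those in [34,40,42], rely on Kalman filtering for temporal continuity. The following is a minimal constant-velocity Kalman filter sketch in Python/NumPy; the noise covariances and the synthetic measurements are illustrative assumptions, and this is not the filter used in any specific cited paper.

```python
import numpy as np

class ConstantVelocityKF:
    """Kalman filter with state [x, y, vx, vy] and position-only measurements."""

    def __init__(self, dt: float = 0.1):
        self.x = np.zeros(4)                # state estimate
        self.P = np.eye(4) * 500.0          # state covariance (large initial uncertainty)
        self.F = np.array([[1, 0, dt, 0],   # constant-velocity motion model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],    # only position is observed
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01           # process noise (assumed)
        self.R = np.eye(2) * 1.0            # measurement noise (assumed)

    def predict(self) -> np.ndarray:
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z: np.ndarray) -> np.ndarray:
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

if __name__ == "__main__":
    kf = ConstantVelocityKF(dt=0.1)
    # Noisy position measurements of a vehicle moving diagonally.
    for t in range(5):
        measured = np.array([2.0 * t + np.random.randn() * 0.3,
                             1.0 * t + np.random.randn() * 0.3])
        kf.predict()
        est = kf.update(measured)
        print(f"t={t}: estimated position ({est[0]:.2f}, {est[1]:.2f})")
```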
Overall, the aforementioned papers represent a diverse set of approaches to vehicle
detection and tracking. Each paper has its strengths and weaknesses, and it is important
to consider the specific application when choosing a method. However, all of the papers
represent significant advances in the field of vehicle detection and tracking. The list of
reviewed papers on vehicle detection is summarized in Table 2.
Table 2. Chosen publications regarding vehicle detection, their source title, and their number of
citations.
Ref. [58] introduces a novel approach to pedestrian detection, emphasizing high-level semantic features instead of traditional low-level features. This method employs context-aware RoI pooling and a multi-branch decision network to preserve small-scale object details and enhance classification accuracy. The CNN initially captures high-level semantic features from images, which are then used to train a classifier to distinguish pedestrians from other objects. Ref. [59] proposes an adaptive non-maximum suppression (NMS) technique tailored for refining pedestrian detection in crowded scenarios. Conventional NMS algorithms often eliminate valid detections along with duplicates in crowded scenes. The new 'Adaptive NMS' algorithm dynamically adjusts the NMS threshold based on crowd density, enabling the retention of more pedestrian candidates in congested areas. Ref. [60] introduces the 'Mask-Guided Attention Network' (MGAN) for detecting occluded pedestrians. Utilizing a CNN, MGAN extracts features from both pedestrians and backgrounds. Pedestrian features guide the network's focus towards occluded regions, improving the accuracy of detecting occluded pedestrians. Ref. [61] presents a real-time method to track
pedestrians by utilizing camera and lidar sensors in a moving vehicle. Combining sensor
features enables accurate pedestrian tracking. Features from the camera image, such as
silhouette, clothing, and gait, are extracted. Additionally, features like height, width, and
depth are obtained from the lidar point cloud. These details facilitate precise tracking of
pedestrians’ locations and poses over time. A Kalman filter enhances tracking performance
through sensor data fusion, offering better insights into pedestrian behavior in dynamic
environments. Ref. [62] proposes a computationally efficient single-template matching
technique for accurate pedestrian detection in lidar point clouds. The method creates a
pedestrian template from training data and uses it to identify pedestrians in new point
clouds, even under partial occlusion. Ref. [63] focuses on tracking pedestrian flow and
statistics using a monocular camera and a CNN–Kalman filter fusion. The CNN extracts
features from the camera image, which is followed by a Kalman filter for trajectory estima-
tion. This approach effectively tracks pedestrian flow and vital statistics, including count,
speed, and direction.
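The key idea in [59] is to relax suppression where the scene is crowded. The sketch below implements a simplified greedy NMS whose overlap threshold is raised for boxes flagged as lying in dense regions; the per-box density values and thresholds here are placeholders supplied by the caller, not the learned density estimator used in the cited paper.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def adaptive_nms(boxes: List[Box], scores: List[float], densities: List[float],
                 base_thresh: float = 0.5) -> List[int]:
    """Greedy NMS whose suppression threshold grows with local crowd density.

    densities[i] is assumed to lie in [0, 1]; in dense regions the effective
    threshold becomes max(base_thresh, density), so heavily overlapping
    pedestrians are less likely to be suppressed.
    """
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep: List[int] = []
    while order:
        i = order.pop(0)
        keep.append(i)
        thresh = max(base_thresh, densities[i])   # the adaptive part
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep

if __name__ == "__main__":
    boxes = [(10, 10, 50, 100), (20, 10, 60, 100), (200, 20, 240, 110)]
    scores = [0.95, 0.90, 0.80]
    densities = [0.7, 0.7, 0.1]   # the first two boxes sit in a crowded region
    # With base_thresh=0.5 the second box would be suppressed; the high density keeps it.
    print("kept indices:", adaptive_nms(boxes, scores, densities))
```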
Ref. [64] addresses hazy weather pedestrian detection with deep learning. DL mod-
els are trained on hazy weather datasets and use architectural modifications to handle
challenging conditions. This approach achieves high pedestrian detection accuracy, even
in hazy weather. Ref. [65] introduces the ‘NMS by Representative Region’ algorithm to
refine pedestrian detection in crowded scenes. By employing representative regions, this
method enhances crowded scene handling by comparing these regions and removing
duplicate detections, resulting in reduced false positives. Ref. [66] proposes a graininess-
aware deep feature learning approach, equipping DL models to handle grainy images.
A DL model is trained using a graininess-aware loss function on a dataset containing
grainy and non-grainy pedestrian images. This model effectively detects pedestrians in
new images, even when they are grainy. Ref. [67] presents a DL framework for real-time
vehicle and pedestrian detection on rural roads, optimized for embedded GPUs. Modified
Faster R-CNN detects both vehicles and pedestrians simultaneously in rural road scenes.
A new rural road image dataset is developed for training the model. Ref. [68] addresses
infrared pedestrian detection at night using an attention-guided encoder–decoder CNN.
Attention mechanisms focus on relevant regions in infrared images, enhancing detection
accuracy in low-light conditions. Ref. [69] focuses on improved YOLOv3-based pedestrian
detection in complex scenarios, incorporating modifications to handle various challenges
like occlusions, lighting variations, and crowded environments.
Ref. [70] introduces Ratio-and-Scale-Aware YOLO (RASYOLO), handling pedestrians
with varying sizes and occlusions through ratio-aware anchors and scale-aware feature
fusion. Ref. [71] introduces Track Management and Occlusion Handling (TMOH), manag-
ing occlusions and multiple-pedestrian tracking through track suspension and resumption.
Ref. [72] incorporates a Part-Aware Multi-Scale fully convolutional network (PAM-FCN)
to enhance pedestrian detection accuracy by considering pedestrian body part informa-
tion and addressing scale variation. Ref. [73] proposes Attention Fusion for One-Stage
Multispectral Pedestrian Detection (AFOS-MSPD), combining attention fusion and a one-
stage approach for multispectral pedestrian detection, improving efficiency and accuracy.
Ref. [74] utilizes multispectral images for Multispectral Pedestrian Detection (MSPD), im-
proving detection using a DNN designed for multispectral data. Ref. [75] presents Robust
Pedestrian Detection Based on Multi-Spectral Image Fusion and Convolutional Neural
Networks (RPOD-FCN), utilizing multi-spectral image fusion and a CNN-based model for
accurate detection.
Ref. [76] introduces Uncertainty-Guided Cross-Modal Learning for Robust Multispec-
tral Pedestrian Detection (UCM-RMPD), addressing multispectral detection challenges
using uncertainty-guided cross-modal learning. Ref. [77] focuses on multimodal pedestrian
detection for autonomous driving using a Spatio-Contextual Deep Network-Based Multi-
modal Pedestrian Detection (SCDN-PMD) approach. Ref. [78] proposes a Novel Approach
to Model-Based Pedestrian Tracking Using Automotive Radar (NMPT radar), utilizing
radar data for model-based pedestrian tracking. Ref. [79] adopts YOLOv4 Architecture
for Low-Latency Multispectral Pedestrian Detection in Autonomous Driving (AYOLOv4),
enhancing detection accuracy using multispectral images. Ref. [80] introduces modifica-
tions to [79] called AIR-YOLOv3, an improved network-pruned YOLOv3 for aerial infrared
pedestrian detection, enhancing robustness and efficiency. Ref. [81] presents YOLOv5-
AC, an attention mechanism-based lightweight YOLOv5 variant for efficient pedestrian
detection on embedded devices. The list of reviewed papers on pedestrian detection is
summarized in Table 3.
Table 3. Chosen publications regarding pedestrian detection, their source title, and their number of
citations.
Figure 4. Search queries for each of the databases for traffic sign detection. The databases include IEEE Xplore and MDPI.
Yuan et al. [82] introduce VSSA-NET, a novel architecture for traffic sign detection (TSD), which employs a vertical spatial sequence attention network to improve accuracy in complex scenes. VSSA-NET extracts features via CNN, followed by a vertical spatial sequence attention module to emphasize vertical locations crucial for TSD. The detection module outputs traffic sign bounding boxes. Li and Wang [83] present real-time traffic sign recognition using efficient CNNs, addressing diverse lighting and environmental conditions. MobileNet extracts features from input images, followed by SVM classification. Liu et al. [84] propose a multi-scale region-based CNN (MR-CNN) for recognizing small traffic signs. MR-CNN extracts multi-scale features using CNN, generates proposals with RPN, and uses Fast R-CNN for classification and bounding box outputs. Tian et al. [85] introduce a multi-scale recurrent attention network for TSD. CNN extracts multi-scale features, the recurrent attention module prioritizes scale, and the detection module outputs bounding boxes for robust detection across scenarios. Cao et al. [86] present improved TSDR for intelligent vehicles. CNN performs feature extraction, RPN generates region proposals, and SVM classifies proposals, enhancing reliability in dynamic road environments. Shao et al. [87] improve Faster R-CNN TSD with a second RoI and HPRPN. CNN performs feature extraction, RPN generates region proposals, and the second RoI refines proposals, enhancing accuracy in complex scenarios.

Zhang et al. [88] propose cascaded R-CNN with multiscale attention for TSD. RPN generates proposals, Fast R-CNN classifies, and multiscale attention improves detection performance, particularly when there is an imbalanced data distribution. Tabernik and Skočaj [89] explore the DL framework for large-scale TSDR. CNN performs feature extraction, RPN generates region proposals, and Fast R-CNN classifies, exploring DL's potential in handling diverse real-world scenarios. Kamal et al. [90] introduce automatic TSDR using SegU-Net and modified Tversky loss. SegU-Net segments traffic signs and the modified loss function enhances detection and recognition, handling appearance variations. Tai et al. [91] propose a DL approach for TSR with spatial pyramid pooling and scale analysis. CNN performs feature extraction, while spatial pyramid pooling captures context and scales, enhancing recognition across scenarios. Dewi et al. [92] evaluate the spatial pyramid pooling technique on CNN for TSR system robustness. Assessing pooling sizes and strategies, they evaluate different CNN architectures for effective traffic sign recognition. Nartey et al. [93] propose robust semi-supervised TSR with self-training and weakly supervised learning. CNN performs feature extraction, self-training labels unlabeled data, and weakly supervised learning classifies labeled data, enhancing accuracy using limited labeled data.

Dewi et al. [94] leverage YOLOv4 with synthetic GAN-generated data for advanced TSR. YOLOv4 with synthetic data from BigGAN achieves top performance, enhancing detection on the GTSDB dataset. Wang et al. [95] improve YOLOv4-Tiny TSR with new features and classification modules. New data augmentation improves the performance on the GTSDB dataset, optimizing recognition while maintaining efficiency. Cao et al. [96] present improved sparse R-CNN for TSD with a new RPN and loss function, enhancing detection accuracy using advanced techniques within the sparse R-CNN framework. Lopez-Montiel et al. [97] propose DL-based embedded system evaluation and synthetic data generation for TSD. Methods to assess DL system performance and efficiency for real-time TSD applications are developed. Zhou et al. [98] introduce a learning region-based attention network for TSR. The attention module emphasizes important image regions, potentially enhancing recognition accuracy. Koh et al. [99] evaluate senior adults' TSR recognition through EEG signals, utilizing EEG signals to gain unique insights into senior individuals' traffic sign perception.
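Most of the recognition pipelines above end in a CNN classifier applied to cropped sign regions. The sketch below defines a deliberately small PyTorch classifier for 43 sign classes (the GTSRB convention); the architecture, input size, and class count are illustrative assumptions and do not reproduce any specific model reviewed here.

```python
import torch
import torch.nn as nn

class SmallSignClassifier(nn.Module):
    """Tiny CNN that maps a 3x32x32 cropped sign image to one of num_classes labels."""

    def __init__(self, num_classes: int = 43):  # 43 classes, as in GTSRB
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64x8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SmallSignClassifier()
    crops = torch.randn(4, 3, 32, 32)   # a batch of 4 cropped sign candidates
    logits = model(crops)               # shape: (4, 43)
    print("predicted class indices:", logits.argmax(dim=1).tolist())
```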
Ahmed et al. [100] present a weather-adaptive DL framework for robust TSR. A cas-
caded detector with a weather classifier improves TSD performance in adverse conditions,
enhancing road safety. Xie et al. [101] explore efficient federated learning in TSR with spike
Table 4. Chosen publications, source title, and the number of citations for traffic signs detection.
approach, which combines data from multiple sensors, such as cameras, radar, and eye
tracking sensors.
DMSs can use a variety of sensors to monitor the driver, including:
a. Facial recognition. This is the most common type of sensor used in DMSs. Facial
recognition systems can track the driver’s face and identify signs of distraction or
drowsiness, such as eye closure, head tilt, and lack of facial expression.
b. A head pose sensor tracks the position of the driver’s head and can identify signs of
distraction or drowsiness, such as looking away from the road or nodding off.
c. An eye gaze sensor tracks the direction of the driver’s eye gaze and can identify signs
of distraction or drowsiness, such as looking at the phone or dashboard.
d. An eye blink rate sensor tracks the driver’s eye blink rate and can identify signs of
drowsiness, such as a decrease in the blink rate.
e. Speech recognition is used in DMSs to detect if the driver is talking on the phone or if
they are not paying attention to the road.
The above sensors are used in DMSs to detect a variety of driver behaviors, such as
(i) when a driver is distracted by looking away from the road, talking on the phone, or
using a mobile device; (ii) when a driver is drowsy, which can be determined by tracking
the driver’s eye movements and eyelid closure; (iii) when a driver is inattentive, which can
be determined by tracking the driver’s head position and eye gaze.
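As an illustration of how the eyelid-closure and blink-rate cues listed above can be aggregated, the sketch below computes a PERCLOS-style score, i.e., the fraction of recent frames in which an eye-openness estimate falls below a threshold, over a sliding window. The window length, thresholds, and simulated input values are arbitrary assumptions; in practice the per-frame openness value would come from a face and eye model.

```python
from collections import deque

class DrowsinessMonitor:
    """Tracks a PERCLOS-like statistic: the fraction of recent frames with closed eyes."""

    def __init__(self, window_size: int = 150, closed_threshold: float = 0.2,
                 alarm_level: float = 0.4):
        self.closed_threshold = closed_threshold   # eye-openness below this counts as "closed"
        self.alarm_level = alarm_level             # PERCLOS above this triggers a warning
        self.history = deque(maxlen=window_size)   # sliding window of closed/open flags

    def update(self, eye_openness: float) -> bool:
        """Add one frame's eye-openness estimate (0 = closed, 1 = fully open).

        Returns True when the driver should be warned.
        """
        self.history.append(eye_openness < self.closed_threshold)
        perclos = sum(self.history) / len(self.history)
        return perclos >= self.alarm_level

if __name__ == "__main__":
    monitor = DrowsinessMonitor(window_size=10, closed_threshold=0.2, alarm_level=0.4)
    # Simulated per-frame eye-openness values: the driver's eyes close for several frames.
    stream = [0.9, 0.8, 0.85, 0.1, 0.05, 0.1, 0.08, 0.9, 0.1, 0.05]
    for frame_idx, openness in enumerate(stream):
        if monitor.update(openness):
            print(f"frame {frame_idx}: drowsiness warning (PERCLOS above threshold)")
```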
When a DMS detects risky driver behavior, it can provide a variety of alerts to the driver, including alerts displayed on the dashboard or windshield, referred to as visual alerts; alerts played through the vehicle's speakers, which are called audio alerts; and haptic alerts, in which alerts are issued through vibrations of the steering wheel or the driver's seat. In some cases, the DMS may also take corrective action, such as applying the brakes or turning off the engine.
4.4.2. Search Terms and Recent Trends in Driver Monitoring System Methods
'Driver monitoring system' and 'driver monitoring and assistance system' are the two prominent search terms used to investigate this topic. The 'OR' operator was used to choose and combine the most relevant and regularly used applicable phrases. That is, the search phrases 'driver monitoring system' and 'driver monitoring and assistance system' were discovered. Figure 5 shows the complete search query for each of the databases. The databases include IEEE Xplore and MDPI.
Figure 5. Search queries for each of the databases for the driver monitoring system. The databases include IEEE Xplore and MDPI.
The papers [106–114] discuss a variety of approaches to DMSs. These include some of the key methods like (i) the powerful technique employing DL, which is used to extract features from images and videos. These are used to identify driver behaviors such as eye closure, head pose, and facial expressions. (ii) A more general approach is using machine learning, which can be used to learn patterns from data. These are used to identify driver behaviors that are not easily captured using traditional methods, such as hand gestures and body language, and (iii) a technique that combines data from multiple sensors, referred to as sensor fusion, to improve the accuracy of DMSs. For instance, a DMS could combine data from a camera, an eye tracker, and a heart rate monitor to provide a more comprehensive assessment of the driver's state.
Y. Zhao et al. [106] propose a novel real-time DMS based on a deep CNN to monitor
drivers’ behavior and detect distractions. It uses video input from an in-car camera and
employs CNNs to analyze the driver’s facial expressions and head movements to assess
their attentiveness. It can detect eye closure, head pose, and facial expressions with high
accuracy. Ref. [107] works towards a DMS that uses machine learning to estimate driver
situational awareness using eye-tracking data. It aims to predict driver attention and
alertness to the road, enhancing road safety. Ref. [108] proposes a lightweight DMS based
on Multi-Task Mobilenets architecture, which efficiently monitors drivers’ behavior and
attention using low computational resources. It can even run on a simple smartphone,
making it suitable for real-time monitoring. Ref. [109] introduces an optimization algorithm
for DMSs using DL. This algorithm improves the accuracy of the DMS by reducing the
number of false positives and ensuring real-time performance.
Ref. [110] proposes a real-time DMS based on visual cues, leveraging facial expressions
and eye movements to assess driver distraction and inattention. It is able to detect driver
behaviors such as eye closure, head pose, and facial expressions using only a camera.
Ref. [111] proposes an intelligent DMS that uses a combination of sensors and ML. It is
capable of providing a comprehensive assessment of the driver’s state, including their
attention level, fatigue, and drowsiness, and provides timely alerts to improve safety.
Ref. [112] proposes a hybrid DMS combining Internet of Things (IoT) and ML techniques
for comprehensive driver monitoring. It collects data from multiple sensors and uses ML to
identify driver behaviors. Ref. [113] focuses on a distracted DMS that uses AI to detect and
prevent risky behaviors on the road. It detects distracted driving behaviors such as texting
and talking on the phone while driving. Ref. [114] proposes a DMS based on a distracted
driving decision algorithm which aims to assess and address potential distractions to
ensure safe driving practices. It predicts whether the driver is distracted or not.
These papers provide a good overview of the current state of the art in DMS and
contribute to the development of advanced DMS technologies, aiming to enhance driver
safety, detect distractions, and improve situational awareness on the roads. They employ
various techniques, including deep learning, IoT, and machine learning, to create efficient
and effective driver monitoring solutions. However, before DMSs can be widely deployed,
there are still some challenges that need to be addressed, such as:
a. Data collection: It is difficult to collect large datasets of driver behavior representative
of the real world, as it is difficult to monitor drivers naturally without disrupting their
driving experience.
b. Algorithm development: Since the driver behaviors can be subtle and vary from
person to person, it is challenging to develop algorithms that can accurately identify
driver behaviors in real time.
c. Cost: DMS demands the use of specialized sensors and software, making them
expensive to implement and maintain.
Additionally, with the development and availability of new sensors, they could be
used to improve the accuracy and performance of DMSs; for example, radar sensors could
be used to track driver head movements and eye gaze. Besides, autonomous vehicles will
not need DMSs in the same way that human-driven vehicles do. However, DMSs could still
be used to monitor the state of the driver in autonomous vehicles and to provide feedback
to the driver if necessary. Despite these challenges, there is a lot of potential for DMSs to
improve road safety and the future of DMSs looks promising. As the technology continues
to develop, DMSs could become an essential safety feature in vehicles, both human-driven
and autonomous. The list of reviewed papers on driver monitoring system is summarized
in Table 5.
Table 5. Chosen publications, source title, and the number of citations referring to the driver
monitoring system.
Figure 6. Search queries for each of the databases for the lane departure warning system. The databases include IEEE Xplore and MDPI.
Lane detection is a critical task in computer vision and autonomous driving systems. This section reviews various lane detection techniques proposed in recent research papers. The reviewed papers cover diverse approaches, including lightweight CNNs, sequential prediction networks, 3D lane detection, and algorithms for intelligent vehicles in complex environments. Existing lane detection algorithms are often not robust to challenging road conditions such as shadows, rain, snow, occlusion, and changing illumination, or to scenarios where lane markings are not visible; they are also limited in their ability to detect multiple lanes and to accurately estimate the 3D position of the lanes.
This research review paper examines recent advancements in lane detection tech-
niques, focusing on the integration of DNNs and sensor fusion methodologies. The review
encompasses papers published between 2019 and 2022, exploring innovative approaches
to improve the robustness, accuracy, and performance of lane detection systems in various
challenging scenarios.
The reviewed papers present various innovative approaches for lane detection in
the context of autonomous driving systems. Lee et al. [116] introduce a self-attention
distillation method to improve the efficiency of lightweight lane detection CNNs without
compromising accuracy. FastDraw [117] addresses the long tail of lane detection using a
sequential prediction network to consider contextual information for better predictions.
3D-LaneNet [118] incorporates depth information from stereo cameras for end-to-end
3D multiple lane detection. Wang et al. [119] propose a data enhancement technique
called Light Conditions Style Transfer for lane detection in low-light conditions, improving
model robustness. Other methods explore techniques such as ridge detectors [120], LSTM
networks [121], and multitask attention networks [122] to enhance lane detection accuracy
in various challenging scenarios. Additionally, some papers integrate multiple sensor
data [123–126] or use specific sensors like radar [127] and light photometry systems [128] to
achieve more robust and accurate lane detection for autonomous vehicles. These research
contributions provide valuable insights into the development of advanced lane detection
systems for safer and more reliable autonomous driving applications.
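For contrast with the DNN-based detectors reviewed above, the sketch below shows a classical Canny-plus-Hough lane-line baseline using OpenCV. The region-of-interest polygon and all thresholds are ad hoc assumptions, and such a baseline exhibits exactly the robustness problems (shadows, occlusion, missing markings) that the learned methods aim to address.

```python
import cv2
import numpy as np

def detect_lane_segments(frame_bgr: np.ndarray) -> np.ndarray:
    """Return Hough line segments (x1, y1, x2, y2) found in the lower part of the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only a trapezoidal region in front of the vehicle (ad hoc ROI).
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (int(0.6 * w), int(0.55 * h)),
                         (int(0.4 * w), int(0.55 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    segments = cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=40, maxLineGap=100)
    return segments if segments is not None else np.empty((0, 1, 4), dtype=np.int32)

if __name__ == "__main__":
    # Synthetic test image: two bright diagonal "lane markings" on a dark road.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    cv2.line(frame, (200, 480), (300, 280), (255, 255, 255), 5)
    cv2.line(frame, (440, 480), (340, 280), (255, 255, 255), 5)
    print("detected segments:", len(detect_lane_segments(frame)))
```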
In their recent research, Lee et al. [116] proposed a novel approach for learning
lightweight lane detection CNNs by applying self-attention distillation. FastDraw [117]
addressed the long tail of lane detection by using a sequential prediction network to better
predict lane markings in challenging conditions. Garnett et al. [118] presented 3D-LaneNet,
an end-to-end method incorporating depth information from stereo cameras for 3D mul-
tiple lane detection. Additionally, Cao et al. [123] tailored a lane detection algorithm for
intelligent vehicles in complex road conditions, enhancing real-world driving reliability.
Kuo et al. [129] optimized image sensor processing techniques for lane detection in vehicle
lane-keeping systems. Lu et al. [120] improved lane detection accuracy using a ridge
detector and regional G-RANSAC. Zou et al. [130] achieved robust lane detection from
continuous driving scenes using deep neural networks. Liu et al. [119] introduced Light
Conditions Style Transfer for lane detection in low-light conditions. Wang et al. [124]
used a map to enhance ego-lane detection in missing feature scenarios. Khan et al. [127]
utilized impulse radio ultra-wideband radar and metal lane reflectors for robust lane detec-
tion in adverse weather conditions. Yang et al. [121] employed long short-term memory
(LSTM) networks for lane position detection. Gao et al. [131] minimized false alarms in lane
departure warnings using an Extreme Learning Residual Network and ϵ-greedy LSTM.
Moreover, ref. [132] proposed a real-time attention-guided DNN-based lane detection
framework and CondLaneNet [133] used conditional convolution for top-to-down lane
detection. Dewangan and Sahu [134] analyzed driving behavior using vision-sensor-based
lane detection. Haris and Glowacz [135] utilized object feature distillation for lane line
detection. Lu et al. [136] combined semantic segmentation and optical flow estimation for
fast and robust lane detection. Suder et al. [128] designed low-complexity lane detection
methods for light photometry systems. Ko et al. [137] combined key points estimation and
point instance segmentation for lane detection. Zheng et al. [138] introduced CLRNet for
lane detection, while Wang et al. [122] proposed a multitask attention network (MAN).
Khan et al. [139] developed LLDNet, a lightweight lane detection approach for autonomous
cars. Chen and Xiang [125] incorporated pre-aligned spatial–temporal attention for lane
mark detection. Nie et al. [126] integrated a camera with dual light sensors to improve
lane-detection performance in autonomous vehicles. These studies collectively present
diverse and effective methodologies, contributing to the advancement of lane-detection
systems in autonomous driving and intelligent vehicle applications. The list of reviewed
papers on lane-departure warning system is summarized in Table 6.
Table 6. Chosen publications, source title, and the number of citations related to a lane-departure
warning system.
Figure 7. Search queries for each of the databases for the lane-departure warning system. The databases include IEEE Xplore and MDPI.
The papers listed discuss the development of FCWSs for autonomous vehicles in recent years. Ref. [141] suggests an autonomous vehicle collision avoidance system that employs predictive occupancy maps to estimate other vehicles' future positions, enabling collision-free motion planning. Ref. [142] introduces a forward collision prediction system using online visual tracking to anticipate potential collisions based on other vehicles' positions. Ref. [143] proposes an FCWS that combines driving intention recognition and V2V communication to predict and warn about potential collisions with front vehicles. Ref. [144] presents an FCWS for autonomous vehicles that deploys a CNN to detect and track nearby vehicles. Ref. [145] introduces a real-time FCW technique involving detection and depth estimation networks to identify nearby vehicles and estimate distances. Ref. [146] proposes a vision-based FCWS merging camera and radar data for real-time multi-vehicle detection, addressing challenging conditions like occlusions and lighting variations. Tang et al. [147] introduce a monocular range estimation system using a single camera for precise FCWS, especially in difficult scenarios. Lim et al. [148] suggest a smartphone-based FCWS for motorcyclists utilizing phone sensors to predict collision risks.

This compilation of research papers demonstrates the extensive efforts in the field of forward-collision warning and avoidance systems, which are crucial for enhancing vehicular safety. Lee and Kum [141] propose a 'Collision Avoidance/Mitigation System' incorporating predictive occupancy maps for autonomous vehicles. Manghat and El-Sharkawy [142] present 'Forward Collision Prediction with Online Visual Tracking', utilizing online visual tracking for collision prediction. Yang, Wan, and Qu [143] introduce 'A Forward Collision Warning System Using Driving Intention Recognition', integrating driving intention recognition and V2V communication. Kumar, Shaw, Maitra, and Karmakar [144] offer 'FCW: A Forward Collision Warning System Using Convolutional Neural Network', deploying CNN for warning generation. Wang and Lin [145] present 'A Real-Time Forward Collision Warning Technique', integrating detection and depth estimation networks for real-time warnings. Lin, Dai, Wu, and Chen [146] introduce a 'Driver Assistance System with Forward Collision and Overtaking Detection'. Tang and Li [147] propose 'End-to-End Monocular Range Estimation' for collision warning. Lim et al. [148] created a 'Forward Collision Warning System for Motorcyclists' using smartphone sensors. Farhat, Rhaiem, Faiedh, and Souani [149] present a 'Cooperative Forward Collision Avoidance System Based on Deep Learning'. Hong and Park [150] propose a 'Lightweight Collaboration of Detecting and Tracking Algorithm' for embedded systems. Albarella et al. [151] present a 'Forward-Collision Warning System for Electric Vehicles', validated both virtually and in real environments. Liu et al. [152] focus on 'Forward Collision on a Curve based on V2X' with a target selection method. Yu and Ai [153] present 'Vehicle Forward Collision Warning based upon Low-Frequency Video Data' using hybrid deep learning. Olou, Ezin, Dembele, and Cambier [154] propose 'FCPNet: A Novel Model to Predict Forward Collision' based on CNN. Pak [155] contributes 'Hybrid Interacting Multiple Model Filtering' to improve radar-based warning reliability. Together, these papers collectively advance the understanding and development of forward collision warning and avoidance systems. The list of reviewed papers on forward-collision warning system is summarized in Table 7.
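Many of the FCW schemes above ultimately compare a time-to-collision or a required-deceleration value against thresholds once a lead vehicle's distance and closing speed are known. The sketch below shows that decision logic in isolation; the 2.0 s TTC threshold and the 0.6 g braking limit are illustrative assumptions, not values taken from the cited systems.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """TTC = gap / closing speed; infinite when the gap is not shrinking."""
    if closing_speed_mps <= 0.0:
        return math.inf
    return gap_m / closing_speed_mps

def required_deceleration(gap_m: float, closing_speed_mps: float) -> float:
    """Constant deceleration (m/s^2) needed to remove the closing speed within the gap."""
    if closing_speed_mps <= 0.0:
        return 0.0
    return closing_speed_mps ** 2 / (2.0 * gap_m)

def forward_collision_warning(gap_m: float, closing_speed_mps: float,
                              ttc_threshold_s: float = 2.0,
                              decel_threshold: float = 0.6 * G) -> bool:
    """Warn when the TTC is short or when comfortable braking would no longer suffice."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    a_req = required_deceleration(gap_m, closing_speed_mps)
    return ttc < ttc_threshold_s or a_req > decel_threshold

if __name__ == "__main__":
    # Ego vehicle closes on a slower lead vehicle: 25 m gap, 15 m/s closing speed.
    gap, closing = 25.0, 15.0
    print("TTC (s):", round(time_to_collision(gap, closing), 2))
    print("required deceleration (m/s^2):", round(required_deceleration(gap, closing), 2))
    print("warn driver:", forward_collision_warning(gap, closing))
```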
Table 7. Chosen publications, source title, and the number of citations related to forward-collision
warning systems.
which can lead to driver desensitization, (iii) it is not a substitute for safe driving practices,
such as using turn signals and checking blind spots before changing lanes.
Overall, BSD systems can be a valuable safety feature, but they are not a guarantee
against accidents. Drivers should still be aware of their surroundings and use safe driving
practices at all times.
Figure 8. Search queries for each of the databases for blind spot detection. The databases include IEEE Xplore and MDPI.
The papers mentioned discuss the development of blind-spot detection systems (BSDSs) for vehicles. BSDSs are designed to alert drivers to vehicles that are in their blind spots, where they cannot be seen in their mirrors.

The Gale Bagi et al. [156] paper discusses a BSDS combining radar and cameras for accurate vehicle detection in blind spots. Radar detects vehicles and cameras identify them. Details about sensors and system architecture are necessary for a comprehensive understanding.

Ref. [157] introduces a probabilistic BSDS estimating blind spot risks using vehicle speed, direction, and driver's blind spot angle. It offers nuanced insights into collision potential, enhancing safe driving.

Zhao et al. [158] propose a promising BSDS using a lightweight NN and cameras for real-time detection. This approach improves detection capabilities with practical design.

Chang et al. [159] present an AI-based BSDS warning for motorcyclists using various sensors, proactively detecting blind spot vehicles and enhancing rider safety. Naik et al. [160] propose a lidar-based early BSDS, creating a 3D map to detect blind-spot vehicles in advance.

The authors of [161] describe a real-time two-wheeler BSDS using computer vision and ultrasonic sensors, confirming blind spot vehicles. Shete et al. [162] suggest a forklift-specific BSDS using ultrasonic sensors to detect blind spot vehicles and warn drivers. Schlegel et al. [163] propose an optimization-based planner for robots, considering blind spots and other vehicles to ensure safe navigation. Kundid et al. [164] introduce an ADAS algorithm creating a wider view to enhance driver awareness, mitigating blind spot issues. Sui et al. [165] propose an A-pillar blind spot display algorithm using cameras to show blind spot information on the A-pillar and side mirrors. Wang et al. [166] present a vision-based BSDS using depth cameras to identify blind spot vehicles in a 3D map. Zhou et al. [167] focus on high-speed pedestrians in blind spots, using cameras and radar to detect pedestrians and pre-detection to avoid collisions. Ref. [168] introduces a multi-sensor BSDS for micro e-mobility vehicles, using cameras, radar, ultrasonic sensors, and gesture recognition for better blind-spot awareness. Ref. [169] suggests a multi-deep CNN-based BSDS for commercial vehicles using cameras, effectively addressing blind-spot challenges.
Overall, these papers present a variety of promising methods for developing BSDS.
The systems proposed in these papers can detect vehicles in a variety of conditions, and
they can be used in a variety of vehicles. The collection of research papers explores a broad
spectrum of approaches to address blind spots in various domains, including robotics,
automotive applications, and micro e-mobility. The focus ranges from sensor technologies
such as cameras, lidar, and ultrasonic sensors to methodologies including AI, probabilistic
estimation, and computer vision, introducing innovative algorithms, technologies, and
architectures to enhance blind-spot detection, awareness, and collision prevention. The
studies emphasize real-time detection, early warning, and proactive risk prediction, all
contributing to enhanced vehicular safety. The common thread among these studies is their
commitment to improving safety by addressing the visibility limitations posed by blind
spots. The list of reviewed papers on blind-spot detection systems is summarized in Table 8.
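Regardless of the sensor used, the final step in most of the BSDSs above is a geometric test of whether a detected object falls inside the ego vehicle's blind-spot zone. The sketch below performs that test in ego-centered coordinates; the zone boundaries are rough placeholder values, not figures taken from the cited systems.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """Position of a detected road user in ego-vehicle coordinates (metres).

    x: longitudinal offset (positive = ahead of the ego vehicle)
    y: lateral offset (positive = left of the ego vehicle)
    """
    x: float
    y: float

def in_blind_spot(obj: TrackedObject,
                  rear_m: float = -6.0, front_m: float = 1.0,
                  near_m: float = 1.0, far_m: float = 3.5) -> bool:
    """True if the object sits in the left or right blind-spot zone alongside the ego vehicle."""
    longitudinal_ok = rear_m <= obj.x <= front_m
    lateral_ok = near_m <= abs(obj.y) <= far_m
    return longitudinal_ok and lateral_ok

if __name__ == "__main__":
    neighbours = [TrackedObject(x=-2.5, y=2.0),   # car overtaking on the left, beside the rear door
                  TrackedObject(x=15.0, y=0.0),   # car far ahead in the same lane
                  TrackedObject(x=-3.0, y=-2.8)]  # motorcycle on the right rear quarter
    for obj in neighbours:
        print(obj, "-> blind spot" if in_blind_spot(obj) else "-> visible")
```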
Table 8. Chosen publications, source title, and the number of citations related to blind-spot detection systems.
SI No. | Ref. | Year | Source Title | Number of Citations
1 | [156] | 2019 | 2019 International Conference on Control, Automation and Information Sciences (ICCAIS) | 3
2 | [157] | 2019 | IEEE Intelligent Transportation Systems Conference (ITSC) | 1
3 | [158] | 2019 | MDPI Electronics | 16
4 | [159] | 2020 | International Symposium on Computer, Consumer, and Control (IS3C) | 1
5 | [160] | 2020 | International Conference on Smart Electronics and Communication (ICOSEC) | -
6 | [161] | 2021 | 5th International Conference on Electronics, Communication and Aerospace Technology (ICECA) | 1
7 | [162] | 2021 | IEEE International Conference on Technology, Research, and Innovation for Betterment of Society (TRIBES) | -
8 | [163] | 2021 | European Conference on Mobile Robots (ECMR) | -
9 | [164] | 2021 | Zooming Innovation in Consumer Technologies Conference (ZINC) | -
10 | [165] | 2022 | IEEE 5th International Conference on Computer and Communication Engineering Technology (CCET) | -
11 | [166] | 2022 | IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech) | -
12 | [167] | 2022 | IEEE 25th International Conference on Intelligent Transportation Systems (ITSC) | -
13 | [168] | 2022 | MDPI Sensors | 2
14 | [169] | 2022 | MDPI Sensors | 1
Figure 9. Search queries for each of the databases for the emergency braking system. The databases include IEEE Xplore and MDPI.
Flores et al. [170] propose a cooperative car-following and emergency braking system using radar, lidar, and cameras to detect and predict vehicle and pedestrian movements. It automatically applies the brakes to prevent collisions while also facilitating vehicle-to-vehicle communication. Shin et al. [171] introduce an adaptive AEB strategy utilizing radar and cameras to detect and calculate braking forces for front and rear vehicle collision avoidance. It considers speed, distance, and vehicle dynamics for effective collision prevention.
Yang et al. [172] have developed an AEB-P system with radar and cameras, using advanced control to determine braking forces for pedestrian collision avoidance, accounting for pedestrian speed, distance, and vehicle dynamics. Gao et al. [173] present a hardware-in-the-loop simulation platform for AEB system testing across various scenarios, ensuring reliability and effectiveness. Guo et al. [174] introduce a variable time headway AEB algorithm using predictive modeling, combining radar and cameras. It adapts time headway for braking by considering speed, distance, and vehicle dynamics.
Leyrer et al. [175] propose a simulation-based robust AEBS design using optimiza-
tion techniques to enhance system performance and reliability. Yu et al. [176] introduce
an AEBC system utilizing radar and cameras, applying control algorithms to prevent
collisions at intersections considering vehicle and pedestrian speed, distance, and dynam-
ics. Izquierdo et al. [177] explore using MEMS microphone arrays for AEBS, improving
pedestrian detection through audio cues in a variety of environments.
Jin et al. [178] present an adaptive AEBC strategy for driverless vehicles in campus
environments, utilizing radar and cameras to prevent collisions by considering vehicle
and pedestrian characteristics and dynamics. Mannam and Rajalakshmi [179] assess AEBS
scenarios for autonomous vehicles using radar and cameras, determining collision interven-
tions based on vehicle and pedestrian detection, speed, and distance. Guo et al. [180] study
AEBS control for commercial vehicles, considering driving conditions alongside radar and
camera-based detection and control algorithms to avoid collisions based on vehicle and
pedestrian dynamics.
These papers all represent significant advances in the field of AEB systems. They
propose new methods for detecting and tracking vehicles, pedestrians, and environmental
features. They also propose new control algorithms for determining the optimal braking
force to apply to avoid a collision. These advances have the potential to make AEB systems
more effective and reliable and to help prevent traffic accidents.
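The cited works do not publish their exact control laws in this survey, but the shared idea of triggering warnings and braking from a time-to-collision (TTC) estimate can be illustrated with a minimal sketch. The thresholds, the 30% brake pre-fill, and the constant-deceleration assumption below are hypothetical values chosen for illustration and are not taken from any of the cited systems.

def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> float:
    """Time to collision in seconds; infinite if the gap is not closing."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0.0:
        return float("inf")
    return gap_m / closing_speed


def aeb_command(gap_m, ego_speed_mps, lead_speed_mps,
                warn_ttc_s=2.5, brake_ttc_s=1.2, max_decel_mps2=8.0):
    """Map a TTC estimate to a (state, requested deceleration) pair."""
    ttc = time_to_collision(gap_m, ego_speed_mps, lead_speed_mps)
    if ttc <= brake_ttc_s:
        # Imminent collision: request full braking from the actuator.
        return "BRAKE", max_decel_mps2
    if ttc <= warn_ttc_s:
        # Early phase: warn the driver and pre-fill the brakes with partial force.
        return "WARN", 0.3 * max_decel_mps2
    return "MONITOR", 0.0


if __name__ == "__main__":
    # Ego at 20 m/s, stationary obstacle 22 m ahead -> TTC = 1.1 s -> full braking.
    print(aeb_command(gap_m=22.0, ego_speed_mps=20.0, lead_speed_mps=0.0))

Real AEB controllers replace the fixed thresholds with models of pedestrian or vehicle dynamics, road friction, and sensor confidence, which is precisely where the reviewed papers differ from one another.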
All the systems discussed were evaluated in a variety of traffic scenarios, and they
were shown to be able to significantly reduce the number of accidents. The reviewed papers
collectively explore a diverse range of topics within the realm of autonomous emergency
braking (AEB) systems for enhanced road safety.
These topics include cooperative car-following, pedestrian avoidance, collision avoid-
ance with rear vehicles, longitudinal active collision avoidance, hardware-in-the-loop simu-
lation, variable time headway control, environmental feature recognition, simulation-based
robust design, inevitable collision state-based control, innovative sensor utilization (MEMS
microphone array), adaptive strategies for specific scenarios, determination of AEB-relevant
scenarios, and specialized AEB algorithms for commercial vehicles. These contributions
highlight the multi-faceted nature of AEB research, spanning advancements in simulation, sensing, control strategies, and contextual optimization and emphasizing safety,
prediction, algorithm optimization, and system validation. As autonomous vehicles con-
tinue to evolve, these papers will collectively contribute to enhancing the effectiveness and
reliability of AEB systems, thereby advancing road safety in modern transportation and
ultimately promoting safer and more reliable autonomous driving experiences. The list of
reviewed papers on emergency braking system is summarized in Table 9.
Table 9. Chosen publications, source title, and the number of citations related to the emergency
braking system.
ACC systems can be either speed-only or full-range systems. Speed-only systems only
adjust the vehicle’s speed, while full-range systems can also brake the vehicle to maintain a
safe following distance. Full-range systems are more advanced, and they are typically more
expensive. ACC systems can be set to a specific speed, or they can be set to follow the speed
of the vehicle ahead. ACC systems can also be set to a maximum following distance, and
the system will not allow the vehicle to get closer than the set distance to the vehicle ahead.
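As a rough illustration of the full-range behaviour described above, the following minimal sketch implements a constant time-headway spacing policy; the gains, headway, standstill gap, and acceleration limits are hypothetical placeholders, not parameters of any production ACC.

from typing import Optional, Tuple


def acc_acceleration(set_speed_mps: float, ego_speed_mps: float,
                     gap_m: Optional[float], lead_speed_mps: Optional[float],
                     headway_s: float = 1.8, standstill_gap_m: float = 5.0,
                     k_gap: float = 0.23, k_speed: float = 0.74,
                     accel_limits_mps2: Tuple[float, float] = (-3.5, 2.0)) -> float:
    """One control step of a constant time-headway ACC (all gains are illustrative)."""
    # Free-road behaviour: track the driver-set speed.
    cruise_accel = k_speed * (set_speed_mps - ego_speed_mps)

    if gap_m is None or lead_speed_mps is None:
        desired = cruise_accel
    else:
        # Desired spacing grows with speed: d* = d0 + T * v_ego.
        desired_gap = standstill_gap_m + headway_s * ego_speed_mps
        follow_accel = (k_gap * (gap_m - desired_gap)
                        + k_speed * (lead_speed_mps - ego_speed_mps))
        # Never accelerate harder than pure speed tracking would.
        desired = min(cruise_accel, follow_accel)

    lo, hi = accel_limits_mps2
    return max(lo, min(hi, desired))


if __name__ == "__main__":
    # Closing in on a slower lead vehicle: the spacing term commands braking.
    print(acc_acceleration(30.0, 28.0, gap_m=35.0, lead_speed_mps=22.0))

The controller simply takes the more conservative of the speed-tracking and gap-keeping commands, which is why a full-range system of this kind can brake all the way down behind a stopping lead vehicle, whereas a speed-only system would not.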
ACC systems are becoming increasingly common in vehicles, as they offer several
safety and convenience benefits such as reducing traffic congestion and improving fuel
efficiency. ACC systems can also help to prevent accidents by reducing the risk of rear-end
collisions. They are especially beneficial for long-distance driving, as they can help to
reduce driver fatigue. The benefits of ACC systems are as follows:
a. Reduced driver fatigue: ACC can help to reduce driver fatigue by taking over the
task of maintaining a safe following distance. This can be especially beneficial for
long-distance driving.
b. Increased safety: ACC can help prevent accidents by automatically adjusting the
vehicle’s speed to maintain a safe following distance.
c. Improved convenience: ACC can make driving more convenient by allowing the
driver to set a cruising speed and then relax.
d. Improved fuel efficiency: ACC systems can help to improve fuel efficiency by allowing
drivers to maintain a constant speed, which can reduce unnecessary acceleration
and braking.
Despite these benefits, ACC systems face numerous challenges, as they are (i) expen-
sive, especially in high-end vehicles, (ii) complex to install and calibrate, which can increase
the cost of ownership, and (iii) unreliable in poor weather conditions, such as rain or snow.
Overall, ACC systems are a valuable safety feature that can help to prevent accidents
and make driving more convenient. However, they are not without their challenges, such
as cost and complexity. As ACC systems become more affordable and reliable, they are
likely to become more widespread in vehicles.
That is, the search phrases ‘adaptive cruise control’, ‘ACC’, and ‘intelligent cruise control’ were discovered. Figure 10 shows the complete search query for each of the databases. The databases include IEEE Xplore and MDPI.
Figure 10. Search queries for each of the databases for the adaptive cruise control system. The databases include IEEE Xplore and MDPI.
G. Li and D. Görges [181] propose an innovative approach combining ecological ACC and energy management for HEVs using heuristic dynamic programming. The algorithm optimizes speed profiles, considering traffic conditions, state of charge, and driver preferences for fuel efficiency and comfort. S. Cheng et al. [182] discuss a multiple-objective ACC with dynamic velocity obstacle (DYC) prediction, optimizing speed, acceleration, safety, comfort, and fuel efficiency by forecasting surrounding vehicle trajectories. J. Lunze [183] introduces an ACC strategy ensuring collision avoidance through predictive control, using a combination of predictive control and MPC to optimize vehicle speed profiles. Woo, H. et al. [184] enhance ACC safety and efficiency through operation characteristic estimation and trajectory prediction. Their work adjusts speed and acceleration considering vehicles’ dynamics and surroundings.
Zhang, S. and Zhuan, X. [185] developed an ACC for BEVs that accounts for weight changes. Weight adjustments based on battery discharge and passenger load are used to ensure safe and comfortable driving. C. Zhai et al. [186] present an ecological CACC strategy for HDVs with time delays using distributed algorithms for platoon coordination, achieving fuel efficiency and ecological benefits. Li and Görges [187] designed an ecological ACC for step-gear transmissions using reinforcement learning. It optimizes fuel efficiency while maintaining safety through learned intelligent control strategies. Jia, Jibrin, and Görges [188] propose an energy-optimal ACC for EVs using linear and nonlinear MPC techniques, minimizing energy consumption based on dynamic driving and traffic conditions. Nie and Farzaneh [189] focus on eco-driving ACC with an MPC algorithm for reduced fuel consumption and emissions while ensuring safety and comfort. Guo, Ge, Sun, and Qiao [190] introduce an MPC-based ACC with relaxed constraints to enhance fuel efficiency while considering speed limits and safety distances for driving comfort.
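The MPC-based controllers in [188–190] solve constrained optimal-control problems; as a toy stand-in, the sketch below performs a brute-force receding-horizon search over a handful of candidate accelerations with hypothetical weights and constraints. It only conveys the overall structure (predict, cost, constrain, apply the first input) and is not a reconstruction of any of the cited formulations, which use proper numerical optimization.

import itertools


def rollout_cost(accels, ego_v, gap, lead_v, dt=0.5,
                 set_speed=25.0, speed_limit=30.0, min_gap=8.0,
                 w_speed=1.0, w_energy=0.2, w_violation=1e3):
    """Simulate one candidate acceleration sequence and return its total cost."""
    cost, v, g = 0.0, ego_v, gap
    for a in accels:
        v = max(0.0, v + a * dt)              # ego speed update
        g += (lead_v - v) * dt                # gap shrinks when ego is faster
        cost += w_speed * (set_speed - v) ** 2 + w_energy * a ** 2
        if v > speed_limit or g < min_gap:    # soft penalty on constraint violation
            cost += w_violation
    return cost


def mpc_acc_step(ego_v, gap, lead_v, horizon=4,
                 accel_set=(-3.0, -1.5, 0.0, 1.0, 2.0)):
    """Receding horizon: evaluate all sequences, apply only the first input of the best."""
    best = min(itertools.product(accel_set, repeat=horizon),
               key=lambda seq: rollout_cost(seq, ego_v, gap, lead_v))
    return best[0]


if __name__ == "__main__":
    # Closing quickly on a slower lead vehicle: the selected first input should brake.
    print(mpc_acc_step(ego_v=27.0, gap=20.0, lead_v=18.0))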
Liu, Wang, Hua, and Wang [191] analyze CACC safety with communication delays using MPC and fuzzy logic to ensure stable and effective CACC operation under real-world communication conditions. Lin et al. [192] compare DRL and MPC for ACC, suggesting a hybrid approach for improved fuel efficiency, comfort, and stability. Gunter et al. [193] investigate the string stability of commercial ACC systems, highlighting potential collision risks in platooning situations and recommending improvements. Sawant et al. [194] present a robust CACC control algorithm using MPC and fuzzy logic to ensure safe operation even with limited data on preceding vehicle acceleration. Yang, Wang, and Yan [195] optimize ACC through a combination of MPC and ADRC, enhancing fuel efficiency and robustness to disturbances. Anselma [196] proposes a powertrain-oriented ACC considering fuel efficiency and passenger comfort using MPC and powertrain modeling.
Chen [197] designed an ACC tailored to cut-in scenarios using MPC for fuel efficiency optimization during lane changes. Hu and Wang [198] introduce a trust-based ACC with individualization using a CBF approach, allowing vehicles to have personalized safety requirements. Yan et al. [199] hybridized DDPG and CACC for optimized traffic flow, leveraging learning-based and cooperative techniques. Zhang et al. [200] created
a human-lead-platooning CACC to integrate human-driven vehicles into platoons. The
author of [201] presents a resilient CACC using ML to enhance robustness and adaptability
to uncertainties and disruptions. Kamal et al. [202] propose an ACC with look-ahead
anticipation for freeway driving, adjusting control inputs based on predicted traffic con-
ditions. Li et al. [203] leverage variable compass operator pigeon-inspired optimization
(VCPO-PIO) for ACC control input optimization. Petri et al. [204] address ACC for EVs
with FOC, considering unique characteristics and energy management needs. The list of
reviewed papers on adaptive cruise control is summarized in Table 10.
Table 10. Chosen publications, source title, and the number of citations related to adaptive cruise
control.
SI No. | Ref.  | Year | Source Title                                            | Number of Citations
1      | [181] | 2019 | IEEE Transactions on Intelligent Transportation Systems | 57
2      | [182] | 2019 | IEEE Transactions on Vehicular Technology               | 54
3      | [183] | 2019 | IEEE Transactions on Intelligent Transportation Systems | 39
4      | [184] | 2019 | MDPI Applied Sciences                                   | 9
5      | [185] | 2019 | MDPI Symmetry                                           | 9
6      | [186] | 2020 | IEEE Access                                             | 39
7      | [187] | 2020 | IEEE Transactions on Intelligent Transportation Systems | 29
8      | [188] | 2020 | IEEE Transactions on Vehicular Technology               | 25
9      | [189] | 2020 | MDPI Applied Sciences                                   | 29
10     | [190] | 2020 | MDPI Applied Sciences                                   | 12
11     | [191] | 2020 | MDPI Sustainability                                     | 11
12     | [192] | 2021 | IEEE Transactions on Intelligent Vehicles               | 69
13     | [193] | 2021 | IEEE Transactions on Intelligent Transportation Systems | 68
14     | [194] | 2021 | IEEE Transactions on Intelligent Transportation Systems | 31
15     | [195] | 2021 | MDPI Actuators                                          | 16
16     | [196] | 2021 | MDPI Energies                                           | 13
17     | [197] | 2021 | MDPI Applied Sciences                                   | 12
18     | [198] | 2022 | IEEE Transactions on Intelligent Transportation Systems | 12
19     | [199] | 2022 | IEEE Transactions on Automation Science and Engineering | 10
20     | [200] | 2022 | IEEE Transactions on Intelligent Transportation Systems | 8
21     | [201] | 2022 | IEEE Transactions on Intelligent Transportation Systems | 8
22     | [202] | 2022 | MDPI Applied Sciences                                   | 5
23     | [203] | 2022 | MDPI Electronics                                        | 1
24     | [204] | 2022 | MDPI Applied Sciences                                   | 1
4.10.1. Search Terms and Recent Trends in Around-View Monitoring
‘Around view monitoring’, ‘AVM’, and ‘surround view monitoring’ are the prominent search terms used to investigate this topic. The ‘OR’ operator was used to choose and combine the most relevant and regularly used applicable phrases. That is, the search phrases ‘around view monitoring’, ‘AVM’, and ‘surround view monitoring’ were discovered. Figure 11 shows the complete search query for each of the databases. The databases include IEEE Xplore and MDPI.
Figure 11. Search queries for each of the databases for around view monitoring. The databases include IEEE Xplore and MDPI.
Ref. [205] introduces a novel method by integrating semantic segmentation with AVM for lane-level localization. Utilizing visual data and semantic information, a DL model segments lanes and localizes the vehicle, enhancing navigation precision and safety. Refs. [206,207] integrate motion estimation into an AVM for ADAS. The author of [206] employs a Kalman filter to estimate motion, improving AVM image accuracy by up to 20%. The author of [207] focuses on homogeneous surfaces, achieving 90% accuracy with image registration and optical flow. Ref. [208] discusses AVM/lidar sensor fusion for parking-based SLAM. The fusion creates a map for SLAM and parking detection, with an improved loop closure accuracy of 95%.
Ref. [209] proposes AVM-based parking space detection using image processing and machine learning, providing an effective solution. Ref. [210] presents automatic AVM camera calibration using image processing and machine learning, streamlining the process without a physical calibration rig. Ref. [211] enhances AVM image quality via synthetic image learning for deblurring, addressing blurriness and distortion. Ref. [212] introduces AVM calibration using unaligned square boards, simplifying the process and increasing accuracy without a physical rig. Ref. [213] proposes an AVM-based automatic parking system using parking line detection, offering an accurate and efficient solution. Ref. [214] suggests a DL-based approach to detect parking and collision risk areas in autonomous parking scenarios, improving accuracy and collision assessment.
The papers discussed above provide a good overview of the current state-of-the-art approaches using AVM systems for lane-level localization, motion estimation, parking space detection, and collision risk area detection and improving the performance of AVM systems. The methods proposed in these papers have the potential to significantly improve the safety and efficiency of AVM systems, which in turn improves driving and parking efficiencies, and they are likely to become increasingly common in the future.
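Ref. [206] does not disclose its filter design here; purely as an illustration of how a Kalman filter can smooth frame-to-frame ego-motion estimates before AVM images are stitched, the sketch below tracks a constant-velocity 2D state against a simulated measurement stream. The state layout and noise covariances are hypothetical.

import numpy as np


def make_cv_kalman(dt=1 / 30, q=0.05, r=0.5):
    """Constant-velocity model for a [x, y, vx, vy] state (noise levels are hypothetical)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # only the 2D position is measured
    return F, H, q * np.eye(4), r * np.eye(2)


def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle; z is a noisy 2D ego-displacement measurement."""
    x, P = F @ x, F @ P @ F.T + Q                       # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                      # Kalman gain
    x = x + K @ (z - H @ x)                             # update with the innovation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P


if __name__ == "__main__":
    F, H, Q, R = make_cv_kalman()
    x, P = np.zeros(4), np.eye(4)
    rng = np.random.default_rng(0)
    for t in range(1, 50):
        true_xy = np.array([0.02 * t, 0.01 * t])        # simulated ego translation
        z = true_xy + rng.normal(0.0, 0.05, size=2)     # noisy visual-odometry reading
        x, P = kalman_step(x, P, z, F, H, Q, R)
    print("filtered position:", x[:2])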
Taken together, these research papers introduce innovative ap-
proaches ranging from semantic segmentation for lane-level localization to motion estima-
tion techniques for enhancing monitoring accuracy, and collectively focus on crucial aspects
such as automatic calibration, image-quality enhancement, parking-line detection, and
collision-risk assessment. Additionally, by employing advanced techniques like supervised
deblurring and DL, the integration of sensor fusion, such as AVM and lidar, significantly
improves AVM systems’ reliability, accuracy, and safety, offering promising outcomes for
applications like autonomous parking. The synthesis of these diverse techniques showcases
the recent advancements and growing potential of AVM systems in improving vehicle navigation,
parking, and overall safety, ultimately improving the driving experience. The list of reviewed papers on around view monitoring is summarized
in Table 11.
Table 11. Chosen publications, source title, and the number of citations related to around-view
monitoring.
5. Discussion: Datasets
The input data are the most important factor for the ADAS functionalities discussed in
this paper. The preparation of the dataset is essential for the DL approaches, particularly in
the training phase. The quality of the dataset used to train the network model determines how well the autonomous vehicle can manage its behavior and make decisions.
A review of journal articles, conference papers, and book chapters found that many
studies used self-collected data or collected data online. Some researchers compiled their
own dataset for training and then compared it to a publicly available benchmark dataset.
Others only used self-collected data for training and validation. Still, others relied only on
publicly available datasets for training and validation.
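As a concrete, hypothetical illustration of combining these options, the sketch below wraps a self-collected image folder as a PyTorch dataset and concatenates it with a public benchmark split for training; the directory layout, label format, and the SomePublicBenchmarkDataset placeholder are assumptions for illustration only and are not taken from any reviewed paper.

import os
from PIL import Image
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader


class SelfCollectedDetections(Dataset):
    """Self-collected images with one label file per image ('class x y w h' per line)."""

    def __init__(self, root, transform=None):
        self.root, self.transform = root, transform
        self.names = sorted(f for f in os.listdir(os.path.join(root, "images"))
                            if f.lower().endswith((".jpg", ".png")))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.root, "images", name)).convert("RGB")
        label_file = os.path.join(self.root, "labels", os.path.splitext(name)[0] + ".txt")
        with open(label_file) as f:
            boxes = [[float(v) for v in line.split()] for line in f if line.strip()]
        if self.transform is not None:
            image = self.transform(image)
        return image, torch.tensor(boxes)


# Hypothetical usage: mix an existing public-benchmark wrapper with self-collected data.
# public_split = SomePublicBenchmarkDataset(...)          # e.g., a KITTI or BDD100K loader
# own_split = SelfCollectedDetections("/data/own_recordings")
# train_set = ConcatDataset([public_split, own_split])
# loader = DataLoader(train_set, batch_size=8, shuffle=True,
#                     collate_fn=lambda batch: tuple(zip(*batch)))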
The choice of dataset preparation method depends on the specific research and the
availability of resources. Self-collected data can be more representative of the specific
environment in which the autonomous car will be operating, but it can be more time-
consuming and expensive to collect. Publicly available datasets are more convenient to use,
but they may not be as representative of the specific environment. Table 12 lists various
public datasets used for different state-of-the-art methods discussed in Sections 4.1–4.10.
Table 12. Datasets employed by the references chosen in this review paper.
Besides employing publicly available, free-to-use open-source datasets, much of the most recent state-of-the-art work uses self-collected datasets, proposes datasets suited to the proposed methods, and makes these datasets available to other researchers. For
instance, ref. [40] manually constructed a dataset containing 316 vehicle clusters and 224
non-vehicle clusters, ref. [47] used datasets generated from the transformed results that
demonstrate significant improvement, and ref. [62] initially generated a template of a
pedestrian from a training dataset. The template was then used to match pedestrians in
the lidar point cloud. The authors of the paper evaluated their method based on a dataset
of lidar point clouds. Additionally, ref. [63] was evaluated using their dataset, and ref. [64]
was evaluated using a dataset of images captured in hazy weather, ref. [66] was trained
and tested on a dataset of images captured in different weather conditions, ref. [67] was
trained on a dataset of images from rural roads, ref. [68] was trained on infrared images
captured during nighttime, and ref. [69] was trained on a dataset of images collected from
different scenarios, including urban roads, highways, and intersections. If a public dataset
is unavailable and the target is specific to a country, as was the case for [91], in which a
public dataset for Taiwan was not available, the author evaluated their method based on a
locally built dataset [248]. On the other hand, many publications do not mention exactly
which dataset was used, instead highlighting that ‘the proposed method was evaluated on
a publicly available dataset’ [94–96].
In addition to the state-of-the-art methods discussed in the above sections, some of
the other notable publications are:
The paper [249] provides a comprehensive overview of the advancements and tech-
niques in object detection facilitated by DL methodologies. The authors survey the state-of-
the-art approaches up to the time of publication in 2019, and discuss various DL architec-
tures and algorithms used for object detection, including two-stage detectors, one-stage
detectors, anchor-based and anchor-free methods, RetinaNet, and FPNs, along with method-
ologies handling small objects, occlusions, and cluttered backgrounds. Additionally, they
present some promising research directions for future work, such as multi-task learning,
attention mechanisms, weakly supervised learning, and domain adaptation. Their paper also explores the architectural evolution of DL models for object detection,
discussing the transition from traditional methods to the emergence of region-based and
a. Data requirements: Deep learning algorithms require large datasets of labeled data to
train. This can be a challenge to obtain, especially for rare or unusual objects.
b. Computational requirements: Deep learning algorithms can be computationally ex-
pensive, which can limit their use in real-time applications.
c. Interpretability: Deep learning algorithms are often difficult to interpret, which can
make it difficult to understand why they make certain decisions.
Researchers are working on developing newer algorithms and improving the existing
algorithms and techniques to address these challenges. As a result, ADASs are becoming
increasingly capable of detecting and tracking objects in a variety of challenging conditions.
ADASs are still under development, but they have the potential to revolutionize the
way we drive. By making our roads safer and more efficient, ADASs can help to create a
better future for transportation.
ADASs are not without their drawbacks. They can be expensive, and they can some-
times malfunction. Additionally, drivers may become too reliant on ADASs and become
less attentive to their driving.
Overall, ADASs offer numerous potential benefits for safety and convenience. How-
ever, it is important to be aware of the drawbacks and to use these systems responsibly.
Ongoing advancements and research are focused on overcoming the existing drawbacks, and the following can be foreseen as the future trends of ADASs.
a. Multi-sensor fusion: ADASs are increasingly using multiple sensors, such as cameras,
radar, and lidar, to improve the accuracy and reliability of object detection. Multi-
sensor fusion can help to overcome the limitations of individual sensors, such as
occlusion and poor weather conditions (a minimal late-fusion sketch follows this list).
b. Deep learning: DL is rapidly becoming the dominant approach for object detection,
recognition, and tracking in ADAS. Deep learning algorithms are very effective at
learning the features that are important for identifying different objects.
c. Real-time performance: ADASs must be able to detect, recognize, and track objects
in real time. This is essential for safety-critical applications, as delays in detection or
tracking can lead to accidents.
d. Robustness to challenging conditions: ADASs must be able to operate in a variety
of challenging conditions, such as different lighting conditions, weather conditions,
and road conditions. Researchers are working on developing new algorithms and
techniques to improve the robustness of ADASs to challenging conditions.
e. Integration with other ADAS features: ADASs are seeing increased integration with
other ADAS features, such as collision avoidance, lane departure warning, and adap-
tive cruise control. This integration can help to improve the overall safety of vehicles.
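As referenced in item (a) above, the following deliberately simple sketch shows the late-fusion idea: associate camera and radar detections by distance on the ground plane and combine their complementary attributes (class from the camera, range and Doppler speed from the radar). The matching threshold and score boost are hypothetical, and real systems typically rely on calibrated projection and probabilistic association rather than this greedy matching.

import math


def fuse_camera_radar(camera_objects, radar_objects, max_match_dist_m=2.5):
    """Greedy nearest-neighbour late fusion of camera and radar detection lists.

    camera_objects: dicts {"xy": (x, y), "score": float, "label": str}
    radar_objects:  dicts {"xy": (x, y), "speed": float}
    """
    fused, used_radar = [], set()
    for cam in camera_objects:
        best_j, best_d = None, max_match_dist_m
        for j, rad in enumerate(radar_objects):
            if j in used_radar:
                continue
            d = math.dist(cam["xy"], rad["xy"])
            if d < best_d:
                best_j, best_d = j, d
        if best_j is None:
            fused.append({**cam, "speed": None})                    # camera-only detection
        else:
            used_radar.add(best_j)
            fused.append({"xy": radar_objects[best_j]["xy"],        # radar gives range
                          "speed": radar_objects[best_j]["speed"],  # radar gives Doppler speed
                          "label": cam["label"],                    # camera gives the class
                          "score": min(1.0, cam["score"] + 0.2)})   # boost confirmed objects
    return fused


if __name__ == "__main__":
    cams = [{"xy": (12.0, 1.0), "score": 0.7, "label": "car"}]
    rads = [{"xy": (12.4, 0.8), "speed": -3.1}]
    print(fuse_camera_radar(cams, rads))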
These are just some of the future trends in object detection, recognition, and tracking
for ADAS. As research in this area continues, ADASs are becoming increasingly capable of
detecting and tracking objects in a variety of challenging conditions. This will help to make
vehicles safer and more reliable.
Some additional trends that are worth mentioning could be:
a. The use of synthetic data: Synthetic data are being used increasingly often to train
object detection, recognition, and tracking algorithms. Synthetic data are generated
by computer simulations, and they can be used to create training datasets that are
more diverse and challenging than the real-world datasets. This might enhance
the efficiency of the neural networks, as they can be trained with a combination of
real-world datasets supplemented with the synthetic datasets.
b. The use of edge computing: Edge computing is a distributed computing paradigm that
brings computation and storage closer to the edge of the network. Edge computing
can be used to improve the performance and efficiency of ADASs by performing object
detection, recognition, and tracking locally on the vehicle; the greater the on-board computing and storage resources available to the ADAS-equipped vehicle, the better the performance of the ADAS.
c. The use of 5G: 5G is the next generation of cellular network technology. 5G will offer
much higher bandwidth and lower latency than 4G, which will make it possible to
stream high-definition video from cameras to cloud-based servers for object detection,
recognition, and tracking. Thus, a better cellular network will aid in the continuous
training of the NNs and greatly improve the performance with newer data from real
environments.
These are just some of the future trends that are likely to shape the development of
object detection, recognition, and tracking for ADAS in the years to come.
Author Contributions: Conceptualization, V.M.S. and J.-I.G.; methodology, V.M.S. and J.-I.G.; valida-
tion, V.M.S. and J.-I.G.; formal analysis, V.M.S.; investigation, V.M.S.; resources, V.M.S. and J.-I.G.; data
curation, V.M.S.; writing—original draft preparation, V.M.S.; writing—review and editing, V.M.S.;
visualization, V.M.S.; supervision, J.-I.G.; project administration, J.-I.G.; funding acquisition, J.-I.G.
All authors have read and agreed to the published version of the manuscript.
Funding: This work is supported by the National Science and Technology Council (NSTC), Tai-
wan R.O.C. projects with grants 112-2218-E-A49-027-, 112-2218-E-002-042-, 111-2622-8-A49-023-,
111-2221-E-A49-126-MY3, 111-2634-F-A49-013-, and 110-2221-E-A49-145-MY3, and by the Satellite
Communications and AIoT Research Center/The Co-operation Platform of the Industry-Academia
Innovation School, National Yang Ming Chiao Tung University (NYCU), Taiwan R.O.C. projects with
grants 111UC2N006 and 112UC2N006.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: No new data were created in this article; only the state-of-the-art publications listed in the ‘References’ section were used.
Acknowledgments: We extend our sincere thanks to the National Yang Ming Chiao Tung University (NYCU), Taiwan R.O.C., National Science and Technology Council (NSTC), Taiwan R.O.C., and the Satellite Communications and AIoT Research Center/The Co-operation Platform of the Industry-Academia Innovation School, National Yang Ming Chiao Tung University (NYCU), Taiwan R.O.C. for
their valuable support. We extend our heartfelt thanks to all the members and staff of the Intelligent
Vision System Laboratory (iVSL), National Yang Ming Chiao Tung University, Taiwan R.O.C.
Conflicts of Interest: Author Jiun-In Guo was employed by the company eNeural Technologies Inc.
All the authors declare that the research was conducted in the absence of any commercial or financial
relationships that could be construed as a potential conflict of interest.
References
1. Dewesoft. What Is ADAS? Dewesoft Blog. 8 March 2022. Available online: https://dewesoft.com/blog/what-is-adas (accessed
on 12 March 2022).
2. FEV Consulting. Forbes Honors FEV Consulting as One of the World’s Best Management Consulting Firms. FEV Media Center. 20
July 2022. Available online: https://www.fev.com/en/media-center/press/press-releases/news-article/article/forbes-honors-
fev-consulting-as-one-of-the-worlds-best-management-consulting-firms-2022.html (accessed on 17 March 2022).
3. Insurance Institute for Highway Safety. Effectiveness of advanced driver assistance systems in preventing fatal crashes. Traffic Inj.
Prev. 2019, 20, 849–858.
4. Traffic Safety Facts: 2021 Data. Available online: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813001 (accessed
on 1 October 2022).
5. Palat, B.; Delhomme, P.; Saint Pierre, G. Numerosity heuristic in route choice based on the presence of traffic lights. Transp. Res.
Part F Traffic Psychol. Behav. 2014, 22, 104–112. [CrossRef]
6. Papadimitriou, E.; Lassarre, S.; Yannis, G. Introducing human factors in pedestrian crossing behaviour models. Transp. Res. Part F
Traffic Psychol. Behav. 2016, 36, 69–82. [CrossRef]
7. King, E.; Bourdeau, E.; Zheng, X.; Pilla, F. A combined assessment of air and noise pollution on the High Line, New York City.
Transp. Res. Part D Transp. Environ. 2016, 42, 91–103. [CrossRef]
8. Woodburn, A. An analysis of rail freight operational efficiency and mode share in the British port-hinterland container market.
Transp. Res. Part D Transp. Environ. 2017, 51, 190–202. [CrossRef]
9. Haybatollahi, M.; Czepkiewicz, M.; Laatikainen, T.; Kyttä, M. Neighbourhood preferences, active travel behaviour, and built
environment: An exploratory study. Transp. Res. Part F Traffic Psychol. Behav. 2015, 29, 57–69. [CrossRef]
10. Honda Worldwide. Honda Motor Co. Advanced Brake Introduced for Motorcycles by Honda ahead of Others. Available online:
https://web.archive.org/web/20160310200739/http://world.honda.com/motorcycle-technology/brake/p2.html (accessed on
30 November 2022).
11. American Honda. Combined Braking System (CBS). 9 December 2013. Available online: https://web.archive.org/web/20180710
010624/http://powersports.honda.com/experience/articles/090111c08139be28.aspx (accessed on 16 September 2022).
12. Blancher, A.; Zuby, D. Interview: Into the Future with ADAS and Vehicle Autonomy. Visualize, Verisk. 8 March 2023. Available
online: https://www.verisk.com/insurance/visualize/interview-into-the-future-with-adas-and-vehicle-autonomy/ (accessed
on 16 September 2022).
13. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review.
Sensors 2021, 21, 2140. [CrossRef] [PubMed]
14. Continental, A.G. ADAS Challenges and Solutions. 2022. Available online: https://conf.laas.fr/WORCS13/Slides/WORCS-13_2
013-SergeBoverie.pdf (accessed on 8 March 2023).
15. Blanco, S. Advanced Driver-Assistance Systems. What the Heck Are They Anyway? Forbes. 26 May 2022. Available online:
https://www.forbes.com/wheels/advice/advanced-driver-assistance-systems-what-are-they/ (accessed on 20 May 2023).
16. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [CrossRef]
17. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [CrossRef]
18. Swain, M.J.; Ballard, D.H. Color indexing. Int. J. Comput. Vis. 1991, 7, 11–32. [CrossRef]
19. Sobel, I.; Feldman, G. A 3 × 3 Isotropic Gradient Operator for Edge Detection; Presented at the Stanford Artificial Project; Stanford
University: Stanford, CA, USA, 1968.
20. Belongie, S.; Malik, J.; Puzicha, J. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach.
Intell. 2002, 24, 509–522. [CrossRef]
21. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [CrossRef]
22. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006.
23. Wu, J.K.; Wong, Y.F. Bayesian Approach for Data Fusion in Sensor Networks. In Proceedings of the 2006 9th International
Conference on Information Fusion, Florence, Italy, 10–13 July 2006; pp. 1–5. [CrossRef]
24. Sun, Y.-Q.; Tian, J.-W.; Liu, J. Target Recognition using Bayesian Data Fusion Method. In Proceedings of the 2006 International
Conference on Machine Learning and Cybernetics, Dalian, China, 13–16 August 2006; pp. 3288–3292. [CrossRef]
25. Le Hegarat-Mascle, S.L.; Bloch, I.; Vidal-Madjar, D. Application of Dempster-Shafer evidence theory to unsupervised classification
in multisource remote sensing. IEEE Trans. Geosci. Remote Sens. 1997, 35, 1018–1031. [CrossRef]
26. Chen, C.; Jafari, R.; Kehtarnavaz, N. Improving Human Action Recognition Using Fusion of Depth Camera and Inertial Sensors.
IEEE Trans. Hum. Mach. Syst. 2015, 45, 51–61. [CrossRef]
27. Ding, B.; Wen, G.; Huang, X.; Ma, C.; Yang, X. Target Recognition in Synthetic Aperture Radar Images via Matching of Attributed
Scattering Centers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3334–3347. [CrossRef]
28. Gu, J.; Lind, A.; Chhetri, T.R.; Bellone, M.; Sell, R. End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous
Vehicles. Sensors 2023, 23, 6783. [CrossRef] [PubMed]
29. RGBSI. What Is Sensor Fusion for Autonomous Driving Systems?—Part 1. RGBSI Blog. 15 February 2023. Available online:
https://blog.rgbsi.com/sensor-fusion-autonomous-driving-systems-part-1 (accessed on 30 April 2023).
30. Sasken. Sensor Fusion Paving the Way for Autonomous Vehicles. Sasken Blog. 22 February 2023. Available online: https:
//blog.sasken.com/sensor-fusion-paving-the-way-for-autonomous-vehicles (accessed on 18 May 2023).
31. Haider, A.; Pigniczki, M.; Köhler, M.H.; Fink, M.; Schardt, M.; Cichy, Y.; Zeh, T.; Haas, L.; Poguntke, T.; Jakobi, M.; et al.
Development of High-Fidelity Automotive LiDAR Sensor Model with Standardized Interfaces. Sensors 2022, 22, 7556. [CrossRef]
32. Waymo. The Waymo Driver Handbook: Teaching an Autonomous Vehicle How to Perceive and Understand the World around It.
Waymo Blog. 11 October 2021. Available online: https://waymo.com/blog/2021/10/the-waymo-driver-handbook-perception.
html (accessed on 18 May 2023).
33. Hu, X.; Xu, X.; Xiao, Y.; Chen, H.; He, S.; Qin, J.; Heng, P.-A. SINet: A Scale-Insensitive Convolutional Neural Network for Fast
Vehicle Detection. IEEE Trans. Intell. Transp. Syst. 2019, 20, 1010–1019. [CrossRef]
34. Hu, X.; Xu, X.; Xiao, Y.; Chen, H.; He, S.; Qin, J.; Heng, P.-A. Joint Monocular 3D Vehicle Detection and Tracking. In Proceedings
of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November
2019; pp. 5389–5398. [CrossRef]
35. Chadwick, S.; Maddern, W.; Newman, P. Distant Vehicle Detection Using Radar and Vision. In Proceedings of the 2019
International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 8311–8317. [CrossRef]
36. López-Sastre, R.J.; Herranz-Perdiguero, C.; Guerrero-Gómez-Olmedo, R.; Oñoro-Rubio, D.; Maldonado-Bascón, S. Boosting
Multi-Vehicle Tracking with a Joint Object Detection and Viewpoint Estimation Sensor. Sensors 2019, 19, 4062. [CrossRef]
37. Zhang, F.; Li, C.; Yang, F. Vehicle Detection in Urban Traffic Surveillance Images Based on Convolutional Neural Networks with
Feature Concatenation. Sensors 2019, 19, 594. [CrossRef]
38. Gomaa, A.; Abdelwahab, M.M.; Abo-Zahhad, M.; Minematsu, T.; Taniguchi, R.-I. Robust Vehicle Detection and Counting
Algorithm Employing a Convolution Neural Network and Optical Flow. Sensors 2019, 19, 4588. [CrossRef] [PubMed]
39. Liu, H.; Ma, J.; Xu, T.; Yan, W.; Ma, L.; Zhang, X. Vehicle Detection and Classification Using Distributed Fiber Optic Acoustic
Sensing. IEEE Trans. Veh. Technol. 2020, 69, 1363–1374. [CrossRef]
40. Zhang, J.; Xiao, W.; Coifman, B.; Mills, J.P. Vehicle Tracking and Speed Estimation From Roadside Lidar. IEEE J. Sel. Top. Appl.
Earth Obs. Remote Sens. 2020, 13, 5597–5608. [CrossRef]
41. Wang, X.; Wang, S.; Cao, J.; Wang, Y. Data-Driven Based Tiny-YOLOv3 Method for Front Vehicle Detection Inducing SPP-Net.
IEEE Access 2020, 8, 110227–110236. [CrossRef]
42. Kim, T.; Park, T.-H. Extended Kalman Filter (EKF) Design for Vehicle Position Tracking Using Reliability Function of Radar and
Lidar. Sensors 2020, 20, 4126. [CrossRef] [PubMed]
43. Cao, J.; Song, C.; Song, S.; Peng, S.; Wang, D.; Shao, Y.; Xiao, F. Front Vehicle Detection Algorithm for Smart Car Based on
Improved SSD Model. Sensors 2020, 20, 4646. [CrossRef] [PubMed]
44. Mo, N.; Yan, L. Improved Faster RCNN Based on Feature Amplification and Oversampling Data Augmentation for Oriented
Vehicle Detection in Aerial Images. Remote Sens. 2020, 12, 2558. [CrossRef]
45. Zhang, R.; Ishikawa, A.; Wang, W.; Striner, B.; Tonguz, O.K. Using Reinforcement Learning with Partial Vehicle Detection for
Intelligent Traffic Signal Control. IEEE Trans. Intell. Transp. Syst. 2021, 22, 404–415. [CrossRef]
46. Hassaballah, M.; Kenk, M.A.; Muhammad, K.; Minaee, S. Vehicle Detection and Tracking in Adverse Weather Using a Deep
Learning Framework. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4230–4242. [CrossRef]
47. Lin, C.-T.; Huang, S.-W.; Wu, Y.-Y.; Lai, S.-H. GAN-Based Day-to-Night Image Style Transfer for Nighttime Vehicle Detection.
IEEE Trans. Intell. Transp. Syst. 2021, 22, 951–963. [CrossRef]
48. Balamuralidhar, N.; Tilon, S.; Nex, F. MultEYE: Monitoring System for Real-Time Vehicle Detection, Tracking and Speed Estimation
from UAV Imagery on Edge-Computing Platforms. Remote Sens. 2021, 13, 573. [CrossRef]
49. Chen, Y.; Qin, R.; Zhang, G.; Albanwan, H. Spatial-Temporal Analysis of Traffic Patterns during the COVID-19 Epidemic by
Vehicle Detection Using Planet Remote-Sensing Satellite Images. Remote Sens. 2021, 13, 208. [CrossRef]
50. Li, H.; Zhao, S.; Zhao, W.; Zhang, L.; Shen, J. One-Stage Anchor-Free 3D Vehicle Detection from LiDAR Sensors. Sensors 2021, 21,
2651. [CrossRef] [PubMed]
51. Sun, Y.; Cao, B.; Zhu, P.; Hu, Q. Drone-Based RGB-Infrared Cross-Modality Vehicle Detection Via Uncertainty-Aware Learning.
IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6700–6713. [CrossRef]
52. Zhao, J.; Hao, S.; Dai, C.; Zhang, H.; Zhao, L.; Ji, Z.; Ganchev, I. Improved Vision-Based Vehicle Detection and Classification by
Optimized YOLOv4. IEEE Access 2022, 10, 8590–8603. [CrossRef]
53. Bell, A.; Mantecon, T.; Diaz, C.; Del-Blanco, C.R.; Jaureguizar, F.; Garcia, N. A Novel System for Nighttime Vehicle Detection
Based on Foveal Classifiers with Real-Time Performance. IEEE Trans. Intell. Transp. Syst. 2022, 23, 5421–5433. [CrossRef]
54. Humayun, M.; Ashfaq, F.; Jhanjhi, N.Z.; Alsadun, M.K. Traffic Management: Multi-Scale Vehicle Detection in Varying Weather
Conditions Using YOLOv4 and Spatial Pyramid Pooling Network. Electronics 2022, 11, 2748. [CrossRef]
55. Charouh, Z.; Ezzouhri, A.; Ghogho, M.; Guennoun, Z. A Resource-Efficient CNN-Based Method for Moving Vehicle Detection.
Sensors 2022, 22, 1193. [CrossRef]
56. Fan, Y.; Qiu, Q.; Hou, S.; Li, Y.; Xie, J.; Qin, M.; Chu, F. Application of Improved YOLOv5 in Aerial Photographing Infrared
Vehicle Detection. Electronics 2022, 11, 2344. [CrossRef]
57. National Highway Traffic Safety Administration. Traffic Safety Facts 2021 Data: Pedestrians. [Fact Sheet]; 27 June 2023. Available
online: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813450 (accessed on 2 May 2023).
58. Liu, W.; Liao, S.; Ren, W.; Hu, W.; Yu, Y. High-Level Semantic Feature Detection: A New Perspective for Pedestrian Detection. In
Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA,
16–20 June 2019; pp. 5182–5191. [CrossRef]
59. Liu, S.; Huang, D.; Wang, Y. Adaptive NMS: Refining Pedestrian Detection in a Crowd. In Proceedings of the 2019 IEEE/CVF
Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 6452–6461.
[CrossRef]
60. Pang, Y.; Xie, J.; Khan, M.H.; Anwer, R.M.; Khan, F.S.; Shao, L. Mask-Guided Attention Network for Occluded Pedestrian
Detection. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea,
27 October–2 November 2019; pp. 4966–4974. [CrossRef]
61. Dimitrievski, M.; Veelaert, P.; Philips, W. Behavioral Pedestrian Tracking Using a Camera and LiDAR Sensors on a Moving
Vehicle. Sensors 2019, 19, 391. [CrossRef]
62. Liu, K.; Wang, W.; Wang, J. Pedestrian Detection with Lidar Point Clouds Based on Single Template Matching. Electronics 2019,
8, 780. [CrossRef]
63. He, M.; Luo, H.; Hui, B.; Chang, Z. Pedestrian Flow Tracking and Statistics of Monocular Camera Based on Convolutional Neural
Network and Kalman Filter. Appl. Sci. 2019, 9, 1624. [CrossRef]
64. Li, G.; Yang, Y.; Qu, X. Deep Learning Approaches on Pedestrian Detection in Hazy Weather. IEEE Trans. Ind. Electron. 2020, 67,
8889–8899. [CrossRef]
65. Huang, X.; Ge, Z.; Jie, Z.; Yoshie, O. NMS by Representative Region: Towards Crowded Pedestrian Detection by Proposal Pairing.
In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19
June 2020; pp. 10747–10756. [CrossRef]
66. Lin, C.; Lu, J.; Wang, G.; Zhou, J. Graininess-Aware Deep Feature Learning for Robust Pedestrian Detection. IEEE Trans. Image
Process. 2020, 29, 3820–3834. [CrossRef]
67. Barba-Guaman, L.; Eugenio Naranjo, J.; Ortiz, A. Deep Learning Framework for Vehicle and Pedestrian Detection in Rural Roads
on an Embedded GPU. Electronics 2020, 9, 589. [CrossRef]
68. Chen, Y.; Shin, H. Pedestrian Detection at Night in Infrared Images Using an Attention-Guided Encoder-Decoder Convolutional
Neural Network. Appl. Sci. 2020, 10, 809. [CrossRef]
69. Cao, J.; Song, C.; Peng, S.; Song, S.; Zhang, X.; Shao, Y.; Xiao, F. Pedestrian Detection Algorithm for Intelligent Vehicles in Complex
Scenarios. Sensors 2020, 20, 3646. [CrossRef]
70. Hsu, W.-Y.; Lin, W.-Y. Ratio-and-Scale-Aware YOLO for Pedestrian Detection. IEEE Trans. Image Process. 2021, 30, 934–947.
[CrossRef]
71. Stadler, D.; Beyerer, J. Improving Multiple Pedestrian Tracking by Track Management and Occlusion Handling. In Proceedings of
the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp.
10953–10962. [CrossRef]
72. Yang, P.; Zhang, G.; Wang, L.; Xu, L.; Deng, Q.; Yang, M.-H. A Part-Aware Multi-Scale Fully Convolutional Network for Pedestrian
Detection. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1125–1137. [CrossRef]
73. Cao, Z.; Yang, H.; Zhao, J.; Guo, S.; Li, L. Attention Fusion for One-Stage Multispectral Pedestrian Detection. Sensors 2021, 21, 4184.
[CrossRef]
74. Nataprawira, J.; Gu, Y.; Goncharenko, I.; Kamijo, S. Pedestrian Detection Using Multispectral Images and a Deep Neural Network.
Sensors 2021, 21, 2536. [CrossRef] [PubMed]
75. Chen, X.; Liu, L.; Tan, X. Robust Pedestrian Detection Based on Multi-Spectral Image Fusion and Convolutional Neural Networks.
Electronics 2022, 11, 1. [CrossRef]
76. Kim, J.U.; Park, S.; Ro, Y.M. Uncertainty-Guided Cross-Modal Learning for Robust Multispectral Pedestrian Detection. IEEE
Trans. Circuits Syst. Video Technol. 2022, 32, 1510–1523. [CrossRef]
77. Dasgupta, K.; Das, A.; Das, S.; Bhattacharya, U.; Yogamani, S. Spatio-Contextual Deep Network-Based Multimodal Pedestrian
Detection for Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15940–15950. [CrossRef]
78. Held, P.; Steinhauser, D.; Koch, A.; Brandmeier, T.; Schwarz, U.T. A Novel Approach for Model-Based Pedestrian Tracking Using
Automotive Radar. IEEE Trans. Intell. Transp. Syst. 2022, 23, 7082–7095. [CrossRef]
79. Roszyk, K.; Nowicki, M.R.; Skrzypczyński, P. Adopting the YOLOv4 Architecture for Low-Latency Multispectral Pedestrian
Detection in Autonomous Driving. Sensors 2022, 22, 1082. [CrossRef]
80. Shao, Y.; Zhang, X.; Chu, H.; Zhang, X.; Zhang, D.; Rao, Y. AIR-YOLOv3: Aerial Infrared Pedestrian Detection via an Improved
YOLOv3 with Network Pruning. Appl. Sci. 2022, 12, 3627. [CrossRef]
81. Lv, H.; Yan, H.; Liu, K.; Zhou, Z.; Jing, J. YOLOv5-AC: Attention Mechanism-Based Lightweight YOLOv5 for Track Pedestrian
Detection. Sensors 2022, 22, 5903. [CrossRef]
82. Yuan, Y.; Xiong, Z.; Wang, Q. VSSA-NET: Vertical Spatial Sequence Attention Network for Traffic Sign Detection. IEEE Trans.
Image Process. 2019, 28, 3423–3434. [CrossRef]
83. Li, J.; Wang, Z. Real-Time Traffic Sign Recognition Based on Efficient CNNs in the Wild. IEEE Trans. Intell. Transp. Syst. 2019, 20,
975–984. [CrossRef]
84. Liu, Z.; Du, J.; Tian, F.; Wen, J. MR-CNN: A Multi-Scale Region-Based Convolutional Neural Network for Small Traffic Sign
Recognition. IEEE Access 2019, 7, 57120–57128. [CrossRef]
85. Tian, Y.; Gelernter, J.; Wang, X.; Li, J.; Yu, Y. Traffic Sign Detection Using a Multi-Scale Recurrent Attention Network. IEEE Trans.
Intell. Transp. Syst. 2019, 20, 4466–4475. [CrossRef]
86. Cao, J.; Song, C.; Peng, S.; Xiao, F.; Song, S. Improved Traffic Sign Detection and Recognition Algorithm for Intelligent Vehicles.
Sensors 2019, 19, 4021. [CrossRef] [PubMed]
87. Shao, F.; Wang, X.; Meng, F.; Zhu, J.; Wang, D.; Dai, J. Improved Faster R-CNN Traffic Sign Detection Based on a Second Region of
Interest and Highly Possible Regions Proposal Network. Sensors 2019, 19, 2288. [CrossRef] [PubMed]
88. Zhang, J.; Xie, Z.; Sun, J.; Zou, X.; Wang, J. A Cascaded R-CNN with Multiscale Attention and Imbalanced Samples for Traffic
Sign Detection. IEEE Access 2020, 8, 29742–29754. [CrossRef]
89. Tabernik, D.; Skočaj, D. Deep Learning for Large-Scale Traffic-Sign Detection and Recognition. IEEE Trans. Intell. Transp. Syst.
2020, 21, 1427–1440. [CrossRef]
90. Kamal, U.; Tonmoy, T.I.; Das, S.; Hasan, M.K. Automatic Traffic Sign Detection and Recognition Using SegU-Net and a Modified
Tversky Loss Function with L1-Constraint. IEEE Trans. Intell. Transp. Syst. 2020, 21, 1467–1479. [CrossRef]
91. Tai, S.-K.; Dewi, C.; Chen, R.-C.; Liu, Y.-T.; Jiang, X.; Yu, H. Deep Learning for Traffic Sign Recognition Based on Spatial Pyramid
Pooling with Scale Analysis. Appl. Sci. 2020, 10, 6997. [CrossRef]
92. Dewi, C.; Chen, R.-C.; Tai, S.-K. Evaluation of Robust Spatial Pyramid Pooling Based on Convolutional Neural Network for
Traffic Sign Recognition System. Electronics 2020, 9, 889. [CrossRef]
93. Nartey, O.T.; Yang, G.; Asare, S.K.; Wu, J.; Frempong, L.N. Robust Semi-Supervised Traffic Sign Recognition via Self-Training and
Weakly-Supervised Learning. Sensors 2020, 20, 2684. [CrossRef]
94. Dewi, C.; Chen, R.-C.; Liu, Y.-T.; Jiang, X.; Hartomo, K.D. Yolo V4 for Advanced Traffic Sign Recognition with Synthetic Training
Data Generated by Various GAN. IEEE Access 2021, 9, 97228–97242. [CrossRef]
95. Wang, L.; Zhou, K.; Chu, A.; Wang, G.; Wang, L. An Improved Light-Weight Traffic Sign Recognition Algorithm Based on
YOLOv4-Tiny. IEEE Access 2021, 9, 124963–124971. [CrossRef]
96. Cao, J.; Zhang, J.; Jin, X. A Traffic-Sign Detection Algorithm Based on Improved Sparse R-CNN. IEEE Access 2021, 9, 122774–122788.
[CrossRef]
97. Lopez-Montiel, M.; Orozco-Rosas, U.; Sánchez-Adame, M.; Picos, K.; Ross, O.H.M. Evaluation Method of Deep Learning-Based
Embedded Systems for Traffic Sign Detection. IEEE Access 2021, 9, 101217–101238. [CrossRef]
98. Zhou, K.; Zhan, Y.; Fu, D. Learning Region-Based Attention Network for Traffic Sign Recognition. Sensors 2021, 21, 686. [CrossRef]
99. Koh, D.-W.; Kwon, J.-K.; Lee, S.-G. Traffic Sign Recognition Evaluation for Senior Adults Using EEG Signals. Sensors 2021, 21, 4607.
[CrossRef] [PubMed]
100. Ahmed, S.; Kamal, U.; Hasan, M.K. DFR-TSD: A Deep Learning Based Framework for Robust Traffic Sign Detection Under
Challenging Weather Conditions. IEEE Trans. Intell. Transp. Syst. 2022, 23, 5150–5162. [CrossRef]
101. Xie, K.; Zhang, Z.; Li, B.; Kang, J.; Niyato, D.; Xie, S.; Wu, Y. Efficient Federated Learning with Spike Neural Networks for Traffic
Sign Recognition. IEEE Trans. Veh. Technol. 2022, 71, 9980–9999. [CrossRef]
102. Min, W.; Liu, R.; He, D.; Han, Q.; Wei, Q.; Wang, Q. Traffic Sign Recognition Based on Semantic Scene Understanding and
Structural Traffic Sign Location. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15794–15807. [CrossRef]
103. Gu, Y.; Si, B. A Novel Lightweight Real-Time Traffic Sign Detection Integration Framework Based on YOLOv4. Entropy 2022,
24, 487. [CrossRef]
104. Liu, Y.; Shi, G.; Li, Y.; Zhao, Z. M-YOLO: Traffic Sign Detection Algorithm Applicable to Complex Scenarios. Symmetry 2022,
14, 952. [CrossRef]
105. Wang, X.; Guo, J.; Yi, J.; Song, Y.; Xu, J.; Yan, W.; Fu, X. Real-Time and Efficient Multi-Scale Traffic Sign Detection Method for
Driverless Cars. Sensors 2022, 22, 6930. [CrossRef] [PubMed]
106. Zhao, Y.; Mammeri, A.; Boukerche, A. A Novel Real-time Driver Monitoring System Based on Deep Convolutional Neural
Network. In Proceedings of the 2019 IEEE International Symposium on Robotic and Sensors Environments (ROSE), Ottawa, ON,
Canada, 17–18 June 2019; pp. 1–7. [CrossRef]
107. Hijaz, A.; Louie, W.-Y.G.; Mansour, I. Towards a Driver Monitoring System for Estimating Driver Situational Awareness. In
Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New
Delhi, India, 14–18 October 2019; pp. 1–6. [CrossRef]
108. Kim, W.; Jung, W.-S.; Choi, H.K. Lightweight Driver Monitoring System Based on Multi-Task Mobilenets. Sensors 2019, 19, 3200.
[CrossRef] [PubMed]
109. Yoo, M.W.; Han, D.S. Optimization Algorithm for Driver Monitoring System using Deep Learning Approach. In Proceedings of
the 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Fukuoka, Japan, 19–21
February 2020; pp. 043–046. [CrossRef]
110. Pondit, A.; Dey, A.; Das, A. Real-time Driver Monitoring System Based on Visual Cues. In Proceedings of the 2020 6th International
Conference on Interactive Digital Media (ICIDM), Bandung, Indonesia, 14–15 December 2020; pp. 1–6. [CrossRef]
111. Supraja, P.; Revati, P.; Ram, K.S.; Jyotsna, C. An Intelligent Driver Monitoring System. In Proceedings of the 2021 2nd International
Conference on Communication, Computing and Industry 4.0 (C2I4), Bangalore, India, 16–17 December 2021; pp. 1–5. [CrossRef]
112. Zhu, L.; Xiao, Y.; Li, X. Hybrid driver monitoring system based on Internet of Things and machine learning. In Proceedings of the
2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 15–17
January 2021; pp. 635–638. [CrossRef]
113. Darapaneni, N.; Parikh, B.; Paduri, A.R.; Kumar, S.; Beedkar, T.; Narayanan, A.; Tripathi, N.; Khoche, T. Distracted Driver
Monitoring System Using AI. In Proceedings of the 2022 Interdisciplinary Research in Technology and Management (IRTM),
Kolkata, India, 24–26 February 2022; pp. 1–8. [CrossRef]
114. Jeon, S.; Lee, S.; Lee, E.; Shin, J. Driver Monitoring System based on Distracted Driving Decision Algorithm. In Proceedings of the
2022 13th International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of
Korea, 19–21 October 2022; pp. 2280–2283. [CrossRef]
115. National Highway Traffic Safety Administration. NHTSA Orders Crash Reporting for Vehicles Equipped with Advanced Driver
Assistance Systems. 31 May 2023. Available online: https://www.nhtsa.gov/press-releases/nhtsa-orders-crash-reporting-
vehicles-equipped-advanced-driver-assistance-systems (accessed on 24 June 2023).
116. Hou, Y.; Ma, Z.; Liu, C.; Loy, C.C. Learning Lightweight Lane Detection CNNs by Self Attention Distillation. In Proceedings of
the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November
2019; pp. 1013–1021. [CrossRef]
117. Philion, J. FastDraw: Addressing the Long Tail of Lane Detection by Adapting a Sequential Prediction Network. In Proceedings
of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019;
pp. 11574–11583. [CrossRef]
118. Garnett, N.; Cohen, R.; Pe, T.; Lahav, R.; Levi, D. 3D-LaneNet: End-to-End 3D Multiple Lane Detection. In Proceedings of the
2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019;
pp. 2921–2930. [CrossRef]
119. Liu, T.; Chen, Z.; Yang, Y.; Wu, Z.; Li, H. Lane Detection in Low-light Conditions Using an Efficient Data Enhancement: Light
Conditions Style Transfer. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19
October–13 November 2020; pp. 1394–1399. [CrossRef]
120. Lu, Z.; Xu, Y.; Shan, X.; Liu, L.; Wang, X.; Shen, J. A Lane Detection Method Based on a Ridge Detector and Regional G-RANSAC.
Sensors 2019, 19, 4028. [CrossRef] [PubMed]
121. Yang, W.; Zhang, X.; Lei, Q.; Shen, D.; Xiao, P.; Huang, Y. Lane Position Detection Based on Long Short-Term Memory (LSTM).
Sensors 2020, 20, 3115. [CrossRef] [PubMed]
122. Wang, Q.; Han, T.; Qin, Z.; Gao, J.; Li, X. Multitask Attention Network for Lane Detection and Fitting. IEEE Trans. Neural Netw.
Learn. Syst. 2022, 33, 1066–1078. [CrossRef] [PubMed]
123. Cao, J.; Song, C.; Song, S.; Xiao, F.; Peng, S. Lane Detection Algorithm for Intelligent Vehicles in Complex Road Conditions and
Dynamic Environments. Sensors 2019, 19, 3166. [CrossRef]
124. Wang, X.; Qian, Y.; Wang, C.; Yang, M. Map-Enhanced Ego-Lane Detection in the Missing Feature Scenarios. IEEE Access 2020, 8,
107958–107968. [CrossRef]
125. Chen, Y.; Xiang, Z. Lane Mark Detection with Pre-Aligned Spatial-Temporal Attention. Sensors 2022, 22, 794. [CrossRef]
126. Lee, Y.; Park, M.-k.; Park, M. Improving Lane Detection Performance for Autonomous Vehicle Integrating Camera with Dual
Light Sensors. Electronics 2022, 11, 1474. [CrossRef]
127. Kim, D.-H. Lane Detection Method with Impulse Radio Ultra-Wideband Radar and Metal Lane Reflectors. Sensors 2020, 20, 324.
[CrossRef] [PubMed]
128. Suder, J.; Podbucki, K.; Marciniak, T.; Dąbrowski, A. Low Complexity Lane Detection Methods for Light Photometry System.
Electronics 2021, 10, 1665. [CrossRef]
129. Kuo, C.Y.; Lu, Y.R.; Yang, S.M. On the Image Sensor Processing for Lane Detection and Control in Vehicle Lane Keeping Systems.
Sensors 2019, 19, 1665. [CrossRef] [PubMed]
130. Zou, Q.; Jiang, H.; Dai, Q.; Yue, Y.; Chen, L.; Wang, Q. Robust Lane Detection From Continuous Driving Scenes Using Deep
Neural Networks. IEEE Trans. Veh. Technol. 2020, 69, 41–54. [CrossRef]
131. Gao, Q.; Yin, H.; Zhang, W. Lane Departure Warning Mechanism of Limited False Alarm Rate Using Extreme Learning Residual
Network and ϵ-Greedy LSTM. Sensors 2020, 20, 644. [CrossRef]
132. Tabelini, L.; Berriel, R.; Paixão, T.M.; Badue, C.; De Souza, A.F.; Oliveira-Santos, T. Keep your Eyes on the Lane: Real-time
Attention-guided Lane Detection. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 294–302. [CrossRef]
133. Liu, L.; Chen, X.; Zhu, S.; Tan, P. CondLaneNet: A Top-to-down Lane Detection Framework Based on Conditional Convolution. In
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October
2021; pp. 3753–3762. [CrossRef]
134. Dewangan, D.K.; Sahu, S.P. Driving Behavior Analysis of Intelligent Vehicle System for Lane Detection Using Vision-Sensor. IEEE
Sens. J. 2021, 21, 6367–6375. [CrossRef]
135. Haris, M.; Glowacz, A. Lane Line Detection Based on Object Feature Distillation. Electronics 2021, 10, 1102. [CrossRef]
136. Lu, S.; Luo, Z.; Gao, F.; Liu, M.; Chang, K.; Piao, C. A Fast and Robust Lane Detection Method Based on Semantic Segmentation
and Optical Flow Estimation. Sensors 2021, 21, 400. [CrossRef]
137. Ko, Y.; Lee, Y.; Azam, S.; Munir, F.; Jeon, M.; Pedrycz, W. Key Points Estimation and Point Instance Segmentation Approach for
Lane Detection. IEEE Trans. Intell. Transp. Syst. 2022, 23, 8949–8958. [CrossRef]
138. Zheng, T.; Huang, Y.; Liu, Y.; Tang, W.; Yang, Z.; Cai, D.; He, X. CLRNet: Cross-Layer Refinement Network for Lane Detection. In
Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA,
18–24 June 2022; pp. 888–897. [CrossRef]
139. Khan, M.A.-M.; Haque, M.F.; Hasan, K.R.; Alajmani, S.H.; Baz, M.; Masud, M.; Nahid, A.-A. LLDNet: A Lightweight Lane
Detection Approach for Autonomous Cars Using Deep Learning. Sensors 2022, 22, 5595. [CrossRef]
140. National Highway Traffic Safety Administration. Traffic Safety Facts 2020 Data: Crashes. 20 September 2021. Available online:
https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812801 (accessed on 19 July 2023).
141. Lee, K.; Kum, D. Collision Avoidance/Mitigation System: Motion Planning of Autonomous Vehicle via Predictive Occupancy
Map. IEEE Access 2019, 7, 52846–52857. [CrossRef]
142. Manghat, S.K.; El-Sharkawy, M. Forward Collision Prediction with Online Visual Tracking. In Proceedings of the 2019 IEEE
International Conference on Vehicular Electronics and Safety (ICVES), Cairo, Egypt, 4–6 September 2019; pp. 1–5. [CrossRef]
143. Yang, W.; Wan, B.; Qu, X. A Forward Collision Warning System Using Driving Intention Recognition of the Front Vehicle and
V2V Communication. IEEE Access 2020, 8, 11268–11278. [CrossRef]
144. Kumar, S.; Shaw, V.; Maitra, J.; Karmakar, R. FCW: A Forward Collision Warning System Using Convolutional Neural Network.
In Proceedings of the 2020 International Conference on Electrical and Electronics Engineering (ICE3), Gorakhpur, India, 14–15
February 2020; pp. 1–5. [CrossRef]
145. Wang, H.-M.; Lin, H.-Y. A Real-Time Forward Collision Warning Technique Incorporating Detection and Depth Estimation
Networks. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON,
Canada, 11–14 October 2020; pp. 1966–1971. [CrossRef]
146. Lin, H.-Y.; Dai, J.-M.; Wu, L.-T.; Chen, L.-Q. A Vision-Based Driver Assistance System with Forward Collision and Overtaking
Detection. Sensors 2020, 20, 5139. [CrossRef] [PubMed]
147. Tang, J.; Li, J. End-to-End Monocular Range Estimation for Forward Collision Warning. Sensors 2020, 20, 5941. [CrossRef]
[PubMed]
148. Lim, Q.; Lim, Y.; Muhammad, H.; Tan, D.W.M.; Tan, U.-X. Forward collision warning system for motorcyclist using smartphone
sensors based on time-to-collision and trajectory prediction. J. Intell. Connect. Veh. 2021, 4, 93–103. [CrossRef]
149. Farhat, W.; Rhaiem, O.B.; Faiedh, H.; Souani, C. Cooperative Forward Collision Avoidance System Based on Deep Learning. In
Proceedings of the 2021 14th International Conference on Developments in eSystems Engineering (DeSE), Sharjah, United Arab
Emirates, 7–10 December 2021; pp. 515–519. [CrossRef]
150. Hong, S.; Park, D. Lightweight Collaboration of Detecting and Tracking Algorithm in Low-Power Embedded Systems for Forward
Collision Warning. In Proceedings of the 2021 Twelfth International Conference on Ubiquitous and Future Networks (ICUFN),
Jeju Island, Republic of Korea, 17–20 August 2021; pp. 159–162. [CrossRef]
151. Albarella, N.; Masuccio, F.; Novella, L.; Tufo, M.; Fiengo, G. A Forward-Collision Warning System for Electric Vehicles: Experi-
mental Validation in Virtual and Real Environment. Energies 2021, 14, 4872. [CrossRef]
152. Liu, Y.; Wang, X.; Zhang, Y.; Wang, Y. An effective target selection method for forward collision on a curve based on V2X. In
Proceedings of the 2022 7th International Conference on Intelligent Informatics and Biomedical Science (ICIIBMS), Nara, Japan,
24–26 November 2022; pp. 110–114. [CrossRef]
153. Yu, R.; Ai, H. Vehicle Forward Collision Warning based upon Low-Frequency Video Data: A hybrid Deep Learning Modeling
Approach. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau,
China, 8–12 October 2022; pp. 59–64. [CrossRef]
154. Olou, H.B.; Ezin, E.C.; Dembele, J.M.; Cambier, C. FCPNet: A novel model to predict forward collision based upon CNN. In
Proceedings of the 2022 22nd International Conference on Control, Automation, and Systems (ICCAS), Jeju, Republic of Korea, 27
November–1 December 2022; pp. 1327–1332. [CrossRef]
155. Pak, J.M. Hybrid Interacting Multiple Model Filtering for Improving the Reliability of Radar-Based Forward Collision Warning
Systems. Sensors 2022, 22, 875. [CrossRef]
156. Bagi, S.S.G.; Garakani, H.G.; Moshiri, B.; Khoshnevisan, M. Sensing Structure for Blind Spot Detection System in Vehicles. In
Proceedings of the 2019 International Conference on Control, Automation and Information Sciences (ICCAIS), Chengdu, China,
24–27 October 2019; pp. 1–6. [CrossRef]
157. Sugiura, T.; Watanabe, T. Probable Multi-hypothesis Blind Spot Estimation for Driving Risk Prediction. In Proceedings of the
2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 4295–4302.
[CrossRef]
158. Zhao, Y.; Bai, L.; Lyu, Y.; Huang, X. Camera-Based Blind Spot Detection with a General Purpose Lightweight Neural Network.
Electronics 2019, 8, 233. [CrossRef]
159. Chang, I.-C.; Chen, W.-R.; Kuo, X.-M.; Song, Y.-J.; Liao, P.-H.; Kuo, C. An Artificial Intelligence-based Proactive Blind Spot
Warning System for Motorcycles. In Proceedings of the 2020 International Symposium on Computer, Consumer and Control
(IS3C), Taichung City, Taiwan, 13–16 November 2020; pp. 404–407. [CrossRef]
160. Naik, A.; Naveen, G.V.V.S.; Satardhan, J.; Chavan, A. LiEBiD—A LIDAR based Early Blind Spot Detection and Warning System for
Traditional Steering Mechanism. In Proceedings of the 2020 International Conference on Smart Electronics and Communication
(ICOSEC), Trichy, India, 10–12 September 2020; pp. 604–609. [CrossRef]
161. Singh, N.; Ji, G. Computer vision assisted, real-time blind spot detection based collision warning system for two-wheelers.
In Proceedings of the 2021 5th International Conference on Electronics, Communication and Aerospace Technology (ICECA),
Coimbatore, India, 2–4 December 2021; pp. 1179–1184. [CrossRef]
162. Shete, R.G.; Kakade, S.K.; Dhanvijay, M. A Blind-spot Assistance for Forklift using Ultrasonic Sensor. In Proceedings of the 2021
IEEE International Conference on Technology, Research, and Innovation for Betterment of Society (TRIBES), Raipur, India, 17–19
December 2021; pp. 1–4. [CrossRef]
163. Schlegel, K.; Weissig, P.; Protzel, P. A blind-spot-aware optimization-based planner for safe robot navigation. In Proceedings of
the 2021 European Conference on Mobile Robots (ECMR), Bonn, Germany, 31 August–3 September 2021; pp. 1–8. [CrossRef]
164. Kundid, J.; Vranješ, M.; Lukač, Ž.; Popović, M. ADAS algorithm for creating a wider view of the environment with a blind spot
display for the driver. In Proceedings of the 2021 Zooming Innovation in Consumer Technologies Conference (ZINC), Novi Sad,
Serbia, 26–27 May 2021; pp. 219–224. [CrossRef]
165. Sui, S.; Li, T.; Chen, S. A-pillar Blind Spot Display Algorithm Based on Line of Sight. In Proceedings of the 2022 IEEE 5th
International Conference on Computer and Communication Engineering Technology (CCET), Beijing, China, 19–21 August 2022;
pp. 100–105. [CrossRef]
166. Wang, Z.; Jin, Q.; Wu, B. Design of a Vision Blind Spot Detection System Based on Depth Camera. In Proceedings of the 2022 IEEE
Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on
Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech),
Falerna, Italy, 12–15 September 2022; pp. 1–5. [CrossRef]
167. Zhou, J.; Hirano, M.; Yamakawa, Y. High-Speed Recognition of Pedestrians out of Blind Spot with Pre-detection of Potentially
Dangerous Regions. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC),
Macau, China, 8–12 October 2022; pp. 945–950. [CrossRef]
168. Seo, H.; Kim, H.; Lee, K.; Lee, K. Multi-Sensor-Based Blind-Spot Reduction Technology and a Data-Logging Method Using a
Gesture Recognition Algorithm Based on Micro E-Mobility in an IoT Environment. Sensors 2022, 22, 1081. [CrossRef]
169. Muzammel, M.; Yusoff, M.Z.; Saad, M.N.M.; Sheikh, F.; Awais, M.A. Blind-Spot Collision Detection System for Commercial
Vehicles Using Multi Deep CNN Architecture. Sensors 2022, 22, 6088. [CrossRef]
170. Flores, C.; Merdrignac, P.; de Charette, R.; Navas, F.; Milanés, V.; Nashashibi, F. A Cooperative Car-Following/Emergency Braking
System with Prediction-Based Pedestrian Avoidance Capabilities. IEEE Trans. Intell. Transp. Syst. 2019, 20, 1837–1846. [CrossRef]
171. Shin, S.-G.; Ahn, D.-R.; Baek, Y.-S.; Lee, H.-K. Adaptive AEB Control Strategy for Collision Avoidance Including Rear Vehicles. In
Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019;
pp. 2872–2878. [CrossRef]
172. Yang, W.; Zhang, X.; Lei, Q.; Cheng, X. Research on Longitudinal Active Collision Avoidance of Autonomous Emergency Braking
Pedestrian System (AEB-P). Sensors 2019, 19, 4671. [CrossRef] [PubMed]
173. Gao, Y.; Xu, Z.; Zhao, X.; Wang, G.; Yuan, Q. Hardware-in-the-Loop Simulation Platform for Autonomous Vehicle AEB Prototyping
and Validation. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC),
Rhodes, Greece, 20–23 September 2020; pp. 1–6. [CrossRef]
174. Guo, L.; Ge, P.; Sun, D. Variable Time Headway Autonomous Emergency Braking Control Algorithm Based on Model Predictive
Control. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 1794–1798.
[CrossRef]
175. Leyrer, M.L.; Stöckle, C.; Herrmann, S.; Dirndorfer, T.; Utschick, W. An Efficient Approach to Simulation-Based Robust Function
and Sensor Design Applied to an Automatic Emergency Braking System. In Proceedings of the 2020 IEEE Intelligent Vehicles
Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 617–622. [CrossRef]
176. Yu, L.; Wang, R.; Lu, Z. Autonomous Emergency Braking Control Based on Inevitable Collision State for Multiple Collision
Scenarios at Intersection. In Proceedings of the 2021 American Control Conference (ACC), New Orleans, LA, USA, 25–28 May
2021; pp. 148–153. [CrossRef]
177. Izquierdo, A.; Val, L.D.; Villacorta, J.J. Feasibility of Using a MEMS Microphone Array for Pedestrian Detection in an Autonomous
Emergency Braking System. Sensors 2021, 21, 4162. [CrossRef] [PubMed]
178. Jin, X.; Zhang, J.; Wu, Y.; Gao, J. Adaptive AEB control strategy for driverless vehicles in campus scenario. In Proceedings of the
2022 International Conference on Advanced Mechatronic Systems (ICAMechS), Toyama, Japan, 17–20 December 2022; pp. 47–52.
[CrossRef]
179. Mannam, N.P.B.; Rajalakshmi, P. Determination of ADAS AEB Car to Car and Car to Pedestrian Scenarios for Autonomous Vehi-
cles. In Proceedings of the 2022 IEEE Global Conference on Computing, Power and Communication Technologies (GlobConPT),
New Delhi, India, 23–25 September 2022; pp. 1–7. [CrossRef]
180. Guo, J.; Wang, Y.; Yin, X.; Liu, P.; Hou, Z.; Zhao, D. Study on the Control Algorithm of Automatic Emergency Braking System
(AEBS) for Commercial Vehicle Based on Identification of Driving Condition. Machines 2022, 10, 895. [CrossRef]
181. Li, G.; Görges, D. Ecological Adaptive Cruise Control and Energy Management Strategy for Hybrid Electric Vehicles Based on
Heuristic Dynamic Programming. IEEE Trans. Intell. Transp. Syst. 2019, 20, 3526–3535. [CrossRef]
182. Cheng, S.; Li, L.; Mei, M.-M.; Nie, Y.-L.; Zhao, L. Multiple-Objective Adaptive Cruise Control System Integrated with DYC. IEEE
Trans. Veh. Technol. 2019, 68, 4550–4559. [CrossRef]
183. Lunze, J. Adaptive Cruise Control with Guaranteed Collision Avoidance. IEEE Trans. Intell. Transp. Syst. 2019, 20, 1897–1907.
[CrossRef]
184. Woo, H.; Madokoro, H.; Sato, K.; Tamura, Y.; Yamashita, A.; Asama, H. Advanced Adaptive Cruise Control Based on Operation
Characteristic Estimation and Trajectory Prediction. Appl. Sci. 2019, 9, 4875. [CrossRef]
185. Zhang, S.; Zhuan, X. Study on Adaptive Cruise Control Strategy for Battery Electric Vehicle Considering Weight Adjustment.
Symmetry 2019, 11, 1516. [CrossRef]
186. Zhai, C.; Chen, X.; Yan, C.; Liu, Y.; Li, H. Ecological Cooperative Adaptive Cruise Control for a Heterogeneous Platoon of
Heavy-Duty Vehicles with Time Delays. IEEE Access 2020, 8, 146208–146219. [CrossRef]
187. Li, G.; Görges, D. Ecological Adaptive Cruise Control for Vehicles with Step-Gear Transmission Based on Reinforcement Learning.
IEEE Trans. Intell. Transp. Syst. 2020, 21, 4895–4905. [CrossRef]
188. Jia, Y.; Jibrin, R.; Görges, D. Energy-Optimal Adaptive Cruise Control for Electric Vehicles Based on Linear and Nonlinear Model
Predictive Control. IEEE Trans. Veh. Technol. 2020, 69, 14173–14187. [CrossRef]
189. Nie, Z.; Farzaneh, H. Adaptive Cruise Control for Eco-Driving Based on Model Predictive Control Algorithm. Appl. Sci. 2020,
10, 5271. [CrossRef]
190. Guo, L.; Ge, P.; Sun, D.; Qiao, Y. Adaptive Cruise Control Based on Model Predictive Control with Constraints Softening. Appl.
Sci. 2020, 10, 1635. [CrossRef]
191. Liu, Y.; Wang, W.; Hua, X.; Wang, S. Safety Analysis of a Modified Cooperative Adaptive Cruise Control Algorithm Accounting
for Communication Delay. Sustainability 2020, 12, 7568. [CrossRef]
192. Lin, Y.; McPhee, J.; Azad, N.L. Comparison of Deep Reinforcement Learning and Model Predictive Control for Adaptive Cruise
Control. IEEE Trans. Intell. Veh. 2021, 6, 221–231. [CrossRef]
193. Gunter, G.; Gloudemans, D.; Stern, R.E.; McQuade, S.; Bhadani, R.; Bunting, M.; Monache, M.L.D.; Lysecky, R.; Seibold, B.;
Sprinkle, J.; et al. Are Commercially Implemented Adaptive Cruise Control Systems String Stable? IEEE Trans. Intell. Transp. Syst.
2021, 22, 6992–7003. [CrossRef]
194. Sawant, J.; Chaskar, U.; Ginoya, D. Robust Control of Cooperative Adaptive Cruise Control in the Absence of Information About
Preceding Vehicle Acceleration. IEEE Trans. Intell. Transp. Syst. 2021, 22, 5589–5598. [CrossRef]
195. Yang, Z.; Wang, Z.; Yan, M. An Optimization Design of Adaptive Cruise Control System Based on MPC and ADRC. Actuators
2021, 10, 110. [CrossRef]
196. Anselma, P.G. Optimization-Driven Powertrain-Oriented Adaptive Cruise Control to Improve Energy Saving and Passenger
Comfort. Energies 2021, 14, 2897. [CrossRef]
197. Chen, C.; Guo, J.; Guo, C.; Chen, C.; Zhang, Y.; Wang, J. Adaptive Cruise Control for Cut-In Scenarios Based on Model Predictive
Control Algorithm. Appl. Sci. 2021, 11, 5293. [CrossRef]
198. Hu, C.; Wang, J. Trust-Based and Individualizable Adaptive Cruise Control Using Control Barrier Function Approach with
Prescribed Performance. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6974–6984. [CrossRef]
199. Yan, R.; Jiang, R.; Jia, B.; Huang, J.; Yang, D. Hybrid Car-Following Strategy Based on Deep Deterministic Policy Gradient and
Cooperative Adaptive Cruise Control. IEEE Trans. Autom. Sci. Eng. 2022, 19, 2816–2824. [CrossRef]
200. Zhang, Y.; Wu, Z.; Zhang, Y.; Shang, Z.; Wang, P.; Zou, Q.; Zhang, X.; Hu, J. Human-Lead-Platooning Cooperative Adaptive
Cruise Control. IEEE Trans. Intell. Transp. Syst. 2022, 23, 18253–18272. [CrossRef]
201. Boddupalli, S.; Rao, A.S.; Ray, S. Resilient Cooperative Adaptive Cruise Control for Autonomous Vehicles Using Machine
Learning. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15655–15672. [CrossRef]
202. Kamal, M.A.S.; Hashikura, K.; Hayakawa, T.; Yamada, K.; Imura, J.-i. Adaptive Cruise Control with Look-Ahead Anticipation for
Driving on Freeways. Appl. Sci. 2022, 12, 929. [CrossRef]
203. Li, Z.; Deng, Y.; Sun, S. Adaptive Cruise Predictive Control Based on Variable Compass Operator Pigeon-Inspired Optimization.
Electronics 2022, 11, 1377. [CrossRef]
204. Petri, A.-M.; Petreuş, D.M. Adaptive Cruise Control in Electric Vehicles with Field-Oriented Control. Appl. Sci. 2022, 12, 7094.
[CrossRef]
205. Deng, L.; Yang, M.; Hu, B.; Li, T.; Li, H.; Wang, C. Semantic Segmentation-Based Lane-Level Localization Using Around View
Monitoring System. IEEE Sens. J. 2019, 19, 10077–10086. [CrossRef]
206. Rasdi, M.H.F.B.; Hashim, N.N.W.B.N.; Hanizam, S. Around View Monitoring System with Motion Estimation in ADAS Applica-
tion. In Proceedings of the 2019 7th International Conference on Mechatronics Engineering (ICOM), Putrajaya, Malaysia, 30–31
October 2019; pp. 1–5. [CrossRef]
207. Hanizam, S.; Hashim, N.N.W.N.; Abidin, Z.Z.; Zaki, H.F.M.; Rahman, H.A.; Mahamud, N.H. Motion Estimation on Homoge-
nous Surface for Around View Monitoring System. In Proceedings of the 2019 7th International Conference on Mechatronics
Engineering (ICOM), Putrajaya, Malaysia, 30–31 October 2019; pp. 1–6. [CrossRef]
208. Im, G.; Kim, M.; Park, J. Parking Line Based SLAM Approach Using AVM/LiDAR Sensor Fusion for Rapid and Accurate Loop
Closing and Parking Space Detection. Sensors 2019, 19, 4811. [CrossRef]
209. Hsu, C.-M.; Chen, J.-Y. Around View Monitoring-Based Vacant Parking Space Detection and Analysis. Appl. Sci. 2019, 9, 3403.
[CrossRef]
210. Lee, Y.H.; Kim, W.-Y. An Automatic Calibration Method for AVM Cameras. IEEE Access 2020, 8, 192073–192086. [CrossRef]
211. Akita, K.; Hayama, M.; Kyutoku, H.; Ukita, N. AVM Image Quality Enhancement by Synthetic Image Learning for Supervised
Deblurring. In Proceedings of the 2021 17th International Conference on Machine Vision and Applications (MVA), Aichi, Japan,
25–27 July 2021; pp. 1–5. [CrossRef]
212. Lee, J.H.; Lee, D.-W. A Novel AVM Calibration Method Using Unaligned Square Calibration Boards. Sensors 2021, 21, 2265.
[CrossRef] [PubMed]
213. Lee, Y.; Park, M. Around-View-Monitoring-Based Automatic Parking System Using Parking Line Detection. Appl. Sci. 2021,
11, 11905. [CrossRef]
214. Lee, S.; Lee, D.; Kee, S.-C. Deep-Learning-Based Parking Area and Collision Risk Area Detection Using AVM in Autonomous
Parking Situation. Sensors 2022, 22, 1986. [CrossRef]
215. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of
the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361.
[CrossRef]
216. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237.
[CrossRef]
217. Chang, M.-F.; Ramanan, D.; Hays, J.; Lambert, J.; Sangkloy, P.; Singh, J.; Bak, S.; Hartnett, A.; Wang, D.; Carr, P.; et al. Argoverse:
3D Tracking and Forecasting with Rich Maps. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8740–8749. [CrossRef]
218. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A
Multimodal Dataset for Autonomous Driving. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11618–11628. [CrossRef]
219. Lyu, S.; Chang, M.-C.; Du, D.; Wen, L.; Qi, H.; Li, Y.; Wei, Y.; Ke, L.; Hu, T.; Del Coco, M.; et al. UA-DETRAC 2017: Report of
AVSS2017 & IWT4S Challenge on Advanced Traffic Monitoring. In Proceedings of the 2017 14th IEEE International Conference
on Advanced Video and Signal Based Surveillance (AVSS), Lecce, Italy, 29 August–1 September 2017; pp. 1–7. [CrossRef]
220. Wen, L.; Du, D.; Cai, Z.; Lei, Z.; Chang, M.C.; Qi, H.; Lim, J.; Yang, M.H.; Lyu, S. UA-DETRAC: A New Benchmark and Protocol
for Multi-Object Detection and Tracking. Comput. Vis. Image Underst. 2020, 193, 102907. [CrossRef]
221. Goyette, N.; Jodoin, P.-M.; Porikli, F.; Konrad, J.; Ishwar, P. Changedetection.net: A new change detection benchmark dataset. In
Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence,
RI, USA, 16–21 June 2012; pp. 1–8. [CrossRef]
222. Razakarivony, S.; Jurie, F. Vehicle detection in aerial imagery: A small target detection benchmark. J. Vis. Commun. Image Represent.
2015, 26, 2289–2302. [CrossRef]
223. Kenk, M.A.; Hassaballah, M. DAWN: Vehicle detection in adverse weather nature. arXiv 2020, arXiv:2008.05402.
224. Lin, T.Y.; Maire, M.; Belongie, S.; Bourdev, L.; Girshick, R.; Hays, J.; Perona, P.; Zitnick, C.L.; Dollár, P. Microsoft COCO: Common
Objects in Context. In Computer Vision—ECCV 2014 Lecture Notes in Computer Science; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars,
T., Eds.; Springer: Cham, Switzerland, 2014; Volume 8693. [CrossRef]
225. OpenStreetMap contributors. OpenStreetMap Database [PostgreSQL Via API]; OpenStreetMap Foundation: Cambridge, UK, 2023.
226. Li, J.; Sun, W. Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning. arXiv 2020,
arXiv:2003.02437.
227. Song, H.; Liang, H.; Li, H.; Dai, Z.; Yun, X. Vision-based vehicle detection and counting system using deep learning in highway
scenes. Eur. Transp. Res. Rev. 2019, 11, 51. [CrossRef]
228. The Third “Aerospace Cup” National Innovation and Creativity Competition Preliminary Round, Proposition 2, Track 2, Optical
Target Recognition, Preliminary Data Set. Available online: https://www.atrdata.cn/#/customer/match/2cdfe76d-de6c-48f1
-abf9-6e8b7ace1ab8/bd3aac0b-4742-438d-abca-b9a84ca76cb3?questionType=model (accessed on 15 March 2023).
229. Zhang, S.; Benenson, R.; Schiele, B. CityPersons: A Diverse Dataset for Pedestrian Detection. In Proceedings of the 2017 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4457–4465. [CrossRef]
230. Ferryman, J.; Shahrokni, A. PETS2009: Dataset and challenge. In Proceedings of the 2009 Twelfth IEEE International Workshop
on Performance Evaluation of Tracking and Surveillance, Snowbird, UT, USA, 7–12 December 2009; pp. 1–6. [CrossRef]
231. Dollár, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian Detection: An Evaluation of the State of the Art. IEEE Trans. Pattern Anal.
Mach. Intell. 2012, 34, 743–761. [CrossRef] [PubMed]
232. Hwang, S.; Park, J.; Kim, N.; Choi, Y.; Kweon, I.S. Multispectral pedestrian detection: Benchmark dataset and baseline. In
Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June
2015; pp. 1037–1045. [CrossRef]
233. Houben, S.; Stallkamp, J.; Salmen, J.; Schlipsing, M.; Igel, C. Detection of traffic signs in real-world images: The German traffic
sign detection benchmark. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX,
USA, 4–9 August 2013; pp. 1–8.
234. Mathias, M.; Timofte, R.; Benenson, R.; Van Gool, L. Traffic sign recognition—How far are we from the solution? In Proceedings
of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–8.
235. Sivaraman, S.; Trivedi, M.M. A general active-learning framework for on-road vehicle recognition and tracking. IEEE Trans. Intell.
Transp. Syst. 2010, 11, 267–276. [CrossRef]
236. Temel, D.; Kwon, G.; Prabhushankar, M.; AlRegib, G. CURE-TSD: Challenging unreal and real environments for traffic sign
recognition. In Proceedings of the NeurIPS Workshop on Machine Learning for Intelligent Transportation Systems, Long Beach,
CA, USA, 4–9 December 2017; pp. 1–6.
237. Zhu, Z.; Liang, D.; Zhang, S.; Huang, X.; Li, B.; Hu, S. Traffic-Sign Detection and Classification in the Wild. In Proceedings of the
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2110–2118.
[CrossRef]
238. Zhang, J.; Zou, X.; Kuang, L.D.; Wang, J.; Sherratt, R.S.; Yu, X. CCTSDB 2021: A more comprehensive traffic sign detection
benchmark. Hum.-Centric Comput. Inf. Sci. 2022, 12, 23. [CrossRef]
239. Bai, C.; Wu, K.; Wang, D.; Yan, M. A Small Object Detection Research Based on Dynamic Convolution Neural Network. Available
online: https://assets.researchsquare.com/files/rs-1116930/v1_covered.pdf?c=1639594752 (accessed on 14 August 2023).
240. Pan, X.; Shi, J.; Luo, P.; Wang, X.; Tang, X. Spatial as deep: Spatial CNN for traffic scene understanding. In Proceedings of the
AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [CrossRef]
241. TuSimple Benchmark. Available online: https://github.com/TuSimple/tusimple-benchmark (accessed on 1 January 2021).
242. Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. BDD100K: A Diverse Driving Dataset
for Heterogeneous Multitask Learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2633–2642. [CrossRef]
243. Mvirgo. Mvirgo/MLND-Capstone: Lane Detection with Deep Learning—My Capstone Project for Udacity’s ML Nanodegree.
GitHub. Available online: https://github.com/mvirgo/MLND-Capstone (accessed on 12 July 2022).
244. Bosch Automated Driving, Unsupervised Llamas Lane Marker Dataset. 2020. Available online: https://unsupervised-llamas.
com/llamas/ (accessed on 2 April 2023).
245. Passos, B.T.; Cassaniga, M.; Fernandes, A.M.R.; Medeiros, K.B.; Comunello, E. Cracks and Potholes in Road Images. Mendeley
Data, V4. 2020. Available online: https://data.mendeley.com/datasets/t576ydh9v8/4 (accessed on 13 August 2023).
246. Waymo LLC. Waymo Open Dataset. Available online: https://waymo.com/open (accessed on 29 July 2023).
247. Ess, A.; Leibe, B.; Van Gool, L. Depth and Appearance for Mobile Scene Analysis. In Proceedings of the 2007 IEEE 11th
International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–20 October 2007; pp. 1–8. [CrossRef]
248. Yen-Zhang, H. Building Traffic Signs Opens the Dataset in Taiwan and Verifies It by Convolutional Neural Network. Ph.D. Thesis,
National Taichung University of Science and Technology, Taichung, Taiwan, 2018.
249. Zhao, Z.-Q.; Zheng, P.; Xu, S.-T.; Wu, X. Object Detection with Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst.
2019, 30, 3212–3232. [CrossRef]
250. Khan, M.Q.; Lee, S. A Comprehensive Survey of Driving Monitoring and Assistance Systems. Sensors 2019, 19, 2574. [CrossRef]
251. Haq, Q.M.U.; Haq, M.A.; Ruan, S.-J.; Liang, P.-J.; Gao, D.-Q. 3D Object Detection Based on Proposal Generation Network Utilizing
Monocular Images. IEEE Consum. Electron. Mag. 2022, 11, 47–53. [CrossRef]
252. Haq, M.A.; Ruan, S.-J.; Shao, M.-E.; Haq, Q.M.U.; Liang, P.-J.; Gao, D.-Q. One Stage Monocular 3D Object Detection Utilizing
Discrete Depth and Orientation Representation. IEEE Trans. Intell. Transp. Syst. 2022, 23, 21630–21640. [CrossRef]
253. Faisal, M.M.; Mohammed, M.S.; Abduljabar, A.M.; Abdulhussain, S.H.; Mahmmod, B.M.; Khan, W.; Hussain, A. Object Detection
and Distance Measurement Using AI. In Proceedings of the 2021 14th International Conference on Developments in eSystems
Engineering (DeSE), Sharjah, United Arab Emirates, 7–10 December 2021; pp. 559–565. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.