
Development of an Intelligent Transportation System for Detection, Classification, and Density Analysis Using YOLOv8

Chevella Anil Kumar, Dept. of Electronics and Communication, VNR VJIET, Hyderabad (chevellaanilkumar@gmail.com)
Vinay Kumar Chikoti, Softtech Engineer Resources Inc (Vinay.kf570@gmail.com)
Juluru Ramsai, Dept. of Electronics and Communication, VNR VJIET, Hyderabad (ramsaijuluru@gmail.com)
Katla Arun Kumar, Dept. of Electronics and Communication, VNR VJIET, Hyderabad (katlaarun18@gmail.com)
Sree Ram Gandla, E-Giants Technologies LLC (Sreerammech334@gmail.com)
Rana Eswar Surya, Dept. of Electronics and Communication, VNR VJIET, Hyderabad (ranaeswarsurya@gmail.com)

Abstract—

Traffic congestion is a persistent problem in cities, causing longer travel times, higher fuel use, and poorer air quality. To effectively manage traffic, it is essential to have real-time information about traffic conditions, such as the number of vehicles on the road and their movement. This paper introduces a new method that uses the YOLOv8 object detection model to address these issues. By accurately identifying and tracking vehicles in real time, our system provides valuable data on traffic patterns, aiding traffic engineers and policymakers in making informed decisions.

Keywords— YOLOv8, Traffic congestion, Vehicle detection, Multi-object tracking, Robustness

I. INTRODUCTION

The fast growth of two-wheelers has caused substantial changes in the urban transportation sector. Road safety has been impacted and congestion has been made worse by the complex traffic scenarios brought about by this rise in vehicle density [1][2].
By facilitating more effective traffic management and flow, real-time vehicle density measurement has become a crucial tool for resolving these traffic issues. In metropolitan settings, this type of analysis aids in reducing traffic and improving general traffic management techniques [3][4]. The creation of systems for vehicle density analysis has been greatly influenced by recent developments in computer vision and machine learning.
Object identification has historically relied on methods like Histograms of Oriented Gradients (HOG) [5], but more recent advancements include Convolutional Neural Networks (CNNs) and models like YOLO (You Only Look Once) [6][7].
YOLO, specifically YOLOv5, YOLOv7, and YOLOv8, has significantly advanced object detection, offering real-time performance and improved accuracy. This unified architecture enables simultaneous detection and classification, making it ideal for rapid and precise vehicle density analysis. Techniques such as CSPNet have further improved both the accuracy and speed of object detection [8][9].
In addition to YOLO, other object detection methods and enhancements have contributed to the field. Techniques like Feature Pyramid Networks (FPNs) and the application of attention mechanisms [10][11] have enhanced the performance of object detection models, making them more effective for real-time applications. Recently, there has been progress in creating more lightweight and efficient models, such as YOLOv6 and YOLOX, which are designed to deliver high performance while optimizing resource usage and are tuned for specific industrial applications [12][13]. The integration of these advanced techniques into vehicle density detection systems represents a significant leap forward in traffic management. By leveraging these state-of-the-art models, it is possible to develop robust systems that not only monitor traffic conditions but also enhance traffic flow and reduce congestion [14][15]. This functionality is crucial for managing the increasing complexity of urban traffic systems [16][17].

II. MOTIVATION OF THE PAPER

As urban areas expand and the number of vehicles increases, efficient traffic management is crucial for improving road safety and optimizing vehicle flow. Understanding and analyzing vehicle density are key to managing traffic patterns effectively, allowing authorities to take timely actions and plan infrastructure improvements. Traditional methods of traffic monitoring often fall short in providing real-time insights due to accuracy limitations and an inability to adapt to fluctuating traffic conditions. By incorporating advanced technologies such as computer vision and machine learning, innovative approaches can significantly enhance the precision of vehicle detection and traffic density analysis. These technologies bring a new level of accuracy, making it possible to achieve more reliable results in monitoring and managing traffic.
The motivation behind this paper stems from the need for a robust system that leverages the capabilities of YOLOv8, a state-of-the-art object detection framework known for its high speed and accuracy. YOLOv8 allows for real-time processing and the simultaneous detection of various vehicle sizes, which is crucial for traffic management. Accurately detecting different vehicle sizes not only aids in understanding traffic flow but also facilitates better safety measures by identifying potential risks associated with larger vehicles, such as trucks and buses, that may impact smaller vehicles and pedestrians.
Moreover, integrating vehicle density analysis with the ability to categorize vehicles by size can provide valuable insights for traffic regulation, such as determining the need for weight limits in certain areas or optimizing traffic light timing based on the types of vehicles present. By addressing these critical aspects, this paper aims to contribute to the development of intelligent traffic management systems that enhance urban mobility, reduce congestion, and improve overall road safety.

III. RELATED WORKS

Although considerable research has been conducted on vehicle detection and speed estimation, the topic of vehicle size estimation has seen limited attention. Essential functions like traffic detection, speed estimation, and traffic management play key roles in building effective systems. For example, Berna, Swathi, and Devi (2020) introduced a method for rapid and real-time estimation using a fixed camera combined with a center of gravity approach. This method effectively separates variables and thresholds, calculating speed by measuring distance over defined time intervals, and includes vehicle length estimation. Their approach achieved an impressive 90% accuracy in vehicle length estimation [18]. Costa, Rauen, and Fronza (2020) proposed another technique, employing an image-based vehicle speed measurement system that eliminates the need for reference signals. Rather than relying on traditional methods, they used the Image Scale Factor (ISF) to calculate the distance of vehicles from the camera. Improving the accuracy of vehicle speed estimation continues to be a key focus in recent research, with some systems reporting very low deviations. For instance, the system of Costa, Rauen, and Fronza (2020) achieved a maximum error of 2.2%. Meanwhile, in another study, Lin, Jeng, and Liao (2021) proposed a real-time traffic monitoring system integrating the Gaussian Mixture Model (GMM) for background subtraction along with YOLOv4 to classify vehicles, achieving a classification accuracy of 98.91% and estimating the speeds of individual vehicles with an average error rate of 7.6% [19]. Similarly, another video-based vehicle speed measurement system was presented [20], which makes use of the Haar cascade technique for detection and a correlation tracker to measure speed, obtaining 95% and 92% accuracy respectively. Likewise, for vehicle detection and classification, Alzubaidi et al. (2021) performed multiple comparisons of the usefulness of Haar cascade versus YOLOv3 in their work Machine Learning Techniques for Vehicle Detection, with findings showing enhanced recognition of vehicle ground truth [21].

IV. METHODOLOGY

This approach implements real-time vehicle detection and density estimation using YOLOv8, an advanced object detection model. The process begins with pre-processing of input images, where non-adaptive thresholding is applied to reduce noise and improve image clarity. The threshold algorithm is then utilized to effectively segment vehicles from the background, enhancing detection accuracy. Once the features are extracted, YOLOv8 processes them for real-time vehicle detection, accurately classifying and localizing vehicles within the scene. The model's architecture allows for precise detection in various traffic conditions, making it ideal for vehicle density analysis and real-time traffic monitoring. This approach offers a robust solution for urban traffic management, helping to alleviate congestion and optimize traffic flow efficiently [22].

A. YOLOv8 for Vehicle Detection

YOLOv8 (You Only Look Once, version 8) represents a significant advancement in object detection technology, particularly well-suited for real-time vehicle detection. Its high accuracy is attributed to a sophisticated architecture and extensive training on large datasets, enabling it to detect vehicles of various sizes and orientations, even in complex traffic scenarios. The model is designed to process information quickly, making it suitable for traffic management systems that need instant responses [23].

Fig. 1. YOLOv8

YOLOv8's advanced convolutional layers and attention mechanisms enhance its ability to extract and utilize features from input images, leading to more precise vehicle detection and density estimation. Fig. 1 illustrates its architecture and workflow, highlighting the stages of image processing, feature extraction, and vehicle detection.

B. Preprocessing

Before vehicle detection with YOLOv8, pre-processing techniques play a crucial role in improving image quality. One such technique is denoising using non-adaptive thresholding. This method removes noise from grayscale images by applying a single global threshold value, separating the foreground (vehicles) from the background. This pre-processing step ensures that YOLOv8 receives clean, relevant input data, thus enhancing detection accuracy.
Another effective pre-processing technique is histogram equalization with CLAHE (Contrast Limited Adaptive Histogram Equalization). CLAHE enhances image contrast by adjusting intensity values within small regions, which improves YOLOv8's ability to detect vehicles, especially under varied lighting conditions. Unlike traditional histogram equalization, CLAHE reduces oversaturation and noise.
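Both pre-processing steps can be illustrated with a short, dependency-free sketch. This is not the paper's implementation: in practice CLAHE is usually applied via OpenCV's `cv2.createCLAHE`, and the plain histogram equalization below is the non-tiled, non-clipped baseline that CLAHE refines; all function names here are illustrative.

```python
import numpy as np

def global_threshold(gray, t=127):
    """Non-adaptive thresholding: one global value splits foreground from background."""
    return np.where(gray >= t, 255, 0).astype(np.uint8)

def equalize_hist(gray):
    """Plain histogram equalization: remap intensities so their CDF becomes ~uniform.
    CLAHE refines this idea by equalizing small tiles and clipping each tile's
    histogram to limit oversaturation. Assumes a non-constant uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Normalize the CDF to [0, 255]; clip guards the unused low entries.
    lut = np.round(255 * np.clip((cdf - cdf_min) / (cdf[-1] - cdf_min), 0, 1))
    return lut.astype(np.uint8)[gray]

# Synthetic low-contrast frame: dark background with one bright "vehicle" patch.
frame = np.full((64, 64), 40, dtype=np.uint8)
frame[16:48, 16:48] = 180
binary = global_threshold(frame)      # 0 background, 255 foreground
equalized = equalize_hist(frame)      # contrast stretched to the full range
```

On this two-level frame the equalized output spans the full 0 to 255 range, which is the effect CLAHE produces locally, tile by tile.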
Fig. 2. Proposed System Architecture for Vehicle Density Analysis

C. Segmentation Using Threshold Algorithm

Accurate vehicle detection and density analysis rely on effective segmentation. To enhance edge detection, the Improved Prewitt Edge Detection algorithm is used, which improves edge identification by calculating gradient magnitudes with a more precise approach. This technique selects the maximum value among eight possible gradients, resulting in clearer edge detection and reduced noise interference; the enhanced edges contribute to more precise vehicle localization.
For segmentation in image processing, the threshold algorithm provides a straightforward method for splitting foreground objects from the background. This involves converting a grayscale image to a binary image by selecting a suitable threshold. The simplest approach is a global threshold, applied uniformly across the whole image. The threshold value can be chosen based on the image histogram, which represents the distribution of pixel intensities. A common method for selecting the threshold T is Otsu's method, which aims to minimize the within-class variance, or equivalently maximize the separation between the two pixel classes [24]. The within-class variance is

    σ_w²(T) = ω₀(T)·σ₀²(T) + ω₁(T)·σ₁²(T)

The probabilities of the two classes are calculated as follows:

    ω₀(T) = Σ_{i=0}^{T} p(i),    ω₁(T) = 1 − ω₀(T)

Next, the class means are determined:

    μ₀(T) = ( Σ_{i=0}^{T} i·p(i) ) / ω₀(T)
    μ₁(T) = ( Σ_{i=T+1}^{L−1} i·p(i) ) / ω₁(T)

The objective of Otsu's method is to find the threshold T that maximizes the between-class variance:

    σ_between²(T) = ω₀(T) · ω₁(T) · (μ₀(T) − μ₁(T))²

The optimal threshold is then obtained by maximizing this variance:

    T_optimal = argmax_T σ_between²(T)

Once the threshold is determined, the original grayscale image can be segmented into a binary image using the equation:

    I_binary(x, y) = 1 if I(x, y) ≥ T, 0 otherwise

where I(x, y) denotes the pixel value at coordinates (x, y) in the original image.

D. Feature Extraction and Integration

Feature extraction is another critical step, with Convolutional Neural Networks (CNNs) playing a central role. CNNs are used to pre-process images and extract significant features that YOLOv8 utilizes for vehicle detection. By highlighting these features, CNNs ensure that YOLOv8 receives well-processed input data, improving detection outcomes.
Convolutional Neural Networks are complex architectures designed to efficiently analyze visual information. CNNs stack convolutional and pooling layers that extract salient features while reducing computational expense. The initial layers identify basic elements, applying filters to input images to form multidimensional feature maps. Patterns emerge as neighboring pixels are analyzed and their correlations detected. Subsequent max pooling consolidates the most prominent activations per region into simplified representations. This downsampling decreases dimensionality and the number of parameters, helping avoid overfitting the model to training data.
Deeper layers combine lower-level features, detecting increasingly complex patterns through additional convolutions and pooling. The final layers classify images or detect objects based on the hierarchical collection of features distinctly characterizing each class. This transformation can be expressed mathematically as:

    y = f(W * x + b)

To introduce non-linearity into the model, activation functions such as ReLU (Rectified Linear Unit) are applied:

    f(x) = max(0, x)

After the convolution operation, a Batch Normalization (BN) layer normalizes the activations to stabilize and accelerate training. The normalization process is given by:

    âᵢ = (aᵢ − μ_B) / √(σ_B² + ε)

followed by scaling and shifting:

    yᵢ = γ·âᵢ + β

E. Vehicle Classification

The YOLOv8 pretrained model detects objects in video frames and classifies them according to the COCO dataset into 80 object categories. Since our system works on vehicles, we need to detect only the four classes that represent the most common vehicle types, namely car, motorcycle, bus, and truck; these classes correspond to COCO indices 2, 3, 5, and 7, respectively. YOLOv8 generates class scores for the four predefined vehicle classes, and the classification result is provided as the class number of the detected object; class indices from the COCO dataset facilitate classification, with each index corresponding to a specific class. Visual differentiation is achieved through unique color assignments, as seen in Fig. 3.

TABLE 1. MODEL PERFORMANCE COMPARISON FOR VEHICLE DENSITY ESTIMATION

Model                 | Recall (%) | mAP (%) | F1 Score (%)
YOLOv8n               | 60.5       | 62.4    | 62.8
YOLOv8s               | 68.3       | 70.0    | 70.0
YOLOv8m               | 75.6       | 75.0    | 75.0
YOLOv8l               | 80.2       | 79.5    | 79.5
YOLOv8x               | 85.3       | 83.9    | 83.8
Custom YOLOv8n model  | 79.0       | 76.5    | 77.7
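The Otsu derivation of Section IV-C translates directly into a few lines of NumPy. This is a minimal sketch, not the paper's implementation: the class boundary is taken as i < T (rather than i ≤ T) so that the computed threshold is consistent with the binarization rule I(x, y) ≥ T.

```python
import numpy as np

def otsu_threshold(gray):
    """Evaluate sigma_between^2(T) = w0(T)*w1(T)*(mu0(T)-mu1(T))^2 for every T
    and return its argmax. Class 0 holds intensities i < T, class 1 holds i >= T,
    matching the binarization rule I(x, y) >= T."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                          # p(i): intensity probabilities
    cum = np.cumsum(p)
    omega0 = np.concatenate(([0.0], cum[:-1]))     # w0(T) = sum_{i<T} p(i)
    omega1 = 1.0 - omega0                          # w1(T)
    ip = np.cumsum(np.arange(256) * p)             # running sum of i*p(i)
    ip0 = np.concatenate(([0.0], ip[:-1]))         # sum_{i<T} i*p(i)
    mu_total = ip[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = ip0 / omega0                         # class-0 mean
        mu1 = (mu_total - ip0) / omega1            # class-1 mean
        sigma_b = omega0 * omega1 * (mu0 - mu1) ** 2
    sigma_b = np.nan_to_num(sigma_b)               # empty classes contribute nothing
    return int(np.argmax(sigma_b))

def binarize(gray, t):
    """I_binary(x, y) = 1 if I(x, y) >= T else 0."""
    return (gray >= t).astype(np.uint8)

# Bimodal test image: background at intensity 30, a 16x16 foreground block at 200.
img = np.full((32, 32), 30, dtype=np.uint8)
img[8:24, 8:24] = 200
T = otsu_threshold(img)
mask = binarize(img, T)
```

Since `np.argmax` returns the first maximizer when the between-class variance plateaus between the two modes, the result is deterministic (here T = 31, the smallest threshold that separates the two intensity levels).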

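The four-class filtering described under E. Vehicle Classification can be sketched in plain Python. The detection tuples below are mock stand-ins for YOLOv8 outputs (in the Ultralytics API they would come from calling the model on a frame); the helper names and the 0.25 score cutoff are illustrative assumptions, not values from the paper.

```python
# COCO indices for the four vehicle classes used by the system.
VEHICLE_CLASSES = {2: "car", 3: "motorcycle", 5: "bus", 7: "truck"}

def filter_vehicles(detections, min_score=0.25):
    """Keep only vehicle detections: (class_id, score, bbox) -> (label, score, bbox)."""
    kept = []
    for class_id, score, bbox in detections:
        if class_id in VEHICLE_CLASSES and score >= min_score:
            kept.append((VEHICLE_CLASSES[class_id], score, bbox))
    return kept

def count_by_class(labeled):
    """Tally detections per vehicle class for the density report."""
    counts = {name: 0 for name in VEHICLE_CLASSES.values()}
    for label, _, _ in labeled:
        counts[label] += 1
    return counts

# Mock per-frame output: a person (0) and a traffic light (9) are discarded,
# and the low-score motorcycle falls below the cutoff.
raw = [(2, 0.91, (10, 10, 60, 40)), (0, 0.88, (5, 5, 20, 50)),
       (7, 0.80, (80, 30, 160, 90)), (9, 0.60, (200, 0, 210, 30)),
       (3, 0.19, (120, 60, 140, 80))]
vehicles = filter_vehicles(raw)
counts = count_by_class(vehicles)
```

Keeping the class-to-label mapping in one dictionary also gives a natural place to attach the per-class display colors mentioned above.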
Fig. 3. Real-time Vehicle Detection and Counting in a Traffic Scene

F. Vehicle Counting Stage

The vehicle counting process operates in real time, tallying vehicles as they pass through designated regions of interest (ROIs). Counting is performed by tracking the center point of each detected vehicle as it crosses the predefined ROI boundaries. The center point is derived from the bounding box coordinates, as expressed in the equation:

    (cx, cy) = ((x + (x + w)) / 2, (y + (y + h)) / 2)

The system detects objects in a video stream and tracks their bounding boxes. Each bounding box is defined by its top-left corner (x, y) and its width w and height h, from which the system calculates the center point (cx, cy) of each object. The system also counts the number of vehicles in each class and displays this information on the video. Additionally, it logs the total count of each vehicle class to a CSV file. The video stream with overlaid information is displayed continuously.

V. RESULTS AND DISCUSSION

The evaluation of the proposed YOLOv8 model featured extensive experiments focused on vehicle density and helmet detection across a diverse dataset highlighting difficult situations. Challenging conditions incorporated varying weather, lighting environments with occlusions, and distinct vehicle varieties. As summarized in Table 1, results demonstrated the model's effectiveness, with elevated mean Average Precision, precision, and recall indicating robust detection and classification abilities spanning motorcycles, automobiles, and protective headgear. The model performed well across the range of complicated scenarios seen in the dataset, showing potential for real-world implementation where such difficulties regularly occur.

Fig. 4. Performance Comparison

The YOLOv8 model achieved impressive results across all evaluation metrics. It exhibited high precision and recall rates, indicating its ability to correctly identify objects while minimizing false positives and false negatives. The model's strong performance in terms of mAP and F1 score further underscores its robustness and overall accuracy.

Fig. 5. Real-time Vehicle Detection and Counting in a Traffic Scene

TABLE 2. TRAFFIC DENSITY AND VEHICLE COUNTS ACROSS DIFFERENT LOCATIONS AND TIMES

Location      | Date and Time     | Four-wheeler count | Four-wheeler density | Two-wheeler count | Two-wheeler density
JNTUH         | 04/02/25 11:45 AM | 3 | 0.07 | 3 | 0.07
Kukkatpally   | 01/02/25 05:25 PM | 4 | 0.09 | 5 | 0.11
VNR VJIET     | 28/01/25 03:10 PM | 6 | 0.14 | 3 | 0.07
PragathiNagar | 22/01/25 10:00 AM | 2 | 0.04 | 8 | 0.18
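The counting stage of Section IV-F, center points from boxes, ROI crossing, and per-class CSV logging, can be sketched as follows. This is an illustrative sketch, not the paper's code: the ROI is modeled as a single horizontal count line (one common choice), and the track tuples (label, previous box, current box) stand in for the output of a multi-object tracker.

```python
import csv
import io

def center(box):
    """Center (cx, cy) of a box given as top-left (x, y) plus width w and height h."""
    x, y, w, h = box
    return (x + (x + w)) / 2, (y + (y + h)) / 2

def count_line_crossings(tracks, line_y=100):
    """Count a vehicle once when its center crosses the horizontal ROI line."""
    counts = {}
    for label, prev_box, cur_box in tracks:
        _, prev_cy = center(prev_box)
        _, cur_cy = center(cur_box)
        if prev_cy < line_y <= cur_cy:          # crossed downward through the line
            counts[label] = counts.get(label, 0) + 1
    return counts

def log_counts_csv(counts, stream):
    """Write per-class totals to a CSV stream (a file on disk in the real system)."""
    writer = csv.writer(stream)
    writer.writerow(["class", "count"])
    for label in sorted(counts):
        writer.writerow([label, counts[label]])

# Two tracked vehicles between consecutive frames: the car's center crosses
# y = 100 (75 -> 110); the bus stays above the line (35 -> 55).
tracks = [("car", (40, 60, 40, 30), (40, 95, 40, 30)),
          ("bus", (200, 10, 80, 50), (200, 30, 80, 50))]
counts = count_line_crossings(tracks)
buf = io.StringIO()
log_counts_csv(counts, buf)
```

Checking `prev_cy < line_y <= cur_cy` counts each vehicle exactly once per crossing, no matter how many frames its center lingers near the line.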
Fig. 6. Traffic Density Comparison at Different Locations

Fig. 7. Vehicle Count Comparison at Different Locations

The table and graphs display the results of a traffic analysis project conducted using YOLOv8. They highlight data from various locations, including vehicle counts and densities for both two-wheelers and four-wheelers. The observations, recorded at specific times and places, provide insights into traffic patterns, helping improve traffic monitoring and management.

VI. CONCLUSION AND FUTURE SCOPE

The proposed YOLOv8-based system demonstrated exceptional performance in real-time vehicle density estimation, achieving high accuracy and robustness across diverse traffic scenarios. By leveraging the power of deep learning, this system provides valuable insights for traffic management and analysis.
Future research directions include addressing the challenge of handling occlusions, optimizing the model's inference speed for real-time deployment, developing techniques for multi-object tracking, and integrating the model with other sensors to improve detection accuracy and robustness. By exploring these avenues, we can further enhance the capabilities of vehicle density estimation systems and contribute to safer and more efficient transportation systems.

REFERENCES

[1] Farooq, M.S., & Kanwal, S., 2023. Traffic Road Congestion System using by the Internet of Vehicles (IoV). Networking and Internet Architecture, Cornell University, New York, arXiv:2306.00395, pp. 1-9.
[2] Gupta, U., Kumar, U., Kumar, S., Shariq, M., & Kumar, R., 2022. Vehicle speed detection system in highway. International Research Journal of Modernization in Engineering Technology and Science, 4(5), pp. 406-411.
[3] Costa, L.R., Rauen, M.S., & Fronza, A.B., 2020. Car speed estimation based on image scale factor. Forensic Science International, 310, p. 110229.
[4] Berna, S.J., Swathi, S., Devi, C.Y., & Varalakshmi, D., 2020. Distance and speed estimation of moving object using video processing. International Journal for Research in Applied Science and Engineering Technology (IJRASET), 8(5), pp. 2605-2612.
[5] Dalal, N., & Triggs, B., 2005. Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Volume 1, San Diego, CA, USA, pp. 886-893. doi:10.1109/CVPR.2005.177.
[6] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A., 2016. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 779-788. doi:10.1109/CVPR.2016.91.
[7] Jocher, G., Chaurasia, A., & Qiu, J., 2023. YOLOv8 Docs by Ultralytics (Version 8.0.0). Available from: https://docs.ultralytics.com [Last accessed on 2023 Aug 07].
[8] Redmon, J., & Farhadi, A., 2017. YOLO9000: Better, faster, stronger. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, pp. 6517-6525. doi:10.1109/CVPR.2017.690.
[9] Wang, C.-Y., Bochkovskiy, A., & Liao, H.-Y. M., 2022. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696.
[10] Jocher, G., Chaurasia, A., & Qiu, J., 2023. YOLOv8: The latest version of YOLO with improved performance. Available from: https://github.com/ultralytics/yolov8 [Last accessed on 2024 Aug 26].
[11] He, K., Zhang, X., Ren, S., & Sun, J., 2016. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 770-778. doi:10.1109/CVPR.2016.90.
[12] Lin, T.-Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S., 2017. Feature pyramid networks for object detection. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, pp. 936-944. doi:10.1109/CVPR.2017.106.
[13] Redmon, J., & Farhadi, A., 2018. YOLOv3: An incremental improvement. CoRR abs/1804.02767. URL: http://arxiv.org/abs/1804.02767.
[14] Wang, C.-Y., & Liao, H.-Y. M., 2021. YOLOX: Exceeding YOLO series in 2021. arXiv preprint arXiv:2107.08430.
[15] Ge, Z., Liu, S., Li, Z., & Sun, J., 2021. OTA: Optimal transport assignment for object detection. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, pp. 303-312. doi:10.1109/CVPR46437.2021.00037.
[16] Li, X., Wang, W., Wu, L., Chen, S., Hu, X., Li, J., Tang, J., & Yang, J., 2020. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. Advances in Neural Information Processing Systems 33.
[17] Wang, C.-Y., Liao, H.-Y. M., Wu, Y.-H., & Tseng, H.-W., 2021. CSPNet: A new backbone that can enhance learning capability of CNN. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 390-401. doi:10.1109/CVPR46437.2021.00049.
[18] Zhang, Z., Xu, J., Zhang, Z., & Liu, S., 2020. A survey on deep learning-based object detection. Frontiers in Computer Science.
[19] Woo, S., Park, J., Lee, J.-Y., & Kweon, I.S., 2018. CBAM: Convolutional block attention module. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 3-19.
[20] Hu, J., Shen, L., & Sun, G., 2018. Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7132-7141. doi:10.1109/CVPR.2018.00746.
[21] Berna, S.J., Swathi, S., & Devi, C.Y., 2020. Distance and speed estimation of moving object using video processing. International Journal for Research in Applied Science and Engineering Technology (IJRASET), 8(5), pp. 2605-2612.
[22] Costa, L.R., Rauen, M.S., & Fronza, A.B., 2020. Car speed estimation based on image scale factor. Forensic Science International, 310, p. 110229.
[23] Lin, C.J., Jeng, S.Y., & Liao, H.W., 2021. A real-time vehicle counting, speed estimation, and classification system based on virtual detection zone and YOLO. Mathematical Problems of Applied System Innovations for IoT Applications, 2021, p. 1577614.
[24] Gupta, U., Kumar, U., Kumar, S., Shariq, M., & Kumar, R., 2022. Vehicle speed detection system in highways. International Research Journal of Modernization in Engineering Technology and Science, 4(5), pp. 406-411.
