22 Paper-25
$T_{\text{optimal}} = \arg\max_{T} \, \sigma^{2}_{\text{between}}(T)$
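The rule above selects the threshold that maximizes the between-class variance, i.e. an Otsu-style exhaustive search over candidate thresholds. A minimal sketch in Python/NumPy is given below; the function name and the assumption of an 8-bit grayscale input are illustrative, not part of the paper's implementation:

```python
import numpy as np

def optimal_threshold(gray):
    """Exhaustively search T in [1, 255] for the value that maximizes
    the between-class variance of the grayscale histogram (Otsu-style)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()                      # normalized histogram
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue                              # degenerate split
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```

For a clearly bimodal image the returned threshold falls between the two intensity modes, separating foreground from background.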
The evaluation of the proposed YOLOv8 model featured extensive experiments focused on vehicle density and helmet detection across a diverse dataset highlighting difficult situations. Challenging conditions included varying weather, lighting environments with occlusions, and distinct vehicle varieties. As summarized in Table 1, the results demonstrated the model's effectiveness, with elevated mean Average Precision, precision, and recall indicating robust detection and classification abilities spanning motorcycles, automobiles, and protective headgear. The model performed well across the range of complicated scenarios present in the dataset, showing potential for real-world deployment where such difficulties regularly occur.

Fig. 5. Real-time Vehicle Detection and Counting in a Traffic Scene

TABLE 1. MODEL PERFORMANCE COMPARISON FOR VEHICLE DENSITY ESTIMATION

TABLE 2. TRAFFIC DENSITY AND VEHICLE COUNTS ACROSS DIFFERENT LOCATIONS AND TIMES

Location        Date and Time       Four-wheeler count  Four-wheeler density  Two-wheeler count  Two-wheeler density
JNTUH           04/02/25 11:45 AM   3                   0.07                  3                  0.07
Kukkatpally     01/02/25 05:25 PM   4                   0.09                  5                  0.11
VNR VJIET       28/01/25 03:10 PM   6                   0.14                  3                  0.07
PragathiNagar   22/01/25 10:00 AM   2                   0.04                  8                  0.18

Fig. 6. Traffic Density Comparison in Different Locations

Fig. 7. Vehicle Count Comparison in Different Locations

The table and graphs display the results of a traffic analysis project conducted using YOLOv8. They highlight data from various locations, including vehicle counts and densities for both two-wheelers and four-wheelers. The observations, recorded at specific times and places, provide insights into traffic patterns, helping improve traffic monitoring and management.

VI. CONCLUSION AND FUTURE SCOPE

The proposed YOLOv8-based system demonstrated exceptional performance in real-time vehicle density estimation, achieving high accuracy and robustness across diverse traffic scenarios. By leveraging the power of deep learning, this system provides valuable insights for traffic management and analysis.

Future research directions include addressing the challenge of handling occlusions, optimizing the model's inference speed for real-time deployment, developing techniques for multi-object tracking, and integrating the model with other sensors to improve detection accuracy and robustness. By exploring these avenues, we can further enhance the capabilities of vehicle density estimation systems and contribute to safer and more efficient transportation systems.

REFERENCES

[1] Farooq, M.S., & Kanwal, S., 2023. Traffic Road Congestion System using by the Internet of Vehicles (IoV). arXiv preprint arXiv:2306.00395, pp. 1-9.
[2] Gupta, U., Kumar, U., Kumar, S., Shariq, M., & Kumar, R., 2022. Vehicle speed detection system in highway. International Research Journal of Modernization in Engineering Technology and Science, 4(5), pp. 406-411.
[3] Costa, L.R., Rauen, M.S., & Fronza, A.B., 2020. Car speed estimation based on image scale factor. Forensic Science International, 310, p. 110229.
[4] Berna, S.J., Swathi, S., Devi, C.Y., & Varalakshmi, D., 2020. Distance and speed estimation of moving object using video processing. International Journal for Research in Applied Science and Engineering Technology (IJRASET), 8(5), pp. 2605-2612.
[5] Dalal, N., & Triggs, B., 2005. Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Volume 1, San Diego, CA, USA, pp. 886-893. doi:10.1109/CVPR.2005.177.
[6] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A., 2016. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 779-788. doi:10.1109/CVPR.2016.91.
[7] Jocher, G., Chaurasia, A., & Qiu, J., 2023. YOLOv8 Docs by Ultralytics (Version 8.0.0). Available from: https://docs.ultralytics.com [Last accessed on 2023 Aug 07].
[8] Redmon, J., & Farhadi, A., 2017. YOLO9000: Better, faster, stronger. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, pp. 6517-6525. doi:10.1109/CVPR.2017.690.
[9] Wang, C.-Y., Bochkovskiy, A., & Liao, H.-Y.M., 2022. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696.
[10] Jocher, G., Chaurasia, A., & Qiu, J., 2023. YOLOv8: The latest version of YOLO with improved performance. Available from: https://github.com/ultralytics/yolov8 [Last accessed on 2024 Aug 26].
[11] He, K., Zhang, X., Ren, S., & Sun, J., 2016. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 770-778. doi:10.1109/CVPR.2016.90.
[12] Lin, T.-Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S., 2017. Feature pyramid networks for object detection. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, pp. 936-944. doi:10.1109/CVPR.2017.106.
[13] Redmon, J., & Farhadi, A., 2018. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
[14] Ge, Z., Liu, S., Wang, F., Li, Z., & Sun, J., 2021. YOLOX: Exceeding YOLO series in 2021. arXiv preprint arXiv:2107.08430.
[15] Ge, Z., Liu, S., Li, Z., & Sun, J., 2021. OTA: Optimal transport assignment for object detection. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, pp. 303-312. doi:10.1109/CVPR46437.2021.00037.
[16] Li, X., Wang, W., Wu, L., Chen, S., Hu, X., Li, J., Tang, J., & Yang, J., 2020. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. Advances in Neural Information Processing Systems, 33.
[17] Wang, C.-Y., Liao, H.-Y.M., Wu, Y.-H., & Tseng, H.-W., 2021. CSPNet: A new backbone that can enhance learning capability of CNN. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 390-401. doi:10.1109/CVPR46437.2021.00049.
[18] Zhang, Z., Xu, J., Zhang, Z., & Liu, S., 2020. A survey on deep learning-based object detection. Frontiers in Computer Science.
[19] Woo, S., Park, J., Lee, J.-Y., & Kweon, I.S., 2018. CBAM: Convolutional block attention module. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 3-19.
[20] Hu, J., Shen, L., & Sun, G., 2018. Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7132-7141. doi:10.1109/CVPR.2018.00746.
[21] Berna, S.J., Swathi, S., & Devi, C.Y., 2020. Distance and speed estimation of moving object using video processing. International Journal for Research in Applied Science and Engineering Technology (IJRASET), 8(5), pp. 2605-2612.
[22] Costa, L.R., Rauen, M.S., & Fronza, A.B., 2020. Car speed estimation based on image scale factor. Forensic Science International, 310, p. 110229.
[23] Lin, C.J., Jeng, S.Y., & Liao, H.W., 2021. A real-time vehicle counting, speed estimation, and classification system based on virtual detection zone and YOLO. Mathematical Problems in Engineering, 2021, Article ID 1577614.
[24] Gupta, U., Kumar, U., Kumar, S., Shariq, M., & Kumar, R., 2022. Vehicle speed detection system in highways. International Research Journal of Modernization in Engineering Technology and Science, 4(5), pp. 406-411.