
Smart Traffic Management System using YOLO

MAJOR PROJECT REPORT (PHASE-2)


Submitted in partial fulfillment for the award of the degree of
BACHELOR OF TECHNOLOGY
(Department of Computer Science and Engineering)

Submitted to
INDIAN INSTITUTE OF INFORMATION TECHNOLOGY
BHOPAL (M.P.)

Submitted by
Nikhil Sihare (21U02067)

Under the supervision of


Dr. Saurabh Jain
Assistant Professor
(CSE)

May-2025
INDIAN INSTITUTE OF INFORMATION TECHNOLOGY
BHOPAL (M.P.)

CERTIFICATE

This is to certify that the work embodied in this report entitled “Smart Traffic
Management System using YOLO” has been satisfactorily completed by
Nikhil Sihare (21U02067). It is a bonafide piece of work, carried out under
our guidance in the Department of Computer Science and Engineering,
Indian Institute of Information Technology, Bhopal, in partial fulfillment
of the Bachelor of Technology during the academic year 2024-25.

Date: 30 April 2025

Dr. Saurabh Jain
Asst. Professor, CSE
IIIT Bhopal (M.P.)

Dr. Saurabh Jain
Major Project Coordinator
Department of Computer Science and Engineering
IIIT Bhopal (M.P.)
INDIAN INSTITUTE OF
INFORMATION TECHNOLOGY
BHOPAL (M.P.)

DECLARATION

We hereby declare that the major project report entitled “Smart Traffic
Management System using YOLO” is presented in partial fulfillment of the
requirements for the award of the degree of Bachelor of Technology in Computer Science
and Engineering. It is an authentic documentation of our original work carried out under the
able guidance of Dr. Saurabh Jain. The work has been carried out entirely at the Indian
Institute of Information Technology, Bhopal. The project work presented has not been
submitted in part or whole for the award of any degree or professional diploma in any other
institute or organization.

We hereby declare that the facts mentioned above are true to the best of our knowledge. In
case of any unlikely discrepancy, we will be the ones to take responsibility.

Nikhil Sihare (21U02067) Sign


AREA OF WORK

Our project focuses on the creation of a Smart Traffic Management System using Convolutional
Neural Networks (CNNs), advanced computer vision algorithms like YOLO (You Only Look
Once), and tracking algorithms like DeepSort. Unlike traditional traffic control methods that use
fixed schedules, our system dynamically adjusts signal timings based on real-time vehicle density
data.

CNNs play an important role in our project by allowing for accurate and efficient vehicle detection
and tracking. By incorporating algorithms such as YOLO and DeepSort, we hope to improve the
system's ability to detect and track vehicles in real time, even in complex traffic scenarios.

Our project aims to transform traditional traffic management systems by combining CNN-based
computer vision and real-time analytics. Our innovative approach promises to improve traffic flow,
reduce congestion, and increase overall road safety, ushering in a new era of intelligent
transportation systems.
TABLE OF CONTENTS
S.no Title Page No.

Certificate

Declaration

Abstract

1 Introduction 2

2 Literature review or Survey 3

3 Methodology & Work Description 4

4 Proposed algorithm 6

5 Proposed flowchart/ DFD/ Block Diagram 8

6 Tools & Technology Used 10

7 Implementation & Coding 12

8 Result Analysis 15

9 Conclusion & Future Scope 18


10 References 20
LIST OF FIGURES

Fig   Description                                                          Page no.
1     Working of YOLO with DeepSort to count vehicles                      6
2     Flow Diagram                                                         8
3     Output after the analysis on traffic video                           15
4.a   Original input frame from the video stream
4.b   Output frame after applying object detection and tracking, with
      bounding boxes, vehicle IDs, class labels and confidence scores      16
4.c   Four lane traffic                                                    17


LIST OF TABLES

Table No Description Page no.


1 Table showing Lane Density and corresponding signal color 17
ABSTRACT

Traffic congestion is a persistent challenge in urban areas, leading to increased travel times, environmental
pollution, and economic losses. This project presents a Smart Traffic Management System that leverages
cutting-edge computer vision and deep learning techniques to address these challenges. The system employs
the You Only Look Once (YOLO) object detection algorithm for accurate vehicle classification and
counting, and the DeepSort multiple-object tracking algorithm for comprehensive traffic flow analysis.

The workflow begins with acquiring real-time video feeds from strategically placed cameras at intersections
or roadways. These video frames are then processed through YOLO for vehicle detection and classification
into categories such as cars, trucks, buses, and motorcycles. YOLO's precise object localization capabilities
enable accurate vehicle counting, providing valuable insights into traffic density.

To analyze traffic flow dynamics, the detected vehicles are passed to the DeepSort algorithm for robust
multiple object tracking. DeepSort assigns unique identities to each vehicle and tracks their movements
across consecutive frames, enabling the system to monitor traffic patterns, trajectories, and velocities.

Based on the real-time traffic data obtained, the system can optimize signal timings at intersections, identify
congestion hotspots, and implement proactive traffic management strategies. Data visualization tools present
the processed information in an intuitive manner, allowing traffic operators to make informed decisions and
monitor traffic conditions in real-time.

The Smart Traffic Management System demonstrates the potential of integrating advanced computer vision
and deep learning techniques for efficient traffic management. By providing accurate vehicle classification,
precise vehicle counting, and comprehensive traffic flow analysis, the system contributes to improved
mobility, reduced congestion, and enhanced overall transportation efficiency in urban environments.

Future work involves extending the system to handle complex four-way lane scenarios, integrating with
existing traffic management infrastructure, exploring edge computing solutions for real-time processing, and
incorporating additional data sources such as weather and event information to further enhance the system's
capabilities and adaptability.

INTRODUCTION

In the field of urban infrastructure management, the need for intelligent solutions to reduce traffic
congestion and improve road safety has grown. Traditional traffic control methods, which rely on fixed
time intervals, frequently fail to adapt to real-time conditions, causing inefficiencies and frustration for
commuters. However, emerging computer vision technologies, particularly the You Only Look Once
(YOLO) algorithm, combined with density-based traffic management, present a promising avenue for
revolutionizing traffic control. [1]

The Smart Traffic Management System is a ground-breaking approach to congestion reduction and
traffic flow optimization. This innovative system uses the power of YOLO and density-based traffic
analysis to dynamically adjust traffic signals at intersections based on actual vehicle density. Unlike its
static counterparts, which follow predetermined schedules, the Smart Traffic Management System
continuously monitors live video feeds to accurately assess traffic conditions in real time. [1]

The system uses sophisticated YOLO-based vehicle detection and tracking algorithms such as DeepSort
to analyze lane-specific car density. This granular understanding enables it to intelligently allocate green
signals to lanes with the highest vehicle concentration, thereby optimizing traffic flow and reducing
motorist wait times. By prioritizing congested areas and dynamically adapting signal timings, the system
improves overall road safety and commuter experience. [1]

This report delves into the conceptual framework, technical implementation, and potential benefits of the
Smart Traffic Management System. We hope to highlight its importance as a transformative tool in
modern urban planning and infrastructure management by delving deeply into its underlying principles
and functionalities. By leveraging cutting-edge technologies like YOLO and density-based algorithms,
we envision a future in which traffic management is not only more efficient and effective, but also more
responsive to the changing needs of urban environments. [1]

LITERATURE REVIEW

Traffic congestion is a significant challenge, leading to increased travel times and environmental
impacts[1]. Researchers have explored smart traffic management solutions leveraging computer
vision and deep learning. The YOLO algorithm [3] has been effective for real-time vehicle detection
and classification. Complementing this, the DeepSort algorithm [4] enables robust multi-object
tracking, utilizing techniques like the Hungarian algorithm.[5]

Proposed systems integrate YOLO and DeepSort for tasks like traffic monitoring, congestion
detection, and signal optimization [2]. Researchers have explored graph neural networks for traffic
flow prediction and deep learning for vehicle re-identification across cameras. Integrating
real-time data into existing infrastructure, such as coordinating signals, has also been investigated,
and edge computing solutions have been explored for low-latency processing.

Despite this progress, challenges remain in handling complex environments, infrastructure integration,
and real-time processing. Foundational resources on deep learning [1], [7], object detection [11],
and object tracking [4] provide valuable insights for further research and development in this field.

PROPOSED METHODOLOGY AND WORK
DESCRIPTION

In response to the inefficiencies of traditional traffic management systems, we propose the development
of a smart traffic management system that utilizes vehicle density as a key metric for signal control.
Traditional systems often rely on fixed timing intervals for signal changes, leading to congestion and
longer travel times, especially during peak hours. Our solution aims to address these challenges by
dynamically adjusting signal timings based on real-time vehicle density at intersections.

The core technology behind our system includes advanced computer vision and deep learning
algorithms. We will leverage the You Only Look Once (YOLO) algorithm for vehicle detection and
classification. YOLO's fast and accurate object detection capabilities will enable our system to classify
vehicles into various categories such as cars, trucks, and buses, allowing for more granular control over
traffic flow.

Additionally, we will integrate the DeepSort library for real-time object tracking. DeepSort excels at
tracking multiple objects simultaneously, which is crucial for analyzing traffic movement patterns and
identifying congestion hotspots. By accurately tracking vehicles, our system will be able to provide
valuable insights into traffic flow dynamics and make informed decisions regarding signal control.

The development process will involve several stages, including algorithm integration, module
development, system integration, and pilot deployment. We will conduct extensive testing to ensure the
accuracy and reliability of our algorithms in real-world traffic scenarios. Once the system is ready, we
will deploy it at select intersections for pilot testing and evaluation.

Throughout the development process, scalability and adaptability will be key considerations. We aim to
create a solution that can be easily scaled up for broader deployment across multiple intersections and
urban areas. Continuous monitoring and optimization will ensure that the system remains effective and
responsive to changing traffic conditions.

In conclusion, our vehicle density-based smart traffic management system represents a significant
advancement in traffic control technology. By leveraging cutting-edge algorithms and real-time data
analysis, we aim to improve traffic flow efficiency, reduce congestion, and enhance overall road safety
in urban areas. With careful development and testing, we believe that our system has the potential to
revolutionize traffic management practices and improve the quality of life for commuters.

Work Flow:

Step 1: Video Acquisition: The system will acquire real-time video feeds from strategically
placed cameras at intersections or roadways. These video feeds serve as the input data for the
subsequent processing steps.[4]

Step 2: Object Detection and Classification with YOLO: The acquired video frames will be
processed through the YOLO (You Only Look Once) object detection algorithm. YOLO will
scan each frame and detect the presence of vehicles, categorizing them into different classes such
as cars, trucks, buses, motorcycles, etc. YOLO's convolutional neural network architecture
allows for accurate and efficient object detection and classification. [4]

Step 3: Vehicle Counting: Leveraging YOLO's object detection capabilities, the system will
count the number of vehicles detected in each frame. By tracking the inflow and outflow of
vehicles at intersections or specific regions of interest, the system can compute real-time vehicle
counts. This data is crucial for analysing traffic density and making informed decisions
regarding signal timings. [4]

Step 4: Multiple Object Tracking with DeepSort: The detected vehicles from the previous
step will be passed to the DeepSort algorithm for multiple object tracking. DeepSort assigns
unique identities to each vehicle and tracks their movements across consecutive video frames.
This tracking capability enables the system to analyze traffic flow patterns, trajectories, and
velocities of individual vehicles. [4]

Step 5: Traffic Flow Analysis: With the tracking data provided by DeepSort, the system can
perform comprehensive traffic flow analysis. This includes identifying congestion hotspots,
monitoring traffic movement patterns, and detecting potential bottlenecks or disruptions in
traffic flow. The system can also compute metrics such as average vehicle speeds and travel
times for specific road segments. [4]

Step 6: Signal Timing Optimization: Based on the real-time traffic data obtained from the
previous steps, the system can optimize signal timings at intersections. By analysing traffic
density, flow patterns, and congestion levels, the system can dynamically adjust signal cycles
and green light durations to facilitate smoother traffic flow and reduce waiting times for
commuters. [4]

Step 7: Data Visualization and Reporting: The system will integrate data visualization tools to
present the processed traffic data in an intuitive and user-friendly manner. Traffic operators will
have access to a real-time dashboard displaying vehicle counts, traffic flow animations, congestion
heat maps, and signal status information. Additionally, the system will generate detailed reports
and analytics, allowing for historical data analysis and performance evaluation. [4]

Step 8: Real-time Monitoring and Decision Support: Traffic operators will monitor the real-
time data visualizations and reports provided by the system. Based on the insights gained, they
can make informed decisions regarding traffic management strategies, such as adjusting signal
timings, deploying traffic control personnel, or implementing temporary road closures or
diversions. [4]

Step 9: Continuous Improvement and System Updates: The system will be designed to
continuously learn and improve its performance over time. By analysing historical data and
incorporating feedback from traffic operators, the system can refine its object detection, tracking,
and traffic flow analysis algorithms. Additionally, the system can be updated with new data
sources or integrated with other smart city initiatives for enhanced traffic management
capabilities. [4]

Throughout the workflow, the system will leverage the strengths of YOLO for accurate object
detection and classification, and DeepSort for robust multiple object tracking. The combination
of these cutting-edge techniques, along with real-time data processing, visualization, and
decision support, will enable efficient and data-driven traffic management, ultimately leading to
improved mobility and reduced congestion in urban areas. [4]
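The nine workflow steps can be sketched as a single per-frame processing loop. The sketch below is illustrative only: detect_vehicles and update_tracks are hypothetical stand-ins for the real YOLOv5 and DeepSort calls, which require the trained models and a live video feed.

```python
def detect_vehicles(frame):
    # Hypothetical stand-in for YOLOv5 (Step 2): returns
    # (bounding_box, class_name, confidence) tuples for one frame.
    return [((100, 200, 150, 230), "car", 0.91),
            ((300, 220, 380, 260), "truck", 0.84)]

def update_tracks(detections, _memory={}, _next_id=[0]):
    # Hypothetical stand-in for DeepSort (Step 4): assigns a persistent
    # ID per box. Mutable defaults keep state across calls (demo shortcut).
    tracks = []
    for box, cls, conf in detections:
        if box not in _memory:
            _memory[box] = _next_id[0]
            _next_id[0] += 1
        tracks.append((_memory[box], box, cls))
    return tracks

def process_frame(frame, green_threshold=5):
    detections = detect_vehicles(frame)        # Step 2: detect + classify
    vehicle_count = len(detections)            # Step 3: count
    tracks = update_tracks(detections)         # Step 4: track
    # Steps 5-6 (toy rule): request green when density exceeds a threshold.
    signal = "GREEN" if vehicle_count > green_threshold else "RED"
    return tracks, vehicle_count, signal

tracks, count, signal = process_frame(frame=None)
print(count, signal)   # 2 RED for this mock frame
```

The real pipeline replaces the two stubs with model inference; the surrounding loop structure stays the same.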

PROPOSED ALGORITHMS

Fig.1. Working of YOLO with DeepSort to count vehicles

YOLOv5 (You Only Look Once Version 5):


YOLOv5 is an object detection algorithm based on a convolutional neural network (CNN) architecture.
It is designed to detect and classify objects in real-time from images or video streams. The algorithm
works as follows: [2]

 Input: The algorithm takes an input image or video frame.


 Preprocessing: The input is resized to the dimensions required by the YOLOv5 model.
 Feature Extraction: The preprocessed input is passed through a series of convolutional layers,
which extract relevant features from the image.
 Object Detection: The feature maps from the convolutional layers are passed to the detection
head, which consists of several parallel prediction layers. These layers predict bounding boxes
for potential objects, their corresponding class probabilities, and confidence scores.
 Non-Maximum Suppression (NMS): To eliminate redundant and overlapping bounding boxes,
the NMS algorithm is applied. It selects the most confident bounding box for each object and
removes the others.
 Output: The algorithm outputs the remaining bounding boxes, class labels, and confidence
scores for the detected objects.

NOTE: YOLOv5 is pre-trained on various datasets, such as COCO (Common Objects in Context), to
recognize a wide range of object classes, including vehicles like cars, trucks, and buses.[2]
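The Non-Maximum Suppression step above can be illustrated with a minimal pure-Python sketch. This is a simplified greedy version (YOLOv5's actual implementation uses a vectorized torchvision routine); boxes are assumed to be in (x1, y1, x2, y2) format.

```python
def iou(a, b):
    # Intersection-over-Union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    # Greedy NMS: keep the highest-scoring box, drop boxes overlapping it.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two overlapping detections of the same car plus one distant truck:
boxes = [(10, 10, 60, 60), (12, 12, 62, 62), (200, 200, 260, 250)]
scores = [0.9, 0.7, 0.8]
print(nms(boxes, scores))  # [0, 2]: the duplicate box 1 is suppressed
```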

DeepSort:
DeepSort is an object tracking algorithm that associates detections from an object detector (like
YOLOv5) across multiple frames in a video sequence. It assigns unique IDs to each tracked object,
enabling the counting and tracking of objects over time. DeepSort combines two key components: [3]

 Deep Association Metric: This is a deep neural network that computes a feature vector for each
detected object. These feature vectors are used to measure the similarity between detections in
consecutive frames, enabling the association of detections belonging to the same object.
 Kalman Filter: DeepSort employs a Kalman filter to predict the future locations of tracked
objects based on their previous movements. This helps in handling occlusions and missed
detections.
 Hungarian Algorithm: The Hungarian algorithm in DeepSort is used to efficiently assign
detected objects to existing tracks. It works by minimizing the cost of assigning each object to a
track while satisfying constraints. This optimization process involves iteratively updating a cost
matrix and selecting the lowest-cost assignments. By employing the Hungarian algorithm,
DeepSort achieves accurate and robust object tracking in real-time scenarios.
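The Kalman filter's role above, predicting where a track will be in the next frame, can be illustrated with a deliberately simplified one-dimensional constant-velocity filter. DeepSort's real filter is eight-dimensional (box center, aspect ratio, height, and their velocities); this sketch only conveys the predict/update rhythm, and the scalar gain and velocity correction are simplifications.

```python
class Kalman1D:
    # Minimal constant-velocity Kalman filter for one coordinate.
    def __init__(self, x0, p=10.0, q=1e-2, r=1.0):
        self.x, self.v = x0, 0.0           # position and velocity estimate
        self.p, self.q, self.r = p, q, r   # covariance, process & meas. noise

    def predict(self, dt=1.0):
        # Project the state forward; used when the detector misses a frame.
        self.x += self.v * dt
        self.p += self.q
        return self.x

    def update(self, z):
        # Blend the prediction with a new detection z via the Kalman gain.
        k = self.p / (self.p + self.r)
        old_x = self.x
        self.x += k * (z - self.x)
        self.v += k * (self.x - old_x)     # crude velocity correction
        self.p *= (1 - k)
        return self.x

kf = Kalman1D(x0=0.0)
for z in [1.0, 2.0, 3.0]:     # vehicle moving right ~1 px per frame
    kf.predict()
    kf.update(z)
print(round(kf.predict()))    # predicted next position, near 4
```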

The YOLOv5 and DeepSort models work together in the following way:

 Object Detection: YOLOv5 detects objects (vehicles) in each frame of the input video or image
sequence.

 Vehicle Classification: For each detected object, YOLOv5 classifies it into a specific vehicle
type (e.g., car, truck, bus) based on its pre-trained weights.

 Object Tracking: The bounding boxes and class labels from YOLOv5 are passed to DeepSort,
which associates detections across frames and assigns unique IDs to each tracked vehicle.
 Vehicle Counting: As vehicles enter or exit a defined region of interest (ROI) in the video frame,
their unique IDs are recorded. This information is used to update the count of vehicles moving in
each direction (e.g., up count, down count).

 Vehicle Type Counting: In addition to counting the total number of vehicles, the code keeps
separate counts for different vehicle types (e.g., car count, truck count) based on the class labels
provided by YOLOv5.

 Visualization: The detected and tracked vehicles, along with their IDs and class labels, are
visualized on the output video or image frames. The vehicle counts and other relevant
information can also be displayed on the output.

The pre-trained weights used by YOLOv5 play a crucial role in the classification of vehicle types. These
weights are obtained by training the YOLOv5 model on a large dataset of annotated images containing
various vehicle types and other objects.

By combining the object detection capabilities of YOLOv5 with the tracking and association abilities of
DeepSort, the algorithm can effectively detect, classify, and count vehicles in real-time video streams or
image sequences. The pre-trained weights of YOLOv5 provide the necessary knowledge for vehicle type
classification, while DeepSort ensures accurate tracking and unique identification of vehicles, enabling
reliable counting and analysis of traffic density.
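The association step in this pipeline, matching new detections to existing tracks, can be sketched with a greedy nearest-center matcher. This is a simplified stand-in: DeepSort proper combines a learned appearance metric with the Hungarian algorithm for an optimal assignment, while the sketch below just pairs each track with the closest unmatched detection.

```python
def center(box):
    # Center point of an (x1, y1, x2, y2) box.
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def associate(tracks, detections, max_dist=50.0):
    # Greedy nearest-center matching on a distance cost matrix.
    pairs = []
    for ti, t in enumerate(tracks):
        for di, d in enumerate(detections):
            (tx, ty), (dx, dy) = center(t), center(d)
            pairs.append((((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5, ti, di))
    matches, used_t, used_d = [], set(), set()
    for dist, ti, di in sorted(pairs):      # cheapest pairs first
        if dist <= max_dist and ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti); used_d.add(di)
    return matches

tracks = [(10, 10, 60, 60), (200, 200, 260, 250)]      # last known boxes
detections = [(205, 202, 262, 252), (12, 11, 61, 62)]  # new frame
print(associate(tracks, detections))   # [(0, 1), (1, 0)]
```

Matched detections inherit the track's ID, which is what makes the vehicle counts reliable: a car seen in thirty consecutive frames is still counted once.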

PROPOSED FLOWCHART/ DFD/ BLOCK DIAGRAM

Fig.2. Flow Diagram

 The system starts by capturing live traffic video from multiple lanes, typically using cameras
installed at the traffic junction or road segment being monitored.
 The live video feed from Lane 1, Lane 2, Lane 3, and Lane 4 is captured simultaneously.
 The captured video is then processed using the YOLO (You Only Look Once) algorithm,
which detects the vehicles in each frame and classifies them into different types, such as
cars, trucks, buses, etc.
 The detections are then passed to the DeepSort tracking algorithm, which assigns unique
IDs and follows multiple vehicles simultaneously in the video stream.
 With the vehicles identified and classified, the system counts the number of vehicles present in
each lane at any given time.

 Based on the vehicle count per lane, the system evaluates the vehicle density for each lane. It
compares the vehicle density in a specific lane (Lane L) against the vehicle density of the other
three lanes.
 If the vehicle density in Lane L is higher than that of the other three lanes, a green signal is
displayed for that lane, giving the most congested lane priority to clear.
 The remaining lanes are shown a red signal until the green lane's density is reduced or another
lane's density exceeds it.
 This traffic density information can also be used by traffic management systems to take further
actions, such as adjusting signal timings, diverting traffic, or informing drivers about congestion
levels and alternative routes.
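Consistent with the lane-density table in the result section, the selection rule amounts to giving the green phase to the densest lane. A minimal sketch (lane names and counts mirror that table):

```python
def assign_signals(lane_counts):
    # GREEN for the lane with the highest vehicle density; RED for the rest.
    green_lane = max(lane_counts, key=lane_counts.get)
    return {lane: ("GREEN" if lane == green_lane else "RED")
            for lane in lane_counts}

counts = {"Top": 10, "Bottom": 3, "Left": 8, "Right": 4}
print(assign_signals(counts))
# {'Top': 'GREEN', 'Bottom': 'RED', 'Left': 'RED', 'Right': 'RED'}
```

In deployment this function would be re-evaluated on every density update, so the green phase migrates as lane densities change.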

TOOLS AND TECHNOLOGY USED

a) YOLOv5 Object Detection Library:


YOLOv5 is a state-of-the-art object detection model known for its speed and accuracy. Key
advantages of YOLOv5 for object detection tasks include:
1. Real-Time Performance: YOLOv5 is designed for real-time inference, allowing for fast
and efficient object detection in video streams and live feeds.
2. High Accuracy: YOLOv5 achieves high levels of accuracy on object detection
benchmarks, making it suitable for applications where precision is critical.
3. Easy Deployment: YOLOv5 models are lightweight and easy to deploy, making them
ideal for resource-constrained environments such as edge devices and embedded systems.

b) CNN (Convolutional Neural Network) [6]:


1. YOLO (You Only Look Once) primarily utilizes deep learning technology, specifically
convolutional neural networks (CNNs), for object detection tasks. The YOLO algorithm
is based on a single neural network that processes the entire image at once and predicts
bounding boxes and class probabilities directly.
2. This approach allows YOLO to achieve real-time object detection by efficiently
leveraging the power of CNNs. Additionally, YOLO may also utilize other supporting
technologies such as GPU acceleration to further optimize its performance for real-time
applications.

c) DeepSort Object Tracking Library:


DeepSort is a deep learning-based object tracking algorithm that excels at tracking multiple
objects in real time. Key capabilities of DeepSort for object tracking include:
1. Multi-Object Tracking (MOT): DeepSort can simultaneously track multiple objects across
frames, maintaining unique identities for each object throughout the video sequence.
2. Feature Extraction: DeepSort extracts rich features from tracked objects, enabling robust
and accurate tracking even in challenging scenarios with occlusions and clutter.
3. Scalability: DeepSort is highly scalable and can handle large numbers of tracked objects
efficiently, making it suitable for applications with high-density traffic and crowded
scenes.

d) Python Programming Language:


Python was chosen as the primary programming language for this project due to several reasons:
1. Ease of Use: Python is known for its simple and readable syntax, making it easy for
developers to write and maintain code.
2. Large Ecosystem: Python has a vast ecosystem of libraries and frameworks for various
tasks, including image processing, deep learning, and data analysis.
3. Community Support: Python has a large and active community of developers, providing
access to a wealth of resources, tutorials, and support forums.
4. Flexibility: Python's versatility allows for seamless integration with other technologies
and platforms, making it ideal for building complex systems like the smart traffic
management system.

e) OpenCV Library for Image and Video Processing:


OpenCV (Open-Source Computer Vision Library) is a powerful open-source library for image
and video processing tasks. Key features and functionality of OpenCV used in the project
include:

1. Image Processing: OpenCV provides a wide range of functions and algorithms for image
manipulation, including resizing, cropping, filtering, and morphological operations.
2. Video Processing: OpenCV allows for the capture, processing, and analysis of video
streams from various sources, such as cameras and video files.
3. Object Detection: OpenCV offers pre-trained models and algorithms for object detection,
including Haar cascades, HOG (Histogram of Oriented Gradients), and deep learning-
based methods.

f) PyTorch Library for Deep Learning[5]:


PyTorch is a popular deep learning framework known for its flexibility and ease of use. In this
project, PyTorch plays a crucial role in implementing the deep learning models YOLOv5 and
DeepSort. Specifically:
1. YOLOv5 Implementation: PyTorch provides the foundation for training and deploying
the YOLOv5 object detection model. PyTorch's extensive functionality for neural
network design and optimization enables the development of robust and accurate object
detection systems.
2. DeepSort Integration: PyTorch facilitates the integration of the DeepSort algorithm for
object tracking. By leveraging PyTorch's capabilities for model development and
optimization, DeepSort can be seamlessly integrated into the overall system architecture.

These tools and technologies collectively enable the development of a sophisticated vehicle density-
based smart traffic management system, leveraging the power of deep learning and computer vision to
optimize traffic flow and enhance road safety.

IMPLEMENTATION

Pseudo Code:

Initialize global variables


up_count = 0
down_count = 0
car_count = 0
truck_count = 0
tracker1 = [] # List to track object IDs moving South
tracker2 = [] # List to track object IDs moving North
dir_data = {} # Dictionary to store previous vertical position of each object ID

Define function detect(options):


# Initialization
Load YOLOv5 model and DeepSort model
Set up paths and directories for input/output
Configure OpenCV for video input/output

for each frame:


# Object Detection
Perform object detection using YOLOv5 model
Get bounding boxes, confidence scores, and class predictions

# Non-Maximum Suppression
Apply non-maximum suppression to remove overlapping detections
if detections exist:
# Rescale Bounding Boxes
Rescale bounding boxes from model's input size to original image size
# Object Tracking
Update DeepSort object tracker with new detections
Get updated object tracks and IDs

for each tracked object:


# Draw Bounding Boxes and Labels
Draw bounding box and label (ID, class, confidence) on the frame
# Count Objects
Call count_obj function to count vehicles by direction and type

# Save Results (optional)


If save_txt is enabled, save detection results in MOT format

else:
Update DeepSort tracker for undetected objects
# Display/Save Results
If show_vid is enabled, display processed frame
If save_vid is enabled, write processed frame to output video

# Print Performance Statistics


Print average processing time for each stage (preprocessing, inference, NMS,
DeepSort)

If save_txt or save_vid is enabled, print output file path


If on macOS, open the output file

Define function count_obj(box, w, h, id, direct, cls):


# Calculate center coordinates of the bounding box
cx, cy = calculate_center(box)

# Ignore objects in the top half of the frame


if cy <= h // 2:
return
if direct == "South":
if cy > h - 300: # Below a certain height threshold
if id not in tracker1: # New object ID for this direction
Print object ID and direction
down_count += 1
tracker1.append(id) # Add ID to tracker list

# Count vehicle type


if cls == 2: # Car
car_count += 1
elif cls == 7: # Truck
truck_count += 1

elif direct == "North":


if cy < h - 150: # Above a certain height threshold
if id not in tracker2: # New object ID for this direction
Print object ID and direction
up_count += 1
tracker2.append(id) # Add ID to tracker list

# Count vehicle type


if cls == 2: # Car
car_count += 1

elif cls == 7: # Truck
truck_count += 1

Define function direction(id, y):


if id not in dir_data:
# Store initial vertical position for this object ID
dir_data[id] = y
else:
# Calculate difference in vertical position
diff = dir_data[id] - y

if diff < 0:
return "South" # Object moved downwards
else:
return "North" # Object moved upwards

Define main function:


Parse command-line arguments
Adjust image size if necessary

Call detect function with parsed arguments inside a torch.no_grad context

Code repository: https://github.com/VishalJx/VishalJx-Smart-Traffic-Detection-Using-YOLO-v5
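The direction and counting helpers from the pseudo code above can be made concrete in a few lines of Python. This is a self-contained sketch rather than the repository's exact code: state lives in a small class instead of globals, and class IDs 2 (car) and 7 (truck) follow the COCO convention mentioned earlier.

```python
class VehicleCounter:
    # Runnable version of the direction/count helpers in the pseudo code.
    def __init__(self, frame_h):
        self.h = frame_h
        self.dir_data = {}                 # object ID -> last vertical position
        self.seen_south, self.seen_north = set(), set()
        self.up = self.down = self.cars = self.trucks = 0

    def direction(self, obj_id, y):
        # Image y grows downward: increasing y means the object moves South.
        prev = self.dir_data.get(obj_id)
        self.dir_data[obj_id] = y
        if prev is None:
            return None                    # first sighting: direction unknown
        return "South" if y > prev else "North"

    def count(self, obj_id, cy, direct, cls):
        if cy <= self.h // 2:              # ignore the top half of the frame
            return
        if direct == "South" and cy > self.h - 300 and obj_id not in self.seen_south:
            self.seen_south.add(obj_id)
            self.down += 1
            self._count_type(cls)
        elif direct == "North" and cy < self.h - 150 and obj_id not in self.seen_north:
            self.seen_north.add(obj_id)
            self.up += 1
            self._count_type(cls)

    def _count_type(self, cls):
        if cls == 2:                       # COCO class 2: car
            self.cars += 1
        elif cls == 7:                     # COCO class 7: truck
            self.trucks += 1

vc = VehicleCounter(frame_h=1000)
for y in (600, 650, 720):                  # car with ID 1 moving down the frame
    vc.count(1, y, vc.direction(1, y), cls=2)
print(vc.down, vc.cars)                    # 1 1: counted once despite three frames
```

The seen_south/seen_north sets are what guarantee each tracked ID is counted exactly once per direction, mirroring tracker1 and tracker2 in the pseudo code.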

RESULT DISCUSSION AND ANALYSIS

Output:

Fig.3. Output after the analysis on traffic video

The above output shows the following:

1. Direction of the vehicle moving in a two-way lane
   a. North (inwards)
   b. South (outwards)
2. Vehicle density: the number of vehicles per lane
3. Confidence Score: the confidence scores shown in the output, like "YOLOv5 (0.164),
   DeepSort: (0.204)", refer to the model's confidence or probability that the detected objects
   (cars and trucks) are correctly identified
4. Vehicle's Class: either car or truck
5. Vehicle ID

Analysis has two parts:


1. Vehicle Detection and Lane Density Calculation
2. Lane density comparison and Signal Assignment

Vehicle Detection and Lane Density Calculation:


The image comparison below illustrates the system's performance, contrasting the original input
frame with the processed output, where vehicles are highlighted with bounding boxes, confidence
scores, and unique IDs assigned by DeepSort.

BEFORE TRAFFIC ANALYSIS

Fig.4.(a). Original input frame from the video stream.

AFTER TRAFFIC ANALYSIS

Fig.4.(b). Output frame after applying object detection and tracking, with bounding boxes, vehicle IDs, class labels and
confidence scores.

The data shown in the analysed output for the single two-way lane are:
1. Traffic density on each side of the lane
2. The data above the bounding boxes represent:
i. Vehicle ID
ii. Vehicle Class
iii. Confidence Score
Lane density comparison and Signal Assignment:
How does the vehicle density in a lane help optimize traffic signal management?

Fig.4.(c). Four lane traffic

Table showing lane density and the corresponding signal


Lane      Lane density (vehicles per lane)    Signal
Top       10                                  GREEN
Bottom     3                                  RED
Left       8                                  RED
Right      4                                  RED

In the illustration above, the top lane has the highest traffic density, so it is given the highest
priority to pass (green signal).

The remaining lanes stay red until:


1. The current green lane's density is sufficiently reduced, or
2. The density of one of the remaining three lanes exceeds that of the current green lane

By dynamically adjusting the traffic signal timings based on real-time vehicle density data, the traffic
management system can optimize the flow of traffic and reduce congestion. This not only improves
travel times and reduces delays for commuters but also helps in reducing emissions and fuel
consumption caused by idling vehicles in traffic jams.
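The priority rule described above can be sketched as a small function. The lane names and densities mirror the table earlier in this section; the function name `pick_green_lane` and the dictionary layout are illustrative assumptions, not the project's actual controller code.

```python
def pick_green_lane(densities):
    """Given {lane: vehicle count}, return the lane that gets the green signal.

    The lane with the highest density has priority; all other lanes stay red.
    """
    return max(densities, key=densities.get)

# Densities from the four-lane illustration above.
densities = {"Top": 10, "Bottom": 3, "Left": 8, "Right": 4}
green = pick_green_lane(densities)
signals = {lane: ("GREEN" if lane == green else "RED") for lane in densities}
print(signals)  # Top is green; the rest stay red until the densities change
```

Re-evaluating this function on every density update is what makes the signal assignment dynamic: as soon as another lane's count overtakes the current green lane, the green signal moves to it.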

CONCLUSION AND FUTURE SCOPE

The Smart Traffic Management System developed in this project demonstrates the effective
integration of cutting-edge computer vision techniques, namely YOLOv5 for object detection and
DeepSort for object tracking, to address the challenges of traffic monitoring and management. By
leveraging these algorithms, the system can accurately detect and track vehicles in real-time video
streams, assigning unique identities to each vehicle and maintaining their trajectories across
consecutive frames.
In its current form, the system's capabilities were showcased in a single two-way lane scenario,
where vehicles were successfully identified, enclosed within bounding boxes, and assigned
confidence scores and unique IDs. This visual representation highlights the system's potential for
various traffic management applications, such as vehicle counting, congestion monitoring, and
incident detection.
While the project focused on a basic implementation for a single two-way lane, the system's
scalability was demonstrated through a simulated visualization of a four-lane traffic scenario. By
analysing vehicle densities, trajectories, and traffic flow patterns across multiple lanes and
intersections, the system can provide valuable insights for optimizing traffic signal timings,
suggesting alternative routes, and informing urban planning decisions.

The Smart Traffic Management System developed in this project represents a significant step
towards intelligent and data-driven traffic management solutions. However, its capabilities can be
further enhanced to handle more complex scenarios and integrate with existing infrastructure,
ultimately leading to a comprehensive and adaptive traffic management solution for urban
environments.[7-10]

One potential avenue for future development is extending the system to handle complex four-way
lane scenarios, where vehicle density and traffic flow analysis can be performed across multiple
intersections and directions. This would involve deploying additional cameras and sensors at
strategic locations to capture video feeds from various angles and perspectives. The object detection
and tracking algorithms would need to be refined to accurately identify and track vehicles in these
complex scenarios, where traffic patterns may be more intricate and involve multiple turning
movements. [7-10]

To achieve this, advanced computer vision techniques and deep learning models could be
employed, such as instance segmentation and multi-object tracking algorithms. These models
would be trained on diverse datasets encompassing various traffic scenarios, enabling the system to
accurately detect and track vehicles in complex environments. Additionally, the system's traffic
flow analysis module would need to be enhanced to process and analyze data from multiple
intersections simultaneously, identifying potential bottlenecks and optimizing signal timings across
the entire network. [7-10]

Furthermore, integrating the Smart Traffic Management System with existing traffic management
infrastructure could significantly enhance its impact and effectiveness. This could involve
interfacing with traffic signal controllers, variable message signs, and other intelligent
transportation systems (ITS) components. By leveraging the real-time data and insights provided by
the system, traffic operators could dynamically adjust signal timings, disseminate traffic
information to commuters, and implement proactive strategies to mitigate congestion. [7-10]

Another potential area for future development is exploring edge computing solutions for real-time
processing. By deploying edge computing devices at strategic locations, such as intersections or
traffic control centers, the system could process video feeds and perform object detection and
tracking locally, reducing latency and enabling faster response times. This approach could be
particularly beneficial in scenarios where low-latency decision-making is crucial, such as dynamic
signal timing adjustments or emergency vehicle prioritization. [7-10]

Additionally, incorporating additional data sources could further enhance the system's capabilities
and adaptability. For instance, integrating weather data could enable the system to anticipate and
proactively manage traffic patterns influenced by adverse weather conditions. Similarly,
incorporating event information, such as concerts, sports games, or construction activities, could
allow the system to anticipate and plan for increased traffic volumes and implement appropriate
traffic management strategies. [7-10]

Overall, the future development of the Smart Traffic Management System involves addressing
increasingly complex scenarios, integrating with existing infrastructure, leveraging edge computing
for real-time processing, and incorporating additional data sources. These enhancements would
contribute to creating a comprehensive and adaptive traffic management solution that can
effectively address the challenges of urban mobility, reduce congestion, and enhance overall
transportation efficiency in densely populated areas. [7-10]

REFERENCES

1. https://ops.fhwa.dot.gov/congestion_report/chapter2.htm
2. https://drive.google.com/file/d/1AI2dK6sK-Tm_9M1s60owSNarmCJE4fbH/view?usp=drivesdk
3. https://www.v7labs.com/blog/yolo-object-detection
4. https://deci.ai/blog/object-tracking-with-deepsort-and-yolo-nas-practitioners-guide/#sort-and-deepsort-algorithms
5. https://www.thinkautonomous.ai/blog/hungarian-algorithm/
6. https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html?ref=blog.paperspace.com
7. https://cs231n.github.io/convolutional-networks/?ref=blog.paperspace.com
8. https://drive.google.com/file/d/1ALWZwqZEH5cJoonArEvoMbKYR6TPTwUR/view?usp=drivesdk
9. https://drive.google.com/file/d/1AD8BXcBh-XVM122tuRqabwwygbbx61Vj/view?usp=drivesdk
10. https://drive.google.com/file/d/1AXxWwG8KplOdzWl_N47ydFIinb4aCDEL/view?usp=drivesdk
11. https://www.datacamp.com/blog/yolo-object-detection-explained
12. https://deci.ai/blog/object-tracking-with-deepsort-and-yolo-nas-practitioners-guide/#:~:text=Deep%20SORT%20is%20a%20state,in%20crowded%20and%20complex%20environments.
13. https://iopscience.iop.org/article/10.1088/1742-6596/1995/1/012046/meta
