Research Paper 2
https://doi.org/10.22214/ijraset.2024.63115
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 12 Issue VI June 2024- Available at www.ijraset.com
Abstract: Surveillance robotic cars, equipped with a range of modules, cameras, and communication systems, traverse diverse
environments and provide real-time data for surveillance purposes. Equipped with PiRGB arrays and high-resolution cameras,
these vehicles capture detailed, real-time imagery for effective threat detection. Using the Haar cascade classifier and OpenCV,
they track human faces. The pigpio library, the Haar cascade classifier, and the Local Binary Patterns Histograms algorithm
support face detection and motion tracking, while the GPS NEO-6M, GSM SIM800L, HC-05 Bluetooth module, and Arduino Uno ensure
accurate location tracking of the vehicle. Hardware such as the Arduino Uno and the L293D motor driver supports precise
movement of the vehicle. A metal detector, paired with a buzzer, identifies metallic objects on the vehicle's navigation path.
These surveillance robotic cars can be deployed in various environments, from urban areas to dense forests.
Keywords: Surveillance, real-time, monitor, detection, robotic, security, Haar cascade, Local Binary Patterns Histograms
I. INTRODUCTION
Advances in technology over the past few years have greatly boosted the capabilities of surveillance systems, increasing their
effectiveness and adaptability. One such development is the surveillance robotic car, which combines a number of current innovations
to provide a complete security and monitoring solution. This paper describes the design and implementation of a surveillance robotic
car that uses a GPS NEO-6M module, an Arduino Uno, and a GSM SIM800L module for real-time communication and precise location
tracking, together with a Raspberry Pi-connected camera module for human movement recognition. The prototype of the surveillance
robotic car is shown in Figure 1. The device also features a metal detector to detect possible dangers like weapons of mass
destruction, guaranteeing a strong security system.
A camera module mounted on a 180-degree tiltable servo and driven by a Raspberry Pi 3 Model B+ is used to recognize faces. This
function uses OpenCV and Haar cascade techniques to capture and process images so that the system can identify people around it.
Local Binary Patterns Histograms is an effective algorithm for facial recognition; it is relatively simple and performs well
under different lighting conditions. This functionality is required in restricted areas, where unwanted access must be monitored
and efficiently regulated.
The integration of GPS and GSM with SMS communication enhances surveillance robotic cars by providing real time location
updates to the controlling unit, enabling remote monitoring and control. This feature ensures operational efficiency through instant
updates and timely interventions, even in areas with limited network coverage. The ability to receive location updates via SMS
enhances security by providing a fail-safe mechanism to track and disable the vehicle in case of theft or unauthorized access.
For threat detection along the navigation path, the surveillance robotic car is also equipped with a metal detector sensor paired with a buzzer.
Metal detectors efficiently identify metallic components in potential threats. Upon detecting metal, the buzzer sounds, improving
situational awareness. This setup is cost-effective and adaptable. Metal detectors work by emitting electromagnetic fields from a
search coil into the ground, interacting with metallic objects to induce eddy currents, which create a secondary magnetic field. By
analysing changes in this field, detectors can identify the presence and approximate depth of metal objects, ensuring effective
detection.
A. Motor Movement
TABLE I
MOTOR MOVEMENT
Movement    Right   Left   Forward   Backward
Wheel 1       1       0       1          0
Wheel 2       0       1       0          1
Wheel 3       1       0       0          1
Wheel 4       0       1       1          0
Using the L293D motor driver, the surveillance robotic car's motors can be controlled with a few connections. Pins 9, 10, 11,
and 12 of the L293D motor driver are first connected to digital pins on the Arduino Uno to complete the control connections;
these connections manage each motor's direction and speed. The motor outputs of the L293D are then connected to the corresponding
motors of the robotic car, which allows the motor driver to supply each motor with its own power and control signals. Likewise,
the power input of the L293D must be connected to an external power supply that meets the voltage requirements of the motors.
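The paper implements this motor control as Arduino firmware; purely to illustrate the direction logic of Table I, the following Python sketch shows an equivalent mapping using the RPi.GPIO library, with hypothetical GPIO pin numbers standing in for the Arduino pins 9-12:

import RPi.GPIO as GPIO

# Hypothetical BCM pin numbers for the four driver inputs, one per wheel;
# in the paper these inputs are driven from Arduino pins 9, 10, 11 and 12.
WHEEL_PINS = [17, 27, 22, 23]

# Direction truth table from Table I: one bit per wheel for each movement.
MOVEMENTS = {
    "right":    (1, 0, 1, 0),
    "left":     (0, 1, 0, 1),
    "forward":  (1, 0, 0, 1),
    "backward": (0, 1, 1, 0),
}

def drive(movement):
    """Apply the Table I bit pattern for the requested movement."""
    for pin, state in zip(WHEEL_PINS, MOVEMENTS[movement]):
        GPIO.output(pin, GPIO.HIGH if state else GPIO.LOW)

def stop():
    """Release all wheels."""
    for pin in WHEEL_PINS:
        GPIO.output(pin, GPIO.LOW)

if __name__ == "__main__":
    GPIO.setmode(GPIO.BCM)
    for pin in WHEEL_PINS:
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)
    try:
        drive("forward")
    finally:
        stop()
        GPIO.cleanup()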
TABLE II
CAMERA SPECIFICATIONS
Specification            Value
Resolution               640 x 480 pixels
Pan servo (GPIO pin)     2
Tilt servo (GPIO pin)    3
Pan position (µs)        1250
Tilt position (µs)       1500
To configure the camera, firmly fasten it to the Camera Serial Interface (CSI) port of the Raspberry Pi and verify that the ribbon
cable is attached correctly to avoid disconnections. After the hardware configuration is finished, enable the camera interface using
the terminal command "sudo raspi-config" or the Raspberry Pi Configuration tool. Then install the libraries needed to control the
camera: 'picamera' for capturing pictures and video, and the Haar cascade classifier, which is a component of the 'OpenCV' library,
for face detection tasks. Coordinates such as (x, y) for the top-left corner and (x+w, y+h) for the bottom-right corner are used to
delimit each detected face, allowing correct segmentation of faces within the frame. This configuration ensures that both the
software and the hardware are set up correctly.
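A minimal sketch of this capture-and-detect loop, assuming the 'picamera' and 'opencv-python' packages are installed and using the default frontal-face cascade shipped with OpenCV (the window name and detection parameters are illustrative choices, not taken from the paper):

import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (640, 480)                 # matches Table II
raw = PiRGBArray(camera, size=(640, 480))

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    image = frame.array
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # (x, y) is the top-left corner, (x + w, y + h) the bottom-right.
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Surveillance", image)
    raw.truncate(0)                            # reuse the buffer for the next frame
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break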
D. Location Tracking
The code initializes pins for an LED, a relay, and software serial ports for the GSM and GPS modules, setting up the GSM module for
text communication. In the main loop, it listens for GSM messages, turning the relay on or off with the "ON"/"OFF" commands, and
retrieves GPS coordinates with "GETLOC". The smartDelay function waits for GPS data, extracting latitude and longitude into
variables. Upon the "GETLOC" command, it sends a Google Maps link via GSM. Expected outputs include setup messages and responses
to commands. Real-time location tracking uses the latitude and longitude values to determine the robot's position, aiding precise
location identification.
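The firmware described above runs on the Arduino Uno; the Python sketch below only illustrates the same command flow in a hardware-free form. The helper names, the example NMEA values, and the Google Maps link format are assumptions for illustration, not the paper's code:

def nmea_to_decimal(value: float, hemisphere: str) -> float:
    """Convert an NMEA ddmm.mmmm latitude/longitude to decimal degrees."""
    degrees = int(value // 100)
    minutes = value - degrees * 100
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def handle_command(command: str, lat: float, lon: float, relay_on: bool):
    """Mimic the main loop: ON/OFF toggles the relay, GETLOC returns a map link."""
    command = command.strip().upper()
    if command == "ON":
        return True, "Relay turned ON"
    if command == "OFF":
        return False, "Relay turned OFF"
    if command == "GETLOC":
        link = f"https://maps.google.com/?q={lat:.6f},{lon:.6f}"
        return relay_on, f"Vehicle location: {link}"
    return relay_on, "Unknown command"

if __name__ == "__main__":
    lat = nmea_to_decimal(1258.123456, "N")    # example NMEA latitude
    lon = nmea_to_decimal(7733.654321, "E")    # example NMEA longitude
    print(handle_command("GETLOC", lat, lon, relay_on=False)[1])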
III. RESULTS
A. Human Face Detection
This surveillance robotic system uses a Raspberry Pi, Camera, Haar Cascade Classifier, and LBPH face recognizer for real-time
facial recognition and tracking. Initial servo positions are set for pan (1250µs) and tilt (1500µs), and GPIO controls servos on pins 2
and 3. The camera captures 640x480 pixel video frames, adjusting servo positions based on face detection to keep the face centred.
If trained, the system recognizes faces with a confidence threshold of 70. The setup ensures efficient automated tracking and
monitoring, combining accessible hardware and open source software tools for enhanced surveillance capabilities.
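A condensed sketch of this tracking-and-recognition loop, assuming the pigpio daemon is running, the 'opencv-contrib-python' package provides the LBPH recognizer, and a pre-trained model file exists (the name 'trainer.yml' is hypothetical); frames would come from the picamera capture loop shown earlier:

import cv2
import pigpio

PAN_PIN, TILT_PIN = 2, 3                  # servo GPIO pins from Table II
pan_pos, tilt_pos = 1250, 1500            # initial pulse widths in microseconds
FRAME_W, FRAME_H, STEP = 640, 480, 10     # step size is an illustrative choice

pi = pigpio.pi()
pi.set_servo_pulsewidth(PAN_PIN, pan_pos)
pi.set_servo_pulsewidth(TILT_PIN, tilt_pos)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("trainer.yml")            # hypothetical pre-trained LBPH model

def track(frame):
    """Nudge the servos so the detected face stays near the frame centre."""
    global pan_pos, tilt_pos
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        cx, cy = x + w // 2, y + h // 2
        pan_pos += STEP if cx < FRAME_W // 2 else -STEP
        tilt_pos += STEP if cy < FRAME_H // 2 else -STEP
        pan_pos = max(500, min(2500, pan_pos))      # keep within servo limits
        tilt_pos = max(500, min(2500, tilt_pos))
        pi.set_servo_pulsewidth(PAN_PIN, pan_pos)
        pi.set_servo_pulsewidth(TILT_PIN, tilt_pos)
        # LBPH returns a label and a distance; lower distance means a better match.
        label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
        if confidence < 70:                          # threshold used in the paper
            print(f"Recognised face id {label} ({confidence:.1f})")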
The human face detection system illustrated in Figure 4 demonstrates exceptional accuracy and versatility across various conditions.
The system effectively detects human faces in both light and dark themes, ensuring reliable performance regardless of ambient
lighting conditions. The system achieves an impressive accuracy rate of approximately 99 percent, making it highly reliable for
diverse applications. Additionally, it maintains high accuracy even when the subjects are wearing spectacles, in both light and dark
environments. By setting up GPIO for servo motor control and using the pigpio library, the system dynamically adjusts the pan and
tilt positions of the camera based on the detected face's location within the frame. This keeps the face centred, enhancing the
surveillance capability. The code also includes a mechanism to recognize faces using a pre-trained model, with a confidence
threshold to determine recognition accuracy. Overall, the system demonstrates a practical approach to automated, intelligent
tracking and monitoring using accessible hardware and open-source software tools.
B. Location Tracking
For surveillance robotic cars to monitor locations accurately, GPS technology is indispensable. Through the integration of a GPS
module with the Arduino, the vehicle can ascertain its exact geographical coordinates, which is essential for efficient navigation
and monitoring. Operators can track the car's movements and receive real-time updates. Furthermore, with a GSM module, the Arduino
sends position data via SMS to a mobile device, guaranteeing ongoing tracking even in places with spotty network coverage. The
position data is continuously logged by the car's internal computer and then sent to a centralized control station for analysis
and monitoring.
Because SMS communication allows real-time monitoring and control from a remote location, it improves the functionality of
surveillance robotic cars. These cars provide real-time location updates through SMS transmission, which allows for prompt
intervention or adjustment when necessary. This feature improves security further by offering a backup means of tracking the
vehicle in the event of theft or unlawful entry. To stop abuse or security breaches, authorities can remotely disable the car or
monitor its movements. Overall, the integration of GPS, GSM, and SMS communication improves the capabilities of surveillance
robotic cars, increasing their efficacy in navigation, operational efficiency, and security enforcement.
C. Metal Detector
The metal detector detects objects within a specific range of distances, typically 10 to 16 inches, determined by the sensitivity
settings or the design of the detector. It can detect various metals such as iron, nickel, copper, aluminium, gold, and silver,
depending on their conductive properties and the strength of the detector's electromagnetic field, as shown in Figure 6. The
depth-detection capability of a metal detector depends on various factors, including the size and conductivity of the target, the
detector's frequency, and the soil composition. Generally, higher frequencies are more sensitive to small targets but have shallower
detection depths, while lower frequencies penetrate deeper but are less sensitive to small targets. A metal detector with a frequency
of 6.5 kHz may detect a coin-sized target at a depth of approximately 10 to 16 inches. However, larger objects or those with higher
conductivity, such as large relics or deep treasures, can be detected at much greater depths.
The relationship between detection depth and frequency can be approximated by the following equation:
D = k / √f
where:
- D is the maximum detection depth (in inches or meters)
- f is the operating frequency of the metal detector (in kHz)
- k is a constant determined by various factors including target size, conductivity, and soil conditions
This equation suggests that detection depth decreases as frequency increases. However, the actual relationship can vary based on
factors such as soil mineralization and detector technology. Additionally, other factors such as coil size and shape, sensitivity
settings, and ground balance adjustments can also influence detection depth.
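As a purely numerical illustration of this relationship, the small calculation below back-calculates k from the paper's own example of a coin-sized target at roughly 13 inches with a 6.5 kHz detector; the resulting value of k is not a calibrated constant:

import math

def detection_depth(freq_khz: float, k: float) -> float:
    """Approximate detection depth (inches) from D = k / sqrt(f), with f in kHz."""
    return k / math.sqrt(freq_khz)

# Back-calculate k from the paper's example: ~13 inches at 6.5 kHz.
k = 13.0 * math.sqrt(6.5)              # about 33.1

print(detection_depth(6.5, k))         # ~13.0 inches (coin-sized target)
print(detection_depth(13.0, k))        # ~9.2 inches, shallower at a higher frequency
print(detection_depth(3.25, k))        # ~18.4 inches, deeper at a lower frequency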
Future developments for surveillance robotic cars look promising, with multiple breakthroughs potentially in the works. Robotic
manipulation is one notable direction: these vehicles could be equipped with robotic arms or manipulators carrying specialized
equipment, enabling them to safely handle suspicious materials and allowing remote disposal of explosives. Swarm intelligence is
another exciting development. The ability of many robotic vehicles to cooperate and coordinate their movements improves
surveillance applications. Swarms of autonomous surveillance cars may in the future share information, cover larger regions more
effectively, and work together on tasks such as perimeter monitoring and search and rescue. With enhanced human-robot interaction
capabilities, these vehicles will be able to work with human operators and responders more efficiently; haptic feedback interfaces,
gesture recognition, and natural language processing may be included to promote intuitive communication and control. By
concentrating on these developments, surveillance robotic cars will continue to evolve, increasing their usefulness and expanding
their range of uses, ultimately becoming essential instruments in contemporary security and surveillance operations.
V. ACKNOWLEDGMENT
The authors would like to express their profound gratitude to the Electronics and Communication Department of R V College of
Engineering for their significant support and assistance in facilitating the development of the surveillance robotic car.