
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/374229722

BLIND ASSISTANCE SYSTEM: OBSTACLE DETECTION WITH DISTANCE AND VOICE ALERTS

Technical Report · September 2023

Authors: Millicent Mugwenhi and Tawanda Mudawarima, Harare Institute of Technology

All content following this page was uploaded by Millicent Mugwenhi on 27 September 2023.


BLIND ASSISTANCE SYSTEM: OBSTACLE DETECTION WITH DISTANCE AND VOICE ALERTS

Millicent Mugwenhi
School of Information Science and Technology, Harare Institute of Technology
millicentmugwenhi@gmail.com

Tawanda Mudawarima
School of Information Science and Technology, Harare Institute of Technology
tawamudah@gmail.com

Abstract
This technical paper presents a real-time image detection system designed to assist blind individuals in navigating their environment. The system uses computer vision techniques to analyse images captured by a camera and provides audio feedback to the user, describing the objects and obstacles in their surroundings. It incorporates machine learning algorithms to improve accuracy over time and can detect a wide range of indoor objects. The proposed system has the potential to significantly improve the independence and safety of visually impaired individuals in various environments.

Keywords
Visually impaired, blind, navigation, detection, machine learning, objects
Introduction
Blindness is a challenging disability that affects millions of people worldwide, limiting their ability to navigate and interact with their environment. While there are various assistive technologies available to aid in mobility and independence, many of these solutions rely on tactile feedback, which can be limiting in certain situations. In recent years, computer vision techniques have shown great promise in developing assistive technologies for the visually impaired (Manduchi, 2012). This paper presents a real-time image detection system designed to assist blind individuals in navigating their environment using computer vision and machine learning algorithms. The system provides audio feedback describing the objects and obstacles in the user's surroundings, improving their independence and safety. The proposed system has the potential to significantly enhance the quality of life for blind individuals, enabling them to navigate their environment with greater ease and confidence. The purpose of this paper was to review existing solutions and to find ways of closing the gaps those solutions leave. A systematic review helped establish what is already known on the topic; the search was done manually across the available data sources, search engines and digital libraries.
Literature Review
Many visually impaired people struggle with navigation-related activities, and discouraging experiences can demotivate them from going out for social activities and interactions. Travelling inside public spaces is a different story from travelling outdoors: external cues cannot be used in the same way, and indoor settings present their own set of difficulties.

Navigation aids like the traditional walking stick and guide dogs are still used by blind people despite the numerous computer-aided designs built to support them. Visually impaired people can perceive and use environmental cues from the white cane outdoors, but many environmental cues inside public settings cannot be exploited and present their own set of challenges (Jeamwatthanachai et al., 2019). Several technologies now help these people navigate more freely.

The main goal of electronic travel aids (ETAs) was the detection of obstacles in front of or around the user by means of sensor technology, conveying this information to the user through sound or touch-based (haptic) signals. A mobile ultrasonic ranging system was developed as an electronic travelling aid for individuals with visual impairment, used to expand the environmental detection range (Batarseh et al., 1997). This system used pulses of ultrasonic waves to determine the distance to obstacles, allowing a blind person to walk safely and autonomously outdoors. However, it is important to note that the device was not a substitute for good orientation and mobility skills and training, which are essential for safe and effective travel.

Electronic orientation aids (EOAs) gave users more information about their environment and enabled them to make decisions much more quickly, allowing them to move around more safely, confidently and effectively, in full control. EOAs were used for guidance and instructions towards the best clear path. The white cane is by far the most widely used assistive device for visually impaired orientation and mobility today; its electronic variants use sensors and/or cameras to detect obstacles and give tactile feedback to the user through two vibrating buttons on the handle, over which the user places their thumb.

As an advancement on the ultrasonic ranging system, the Point Locus Wearable GPS Pathfinder was designed (L. Kay, 1964) as a specialized wayfinding aid for visually impaired individuals travelling outdoors. The system recorded GPS location data and used the user's current location and desired destination to form a path from one point to the other. This was a cost-effective solution that improved users' wayfinding ability, making them much more independent when travelling. However, GPS was ineffective for accurate positioning in indoor environments because walls significantly interfere with transmissions.

Numerous suggestions, some mentioned above to name just a few, have been made in the field of aiding location and mobility for individuals with visual impairments. Another existing tool, as discussed by Goddard et al. (1982) and Koda (2013), is the guide dog: a highly trained animal providing navigation assistance to visually impaired individuals, though guide dogs are expensive to train and maintain and may not be suitable for all individuals. In addition, wearable devices such as smart glasses or wristbands have provided haptic feedback to help individuals navigate their environment. However, these devices are often expensive and may not be widely accessible. Overall, while there are several existing blind navigation systems available, each has its own limitations.
Materials and Methodology
1. Object detection

The system was designed so that the user uses an Android device to capture live video frames and send them to a networked server running on a laptop. The server then processes the data and performs all the necessary calculations. The program running on the laptop-based server detects specific objects using a pre-trained SSD detection model together with YOLO (You Only Look Once), an object detection algorithm that uses deep learning to identify objects in images.

How YOLO works:

• The algorithm takes an image or a video frame as input.


• The image is divided into a grid of cells. Each cell is responsible for
detecting objects that appear within it.
• The algorithm uses anchor boxes, which are pre-defined rectangles of
different sizes and shapes. Each anchor box is associated with a specific
grid cell.
• For each grid cell, the algorithm predicts the probability of an object
appearing within that cell and the coordinates of a bounding box that
surrounds the object. This prediction is based on the features extracted
from the image using a convolutional neural network (CNN).
• To eliminate redundant detections, the algorithm uses non-max
suppression. It compares the probabilities of overlapping bounding
boxes and keeps only the box with the highest probability.
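The non-max suppression step described in the last bullet can be sketched in plain Python. This is a simplified, illustrative version (real YOLO implementations vectorize this and apply it per object class):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(detections, iou_threshold=0.5):
    """Keep only the highest-confidence box among heavily overlapping ones.

    detections: list of (box, confidence) with box = (x1, y1, x2, y2).
    """
    # Sort by confidence, highest first.
    remaining = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        # Discard any box overlapping the kept box above the threshold.
        remaining = [d for d in remaining
                     if iou(best[0], d[0]) < iou_threshold]
    return kept
```

For example, two boxes that mostly cover the same chair collapse into the single higher-confidence detection, while a distant, non-overlapping box survives.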
2. Text-to-Speech using pyttsx3
pyttsx3 is a Python library for text-to-speech conversion that can be used in blind navigation systems to provide audio feedback to the user. Each detected object name is converted into a voice note. PyTorch, a machine learning library, is used here for its audio handling, which helps in loading the voice file in MP3 format.
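A minimal sketch of this voice-alert step is shown below. The alert wording and the helper names are illustrative assumptions, not taken from the paper; the `speak` function requires the pyttsx3 package and an installed speech engine (e.g. eSpeak on Linux):

```python
def build_alert_text(object_name, distance_m):
    """Compose the spoken alert for a detected object (hypothetical format)."""
    if distance_m < 1.0:
        return f"Warning: {object_name} very close, {distance_m:.1f} meters ahead"
    return f"{object_name} detected, {distance_m:.1f} meters ahead"

def speak(text):
    """Convert the alert text to speech. Requires the pyttsx3 package."""
    import pyttsx3  # imported lazily so the text logic runs without it
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # speaking speed in words per minute
    engine.say(text)
    engine.runAndWait()  # blocks until the utterance finishes
```

In the running system, each detection would be passed through something like `speak(build_alert_text("chair", 0.8))` as frames are processed.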
3. Depth/Distance
Depth-estimation algorithms estimate the distance of objects from a camera or other sensor. Depth estimation, or extraction, is a technique that aims to obtain a representation of the spatial structure of a scene; here it is used to calculate the distance between two objects. The blind navigation prototype aims to warn blind people about obstacles in their way, so the system needs to tell the user how far away an obstacle is in any real-time situation.
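One common monocular approach to the distance step described above is the pinhole-camera similar-triangles model: if an object's real-world width is roughly known, its distance is proportional to how small it appears in the frame. This sketch is an assumed illustration of such a method, not the paper's exact implementation, and the width values are made-up calibration constants:

```python
# Approximate real-world widths in meters for a few indoor classes (assumed values).
KNOWN_WIDTHS_M = {"chair": 0.45, "person": 0.50, "door": 0.90}

def focal_length_px(calib_distance_m, real_width_m, width_in_pixels):
    """Calibrate once: photograph an object of known width at a known distance."""
    return (width_in_pixels * calib_distance_m) / real_width_m

def estimate_distance_m(label, bbox_width_px, focal_px):
    """Pinhole model: distance = real_width * focal_length / apparent_width."""
    real_width = KNOWN_WIDTHS_M[label]
    return (real_width * focal_px) / bbox_width_px
```

For instance, after calibrating on a chair photographed at a known distance, the bounding-box width reported by the detector is enough to convert each detection into an approximate distance for the voice alert.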
As described above, the Android device captures live video frames and sends them to the laptop-based server for processing. The detection program uses a pre-trained SSD model and YOLO (You Only Look Once), trained on the COCO (Common Objects in Context) dataset, a large collection of images with labelled objects covering common indoor objects. After an object is identified, the system gives voice feedback. In addition, the system alerts the blind person to their distance from the object using voice prompts and units of measurement, indicating whether they are very close to the object or at a safe distance.
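The Android-to-laptop link described above needs a framing convention so the server knows where each video frame's bytes end on the stream. The paper does not specify one; a common choice, assumed here, is a length-prefixed message. A sketch of the encode/decode pair:

```python
import struct

def encode_frame(jpeg_bytes):
    """Prefix the JPEG payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def decode_frame(buffer):
    """Return (payload, remaining_bytes), or (None, buffer) if incomplete.

    Call repeatedly as socket data accumulates; a full frame is available
    only once the 4-byte header and the whole payload have arrived.
    """
    if len(buffer) < 4:
        return None, buffer
    (length,) = struct.unpack(">I", buffer[:4])
    if len(buffer) < 4 + length:
        return None, buffer
    return buffer[4:4 + length], buffer[4 + length:]
```

The Android client would write `encode_frame(frame)` to the socket for each captured frame, and the server would accumulate received bytes and call `decode_frame` until it yields a payload to pass to the detector.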

Flowchart of the system

1. Camera input
2. Store captured images
3. Preprocess the image
4. Identify object from image
5. Generate audio feedback signal
6. Send audio to user

The above general scheme of the system shows that the user must take into account the audio alerts presented by the system, while the device is in charge of processing the information acquired from the environment.
Results and Discussion
One of the main advantages of an Android-based navigation system with obstacle detection is that it can accurately identify and locate obstacles in the user's path. This is typically accomplished using computer vision algorithms that analyze the images captured by the camera on the device. By contrast, many other blind navigation systems rely solely on GPS data, which can be less precise and may not provide detailed information about the user's surroundings. Moreover, this particular system can calculate the distance between the user and objects in real time. This can be extremely helpful for users who are visually impaired, as it allows them to make informed decisions about how to navigate their surroundings. In contrast, many other blind navigation systems may provide only limited information about distance or may rely on users to estimate distance from auditory cues. Another advantage of an Android-based navigation system is that it is widely available and can be used on a variety of devices, including smartphones, laptops and tablets. This makes it highly accessible for a wide range of users, regardless of their location or financial resources. Additionally, Android-based navigation systems are typically highly customizable, allowing users to tailor the system to their specific needs and preferences.

Most importantly this navigation system gives a blind person the opportunity to gain a
better understanding of the objects in their surroundings. For example, if the system
identifies an object as a chair, the person can use their sense of touch to explore and
familiarize themselves with what a chair feels like.

However, the blind navigation system requires devices with high processing power, ample storage, a high-resolution camera, and long battery life in order to remain accurate.
Recommendations for Future Studies

1. It's crucial to put the needs of the user at the center of the design process. This means
involving blind and visually impaired individuals in the design and testing of the
navigation system to ensure that it meets their specific needs and preferences.

2. Blind navigation systems should not rely on a single sensor or technology. Instead, it's
important to incorporate multiple sensors such as cameras, GPS, and proximity sensors
to provide the most accurate and comprehensive information about the user's
surroundings.

3. Users have unique preferences and needs when it comes to navigating their
environment. Future developers should consider providing customization options that
allow users to personalize the navigation system to their specific needs and preferences.

4. Blind navigation systems are often used for extended periods of time and must be
reliable. Developers should prioritize battery life and accessibility features such as
screen reader compatibility to ensure that the system can be used for extended periods
without interruption.
Conclusion

In conclusion, the development of a blind navigation system can significantly improve the
mobility and independence of visually impaired individuals. The system relies on various
technologies such as computer vision, image processing, and machine learning to recognize
the environment and provide accurate and reliable navigation guidance.

The technical paper has highlighted the key components and algorithms used in the
development of a blind navigation system, including object detection, semantic segmentation,
and obstacle avoidance

Despite the unforeseen challenges that may arise, the blind navigation system has shown great potential in assisting visually impaired individuals to navigate their surroundings safely and efficiently. The system can be further improved through ongoing research and development, including the integration of new technologies and the refinement of existing algorithms.
References

1. Manduchi, R., & Coughlan, J. (2012). (Computer) vision without sight. Communications of the ACM, 55(1), 96–104. https://doi.org/10.1145/2063176.2063200
2. Koda, N., Kubo, M., & Ishigami, T. (2011). Assessment of dog guides by users in Japan and suggestions for improvement. Journal of Visual Impairment & Blindness.
3. Goddard, M. E., & Beilharz, R. G. (1982). Genetic and environmental factors affecting the suitability of dogs as guide dogs for the blind. Theoretical and Applied Genetics, 62, 97–102.
4. Torrado, J. C., Montoro, G., & Gomez, J. (2016). Easing the integration: A feasible indoor wayfinding system for cognitive impaired people. Pervasive and Mobile Computing, 31, 137–146.
5. Vision Impairment and Blindness [fact sheet]. (Accessed 13 January 2020). Available online: https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-im...
6. Jeamwatthanachai, W., Wald, M., & Wills, G. (2019). Indoor navigation by blind people: Behaviors and challenges in unfamiliar spaces and buildings. British Journal of Visual Impairment, 37(2), 140–153. https://doi.org/10.1177/0264619619833723
7. Dakopoulos, D., & Bourbakis, N. G. (2010). Wearable obstacle avoidance electronic travel aids for blind: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 40(1), 25–35.
8. Rao, A. S., Gubbi, J., Palaniswami, M., & Wong, E. (2016). A vision-based system to detect potholes and uneven surfaces for assisting blind people. In 2016 IEEE International Conference on Communications (ICC) (pp. 1–6).
9. Sekhar, V. C., Bora, S., Das, M., Manchi, P. K., Josephine, S., & Paily, R. (2016). Design and implementation of blind assistance system using real-time stereo vision algorithms. In 2016 29th International Conference on VLSI Design and 2016 15th International Conference on Embedded Systems (VLSID) (pp. 421–426).
10. Renier, L., & De Volder, A. G. (2010). Vision substitution and depth perception: Early blind subjects experience visual perspective through their ears. Disability and Rehabilitation: Assistive Technology, 5(3), 175–183.

