INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH IN ELECTRICAL, ELECTRONICS, INSTRUMENTATION AND CONTROL ENGINEERING
Vol. 4, Issue 5, May 2016
Abstract: In a few years robots will play a dominant role in the world, and human effort will be reduced as robots take over tasks. In the existing system, human hand movements are sensed by the robot through sensors and it imitates them. This is made possible by placing an accelerometer-based sensor on the user's hand: as the person moves his or her hand, the accelerometer moves accordingly, the sensor measures parameters corresponding to the position of the hand, and these sensed parameters are given to the sensors placed on the robotic arm, which moves accordingly. Hand gestures, and even whole-body posture, play a vital role in human communication. The primary intention of this paper is to generate a script in Vision Assistant and to acquire a real-time image through Vision Acquisition. The acquired image is transformed into a digital signal using image processing in LabVIEW, and the robot is interfaced with LabVIEW through a DAQ. The digital output is used to control the robot with several hand gestures and to drive it in different directions.
I. INTRODUCTION
Computer analysis of hand gestures has become a key research area in Human Computer Interaction (HCI), which is not restricted to the keyboard, mouse or other direct input devices. Ubiquitous Computing, Smart Rooms, Virtual Reality and many other important technical fields include gestures as a way of communication; therefore, research on computer analysis of hand gestures and body posture has significant scientific value [1]. The IMAQ vision toolbox presents a comprehensive set of digital image processing and acquisition functions that improve the efficiency of projects and diminish the programming effort of users, obtaining better results in a shorter time [2]. A vision-based hand gesture interface allows hand states (open/close) and a class of hand postures in a taxonomy of grasp types to be detected and recognized with a single passive camera. Inferring the full articulation of the fingers from a single camera is a challenging problem due to the complexity of hand articulation, the occlusion of the fingers, and complications in segmenting the hand from the background image. In the past, several researchers have developed hand pose estimation methods for vision-based gesture interfaces [3]. Using the captured data, several methods have been proposed for the robot to classify human emotions or to follow a human teacher's instruction [4]. In this project we use the technique of edge detection in Vision Assistant: the edges of the images are detected from changes in the contrast of the image. Diverse hand gestures are captured for faultless recognition of the images, and these gestures are processed to produce an analog voltage through the DAQ on the NI-ELVIS, which is interfaced with the robot.

II. PROCESS DESCRIPTION

The distinct hand gestures are shown in front of the web camera, which is used to acquire the image in real time. These real-time images are acquired into the simulation and processed.

Figure 1: Block diagram of the Model

The digital signals of the images are processed and converted into analog signals. The analog signals are taken out using the DAQ and NI-ELVIS, and these signals are given to the DC gear motors to move the robot.

III. LABVIEW TOOLS

a) Vision Assistant
The NI Vision Development Module is designed to develop and deploy machine vision applications. It includes hundreds of functions to acquire images from a multitude of cameras and to process images by enhancing them, checking for presence, locating features, identifying objects, and measuring parts. Vision Assistant is a tool for prototyping and testing image processing applications. To prototype an image processing application, build custom algorithms with the Vision Assistant scripting feature. The scripting feature records every step of the processing algorithm. After completing the algorithm, you can test it on other images to make sure it works.

The algorithm is recorded in a Builder file, which is an ASCII text file that lists the processing functions and relevant parameters for an algorithm prototype in Vision Assistant. Using the LabVIEW VI Creation Wizard, you can create a LabVIEW VI that performs the prototype which has been created in Vision Assistant.
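The contrast-based edge detection used in this project can be illustrated with a minimal sketch: an edge is flagged wherever neighbouring pixel intensities differ by more than some threshold. This Python/NumPy sketch is only an illustration of the idea, not the LabVIEW/Vision Assistant implementation, and the threshold value and synthetic scan line are assumptions.

```python
import numpy as np

def detect_edges(row, threshold=50):
    """Return indices where neighbouring pixel intensities differ
    by more than `threshold` grey levels (a contrast jump)."""
    diffs = np.abs(np.diff(row.astype(int)))  # contrast between neighbours
    return np.flatnonzero(diffs > threshold)

# A synthetic scan line: dark background with two bright "finger" bands.
row = np.array([10]*5 + [200]*5 + [10]*5 + [200]*5 + [10]*5, dtype=np.uint8)
print(detect_edges(row))  # four contrast jumps -> [ 4  9 14 19]
```

Each finger held before the camera contributes two contrast jumps (one per side), which is why the edge count can later serve as a gesture identifier.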
d) Browse Image:
The next step is Browse Image. The acquired images can be viewed by clicking the Browse Image icon. Unwanted images can be deleted by selecting them and clicking the Close Image icon. The Open Image icon is used to replace the selected image or to insert a new image from the images saved on the computer.
e) Process Image:
The final step is to process the acquired images. There are many processing functions available for clear image detection; here the Edge Detector from the Machine Vision palette is used. In the Edge Detector setup, the Edge Detector and Look For options are set to Simple Edge Tool and All Edges respectively. Once clear edges are obtained, the steps are saved as a script, and this script is loaded into LabVIEW.
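The idea of a recorded script, an ordered list of processing steps that is replayed on each new image, can be sketched as follows. The step functions and threshold here are hypothetical stand-ins written in Python for illustration; they are not Vision Assistant's own functions.

```python
import numpy as np

def to_grayscale(img):
    """Collapse the colour channels to a single intensity plane."""
    return img.mean(axis=-1)

def simple_edges(img, threshold=60):
    """Mark horizontal contrast jumps larger than `threshold`."""
    return (np.abs(np.diff(img, axis=1)) > threshold).astype(np.uint8)

# The "script": each recorded step is applied to the image in turn.
script = [to_grayscale, simple_edges]

def run_script(img, steps=script):
    for step in steps:
        img = step(img)
    return img

rgb = np.zeros((4, 6, 3), dtype=np.uint8)
rgb[:, 2:4] = 255                 # one bright vertical band (a "finger")
edges = run_script(rgb)
print(int(edges.sum() // edges.shape[0]))  # 2 edges per row: band's left and right sides
```

Replaying the same step list on every frame mirrors how the saved Vision Assistant script is reused inside LabVIEW.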
c) Acquiring image:
To create a script in Vision Assistant, the first step is to acquire the image from the camera. The options for acquiring the image are:

Acquire Image
Acquire Image (IEEE 1394 or GigE)
Acquire Image (Smart Camera)
Acquire Image (RT)
Simulate Acquisition

Since a web camera is used, it comes under the IEEE 1394 or GigE type. On selecting the acquisition type, a window opens with icons such as Acquire Continuous Image and Acquire Single Image. Click Acquire Continuous Image to get a view of the fingers, then click Acquire Single Image; repeat until the desired image is obtained. After getting the image, click Store Acquired Image in Browser to save the images.

Figure 5: Process image

f) LabVIEW programming
The Vision Acquisition express VI is placed on the block diagram by clicking and dragging it from the Vision Express palette; this opens the NI Vision Acquisition Express dialog box. The work to be done in Vision Acquisition is explained in the previous topic. The Vision Assistant express VI is also placed on the block diagram, which opens the NI Vision Assistant dialog box. Here the saved script is opened; click Select Controls, choose the output to be taken out, and click the Finish button.

For the movement of the robot, a case structure is used. The output from the Vision Assistant is numeric, and it is given to the case structure for selecting the cases. In each case, two DAQ Assistants are placed for controlling the two motors. In the DAQ Assistant, the voltage for the motor is generated with Generate Signals; under Generate Signals, Voltage is chosen and the output port is selected, with a separate port for each motor. After selecting the port, the generation mode is set to 1 Sample (On Demand). Give the voltage for the motor and click Run to test the DAQ, then click OK to create the DAQ task. By repeating these steps, the cases for the different motions of the robot are created by giving varying voltages to the DAQ.

The output image from Vision Acquisition is given as the input to the Vision Assistant, and the number of edges from the Vision Assistant is given to the case structure.
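The case-structure logic, where the edge count selects which voltages are written to the two motors, can be sketched in Python. The edge counts and voltage levels below are hypothetical examples chosen for illustration, not values taken from the paper; in the real setup each pair of voltages would be written to the two DAQ analog-output ports in on-demand mode as described above.

```python
def motor_voltages(edge_count):
    """Map an edge count (the case selector) to (left, right) motor
    voltages. Counts and voltages here are illustrative assumptions."""
    cases = {
        2: (2.5, 2.5),   # e.g. one finger  -> forward: both motors driven
        4: (2.5, 0.0),   # e.g. two fingers -> turn: left motor only
        6: (0.0, 2.5),   # e.g. three fingers -> turn: right motor only
    }
    return cases.get(edge_count, (0.0, 0.0))  # default case: stop

left_v, right_v = motor_voltages(4)
print(left_v, right_v)  # 2.5 0.0
```

The dictionary lookup with a default plays the role of the LabVIEW case structure's default case, so an unrecognized gesture safely stops the robot.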
Figure 8: Hardware
VIII. CONCLUSION
REFERENCES