
SCHOOL OF ELECTRICAL AND ELECTRONICS ENGINEERING

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

UNIT III- SENSORS AND VISION SYSTEM-SCSA1406


UNIT III
SENSORS AND VISION SYSTEMS



Sensor: Contact and Proximity, Position, Velocity, Force, Tactile, etc. Introduction to Cameras, Camera calibration, Geometry of image formation, Euclidean/Similarity/Affine/Projective transformations, Vision applications in robotics.

Sensors are devices that can sense and measure physical properties of the environment, e.g. temperature, luminance, resistance to touch, weight, size, etc. The key phenomenon behind them is transduction.

Transduction (engineering) is a process that converts one type of energy to another.

Transducer

A transducer is a device that converts a primary form of energy into a corresponding signal with a different energy form. Primary energy forms include mechanical, thermal, electromagnetic, optical, chemical, etc. A transducer may take the form of a sensor or an actuator.

Sensor (e.g., a thermometer)

A sensor is a device that detects/measures a signal or stimulus and acquires information from the "real world".

Tactile sensing

Touch and tactile sensors are devices which measure the parameters of contact between the sensor and an object. This interaction is confined to a small, defined region. This contrasts with a force and torque sensor, which measures the total forces being applied to an object. In the consideration of tactile and touch sensing, the following definitions are commonly used:

Touch Sensing

This is the detection and measurement of a contact force at a defined point. A touch sensor can also be restricted to binary information, namely touch and no-touch.
Tactile Sensing
This is the detection and measurement of the spatial distribution of forces
perpendicular to a predetermined sensory area, and the subsequent interpretation of
the spatial information. A tactile-sensing array can be considered to be a coordinated
group of touch sensors.
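As a concrete illustration of interpreting such spatial information, one simple computation over a tactile array is the centroid (centre of pressure) of the measured force distribution. The sketch below is a minimal example, assuming a hypothetical rectangular array of normal-force readings; the values and element pitch are purely illustrative.

    import numpy as np

    def center_of_pressure(tactile, pitch=1.0):
        """Return the (x, y) centre of pressure of a 2D tactile array.

        tactile : 2D array of normal-force readings (rows x cols)
        pitch   : spacing between sensing elements (e.g. in mm)
        """
        tactile = np.asarray(tactile, dtype=float)
        total = tactile.sum()
        if total <= 0:
            return None  # no contact detected
        rows, cols = np.indices(tactile.shape)
        x = (cols * tactile).sum() / total * pitch
        y = (rows * tactile).sum() / total * pitch
        return x, y

    # Illustrative 4x4 reading: a light touch near the upper-right corner
    readings = [[0, 0, 1, 3],
                [0, 0, 2, 5],
                [0, 0, 1, 2],
                [0, 0, 0, 0]]
    print(center_of_pressure(readings, pitch=2.0))  # (x, y) in mm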

Force/torque sensors

Force/torque sensors are often used in combination with tactile arrays to provide
information for force control. A single force/torque sensor can sense loads anywhere
on the distal link of a manipulator and, not being subject to the same packaging
constraints as a “skin” sensor, can generally provide more precise force measurements
at higher bandwidth. If the geometry of the manipulator link is defined, and if single-
point contact can be assumed (as in the case of a robot finger with a hemispherical tip
contacting locally convex surfaces), then a force/torque sensor can provide
information about the contact location from the ratios of forces and moments, in a technique called "intrinsic tactile sensing".

Proximity sensor

A proximity sensor is a sensor able to detect the presence of nearby objects without
any physical contact. A proximity sensor often emits an electromagnetic field or a
beam of electromagnetic radiation (infrared, for instance), and looks for changes in
the field or return signal. The object being sensed is often referred to as the proximity
sensor's target. Different proximity sensor targets demand different sensors. For
example, a capacitive or photoelectric sensor might be suitable for a plastic target; an
inductive proximity sensor always requires a metal target. The maximum distance that a sensor can detect is defined as its "nominal range". Some sensors have adjustments of the nominal range or means to report a graduated detection distance. Proximity sensors can have high reliability and a long functional life because of the absence of mechanical parts and the lack of physical contact between the sensor and the sensed object.

Proximity sensors are commonly used on smart phones to detect (and skip) accidental
touch screen taps when held to the ear during a call. They are also used in machine
vibration monitoring to measure the variation in distance between a shaft and its
support bearing. This is common in large steam turbines, compressors, and motors
that use sleeve-type bearings.
Fig.3.1 Types of Proximity Sensors

Fig.3.2 Capacitive Proximity Sensor


Ranging sensors

Ranging sensors include sensors that require no physical contact with the object being
detected. They allow a robot to see an obstacle without actually having to come into
contact with it. This can prevent possible entanglement, allow for better obstacle
avoidance (over touch-feedback methods), and possibly allow software to distinguish
between obstacles of different shapes and sizes. There are several methods used to allow a sensor to detect obstacles from a distance. Below are a few common methods, ranging in complexity and capability from very basic to very intricate. The following examples are only meant to give a general understanding of the many common types of ranging and proximity sensors as they apply to robotics.

Sensors used in Robotics

Fig 3.3 Industrial Robot with Sensor

The use of sensors has taken robots to the next level of capability. Most importantly, sensors have increased the performance of robots to a large extent. They also allow robots to perform several functions like a human being. Robots are even made intelligent with the help of visual sensors (generally called machine vision or computer vision), which help them respond according to the situation. The machine vision system is classified into six sub-divisions: sensing, pre-processing, segmentation, description, recognition, and interpretation.

Different types of sensors:

Proximity Sensor:

This type of sensor is capable of indicating the presence or absence of a component. Generally, the proximity sensor is placed on a moving part of the robot, such as the end effector. The sensor is turned ON at a specified distance, which may be measured in feet or millimeters. It is also used to detect the presence of a human being in the work volume so that accidents can be reduced.
Range Sensor:

A range sensor is implemented in the end effector of a robot to calculate the distance between the sensor and a work part. The distance values can also be judged by workers from visual data. It can evaluate the size of images and analyze common objects. The range is measured using sonar receivers & transmitters or two TV cameras.
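For the two-camera (stereo) case, range follows from triangulation: with focal length f, baseline B, and measured disparity d between the two images, depth is Z = f * B / d. A minimal sketch, with all numbers purely illustrative:

    def stereo_depth(focal_px, baseline_m, disparity_px):
        """Depth from a rectified stereo pair: Z = f * B / d.

        focal_px     : focal length expressed in pixels
        baseline_m   : distance between the two camera centres (metres)
        disparity_px : horizontal pixel shift of the same point between images
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px

    # Example: f = 700 px, baseline = 0.12 m, disparity = 35 px -> Z = 2.4 m
    print(stereo_depth(700, 0.12, 35))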

Tactile Sensors:

A sensing device that registers contact between an object and the sensor is considered a tactile sensor. This sensor can be sorted into two key types, namely: touch sensor and force sensor.

Fig 3.4 Touch Sensor and Force Sensor


The touch sensor has the ability to sense and detect contact between the sensor and an object. Some of the commonly used simple devices for touch sensing are micro-switches, limit switches, etc. If the end effector makes contact with any solid part, this sensor is handy for stopping the movement of the robot. In addition, it can be used as an inspection device, which has a probe to measure the size of a component.

The force sensor is included for measuring the forces involved in several functions performed by a robot, like machine loading & unloading, material handling, and so on. This sensor is also useful in the assembly process for checking problems. Several techniques are used for force sensing, such as joint sensing, robot-wrist force sensing, and tactile array sensing.

Robotic applications of a machine vision system

A machine vision system is employed in a robot for recognizing objects. It is commonly used to perform inspection functions in which industrial robots are not otherwise involved. It is usually mounted on a high-speed production line for accepting or rejecting work parts. The rejected work parts are removed by other mechanical apparatus coordinated with the machine vision system.
Camera Calibration:

Camera calibration is a necessary step in 3D computer vision.

• A calibrated camera can be used as a quantitative sensor.

• It is essential in many applications to recover 3D quantitative measures about the observed scene from 2D images, such as 3D Euclidean structure.

• From a calibrated camera we can measure how far an object is from the camera, the height of the object, etc., e.g., for obstacle avoidance in robot navigation.

Fig 3.5 Camera Calibration
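In practice, calibration is usually done by imaging a known pattern (e.g. a chessboard) from several viewpoints and solving for the camera parameters. The sketch below uses OpenCV's standard calibration routines; the image folder and the 9x6 board size are assumptions for illustration only.

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners of an assumed 9x6 chessboard
    # 3D corner positions on the board plane (z = 0), in board-square units
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for fname in glob.glob("calib_images/*.jpg"):   # assumed image folder
        gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Recover intrinsics (camera matrix K, distortion) and per-view extrinsics
    err, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("Reprojection error:", err)
    print("Camera matrix K:\n", K)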


Camera Models and Calibration:
Cameras provide a crucial sensing modality in the context of robotics. This is generally because images inherently contain an enormous amount of information about the environment. However, while images do contain a lot of information, extracting the information that is relevant to the robot is quite challenging. One of the most basic tasks related to image processing is determining how a particular point in the scene maps to a point in the camera image, which is sometimes referred to as perspective projection. The pinhole camera model and the thin lens model were presented earlier; here the pinhole camera model is leveraged to further explore perspective projection. (All results also hold under the thin lens model, assuming the camera is focused at infinity.)

Perspective Projection: The pinhole camera model, shown graphically in Fig. 3.6, can be used to mathematically define relationships between points P in the scene and points p on the image plane. Notice that any point P in the scene can be represented in two ways: in camera frame coordinates (denoted PC) or in world frame coordinates (denoted PW). The overall objective of this section is to derive a mathematical model that can be used to map a point PW expressed in world frame coordinates to a point p on the image plane. To accomplish this, two transformations are combined, namely a transformation of P from world frame coordinates to camera frame coordinates (PW to PC) and a transformation from camera coordinates to image coordinates (PC to p).
Fig.3.6 Graphical representation of the pinhole camera model

Mapping World Coordinates to Camera Coordinates (PW −→ PC)


Recall from Fig. 3.6 that a point P in the scene can be expressed either in camera frame coordinates PC or in world frame coordinates PW. While the previous section discussed the use of the pinhole model to map PC coordinates to pixel coordinates p, this section discusses the mapping between the camera and world frame coordinates of the point P. From Fig. 3.7 it can be seen that PC can be written as:

PC = t + q
Fig.3.7 Graphical representation of the pinhole camera model

where t is the vector from OC to OW expressed in camera frame coordinates and q is the vector from OW to P expressed in camera frame coordinates. However, the vector q is in fact the same vector as PW, just expressed in different coordinates (i.e. with respect to a different frame). The coordinates can be related by a rotation:

q = R PW

where R is the rotation matrix relating the camera frame to the world frame and is defined as:

R = | i . iw   i . jw   i . kw |
    | j . iw   j . jw   j . kw |
    | k . iw   k . jw   k . kw |

where i, j, and k are the unit vectors that define the camera frame and iw, jw, and kw are the unit vectors that define the world frame. To summarize, the point PW can be mapped to camera frame coordinates PC as:

PC = R PW + t

where t is the vector in camera frame coordinates from OC to OW and R is the rotation matrix. Similar to the previous section, these expressions can also be written equivalently for the case where the points PW and PC are expressed in homogeneous coordinates:

| PC |   | R  t | | PW |
| 1  | = | 0  1 | | 1  |
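The mapping just described (PC = R PW + t, followed by the pinhole projection through an intrinsic matrix K) can be written compactly in code. The sketch below assumes illustrative values for R, t and K; it is not tied to any particular camera.

    import numpy as np

    # Assumed extrinsics: camera rotation and translation w.r.t. the world frame
    R = np.eye(3)                     # rotation world -> camera (illustrative)
    t = np.array([0.0, 0.0, 2.0])     # vector O_C to O_W in camera coords (m)

    # Assumed intrinsics (pixels): focal lengths and principal point
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def world_to_pixel(P_w):
        """Map a world-frame point to pixel coordinates via the pinhole model."""
        P_c = R @ np.asarray(P_w, dtype=float) + t   # world -> camera frame
        if P_c[2] <= 0:
            raise ValueError("point is behind the camera")
        uvw = K @ P_c                                # perspective projection
        return uvw[:2] / uvw[2]                      # normalise by depth

    print(world_to_pixel([0.1, -0.05, 1.0]))         # a point 3 m in front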
Geometry of Image Formation

• The two parts of the image formation process


The geometry of image formation, which determines where in the image plane the projection of a point in the scene will be located.

The physics of light, which determines the brightness of a point in the image plane as a function of illumination and surface properties.

• A simple model
- The scene is illuminated by a single source.

- The scene reflects radiation towards the camera.

- The camera senses it via chemicals on film.

Fig. 3.8 Simple model


• Camera Geometry
- The simplest device to form an image of a 3D scene on a 2D surface is the "pinhole" camera.

- Rays of light pass through a "pinhole" and form an inverted image of the object on the image plane.

Fig. 3.9. Camera Geometry

Camera Optics

- In practice, the aperture must be larger to admit more light.

- Lenses are placed in the aperture to focus the bundle of rays from each scene point onto the corresponding point in the image plane.

Fig. 3.10. Camera Optics


• Diffraction and Pinhole Optics

Fig. 3.11. Different pinhole positions

- If we use a wide pinhole, light from the source spreads across the image (i.e., not properly focused), making it blurry.

- If we narrow the pinhole, only a small amount of light is let in.

* the image sharpness is limited by diffraction.

* when light passes through a small aperture, it does not travel in a straight line.

* it is scattered in many directions (this is a quantum effect).

- In general, the aim of using a lens is to duplicate the pinhole geometry without resorting to undesirably small apertures.

• Human Vision
- At high light levels, pupil (aperture) is small and blurring is due to diffraction.

- At low light levels, pupil is open and blurring is due to lens imperfections.
• CCD Cameras
- An array of tiny solid state cells convert light energy into electrical charge.

- Manufactured on chips typically measuring about 1cm x 1cm (for a 512x512 array, each element
has a real width of roughly 0.001 cm).

- The output of a CCD array is a continuous electric signal (video signal) which is generated by scanning the photo-sensors in a given order (e.g., line by line) and reading out their voltages.

Fig. 3.12. CCD Camera


• Frame grabber
- The video signal is sent to an electronic device called the frame grabber.

- The frame grabber digitizes the signal into a 2D, rectangular array N x M of integer values, stored
in the frame buffer



CCD array and frame buffer

- In a CCD camera, the physical image plane is the CCD array, an n x m rectangular grid of photo-sensors.

- The pixel image plane (frame buffer) is an array of N x M integer values (pixels).

- The position of the same point on the image plane will be different if measured in CCD elements (x, y) or image pixels (xim, yim).

- In general, n ≠ N and m ≠ M; assuming that the origin in both cases is the upper-left corner, we have:

    xim = (N/n) x        yim = (M/m) y

where (xim, yim) are the coordinates of the point in the pixel plane and (x, y) are the coordinates of the point in the CCD plane.

- In general, it is convenient to assume that the CCD elements are always in one-to-one
correspondence with the image pixels.

- Units in each case:


(xim, yim) is measured in pixels;

(x, y) is measured, e.g., in millimeters.
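As a tiny worked example of the scaling above, suppose a 500 x 500 CCD is read into a 512 x 512 frame buffer; the numbers below are illustrative only, with CCD coordinates given in CCD-element units.

    def ccd_to_pixel(x, y, n, m, N, M):
        """Convert CCD-plane coordinates (in CCD-element units) to pixel
        coordinates, assuming both origins are at the upper-left corner."""
        return (N / n) * x, (M / m) * y

    # Illustrative: a 500x500 CCD read into a 512x512 frame buffer
    print(ccd_to_pixel(250.0, 100.0, n=500, m=500, N=512, M=512))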

Fig. 3.13. CCD Camera with frame grabber


Reference Frames

- Five reference frames are needed for general problems in 3D scene analysis.

Fig 3.14 Reference Frames

Object Coordinate Frame

- This is a 3D coordinate system: xb, yb, zb

- It is used to model ideal objects in both computer graphics and computer vision.

- It is needed to inspect an object (e.g., to check if a particular hole is in its proper position relative to other holes).

- The coordinates of a 3D point B, e.g., relative to the object reference frame are (xb, 0, zb).

- Object coordinates do not change regardless of how the object is placed in the scene.

Notation: (Xo, Yo, Zo)^T
World Coordinate Frame

- This is a 3D coordinate system: xw, yw, zw

- The scene consists of object models that have been placed (rotated and translated) into the scene, yielding object coordinates in the world coordinate system.

- It is needed to relate objects in 3D (e.g., the image sensor tells the robot where to pick up a bolt and in which hole to insert it).

Notation: (Xw, Yw, Zw)^T

Camera Coordinate Frame

- This is a 3D coordinate system (xc, yc, zc axes)

- Its purpose is to represent objects with respect to the location of the camera.

Notation: (Xc, Yc, Zc)^T

Fig 3.14 Camera Coordinate Frame


• Image Plane Coordinate Frame (CCD plane)
- This is a 2D coordinate system (xf, yf axes)

- Describes the coordinates of 3D points projected on the image plane.

- The projection of A, e.g., is point a, both of whose coordinates are negative.

Notation: (x, y)^T

Pixel Coordinate Frame

- This is a 2D coordinate system (r, c axes)

- Each pixel in this frame has integer pixel coordinates.

- Point A, e.g., gets projected to image point (ar, ac), where ar and ac are the integer row and column indices.

Notation: (xim, yim)^T

Fig 3.14 Pixel Coordinate Frames


Transformations between frames
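A sketch of how the frames above chain together: a point given in object coordinates is first placed into the world frame, then into the camera frame, then projected to the image plane and finally scaled to pixel coordinates. All matrices, focal length, pixel sizes and offsets below are illustrative placeholders, not values from any particular setup.

    import numpy as np

    def homogeneous(R, t):
        """Build a 4x4 homogeneous transform from rotation R and translation t."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Assumed poses: object placed in the world, world seen from the camera
    T_world_object = homogeneous(np.eye(3), [1.0, 0.5, 0.0])
    T_camera_world = homogeneous(np.eye(3), [0.0, 0.0, 3.0])

    f = 0.008                 # focal length in metres (illustrative)
    sx, sy = 1e-5, 1e-5       # effective pixel size in metres (illustrative)
    ox, oy = 320, 240         # image centre in pixels (illustrative)

    P_obj = np.array([0.02, 0.0, 0.01, 1.0])          # point in object frame
    P_cam = T_camera_world @ T_world_object @ P_obj   # object -> world -> camera
    x = f * P_cam[0] / P_cam[2]                       # image-plane coordinates
    y = f * P_cam[1] / P_cam[2]
    x_im = x / sx + ox                                # pixel coordinates
    y_im = y / sy + oy
    print(x_im, y_im)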

Machine Vision System


Fig 3.5 Block Diagram of Functions of Machine Vision System
A machine vision system is a sensing system used in robots for viewing and recognizing an object with the help of a computer. It is mostly used in industrial robots for inspection purposes. This system is also known as artificial vision or computer vision. It has several components such as a camera, a digital computer, digitizing hardware, and interface hardware & software. The machine vision process includes three important tasks, namely:

• Sensing & Digitizing Image Data

• Image Processing & Analysis

• Applications

Sensing & Digitizing Image Data:


A camera is used in the sensing and digitizing tasks for viewing the images. It makes use of special lighting methods for gaining better picture contrast. The images are changed into digital form, known as the frame of the vision data. A frame grabber is incorporated for taking digitized images continuously at 30 frames per second. Rather than storing scene projections directly, every frame is divided into a matrix. By performing a sampling operation on the image, the number of pixels can be identified. The pixels are generally described by the elements of the matrix. Each pixel is reduced to a single value measuring the intensity of light. As a result of this process, the intensity of every pixel is changed into a digital value and stored in the computer's memory.
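A minimal sketch of this sensing and digitizing step using OpenCV: a frame is grabbed from a camera, converted to a grayscale intensity matrix, and each pixel's intensity is stored as an 8-bit digital value. The camera index 0 is an assumption about the attached hardware.

    import cv2

    cap = cv2.VideoCapture(0)          # assumed default camera
    ok, frame = cap.read()             # grab one digitized frame
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # intensity matrix
        print("frame size:", gray.shape)                 # e.g. (480, 640)
        print("intensity at row 100, col 200:", gray[100, 200])  # value 0..255
    cap.release()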
Image Processing & Analysis:

In this function, the image interpretation and data reduction processes are carried out. Thresholding an image frame produces a binary image, which reduces the data. The data reduction helps in converting the frame from raw image data to feature-value data. The feature-value data can be calculated via computer programming. This is performed by matching image descriptors like size and appearance with the data previously stored on the computer.
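A minimal sketch of the thresholding step described above, using OpenCV; the file name and threshold value are assumptions. The resulting binary image can then be reduced to feature values such as area and perimeter.

    import cv2

    gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)           # assumed input image
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)  # data reduction

    # Extract simple feature values from the largest blob
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        blob = max(contours, key=cv2.contourArea)
        print("area:", cv2.contourArea(blob))
        print("perimeter:", cv2.arcLength(blob, closed=True))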

The image processing and analysis function is made more effective by training the machine vision system regularly. Several kinds of data are collected in the training process, such as perimeter length, outer & inner diameter, area, and so on. Here, the camera is very helpful in identifying matches between the computer models and the feature-value data of new objects.
Applications:

Some of the important applications of the machine vision system in the robots are:

• Inspection
• Orientation
• Part Identification
• Location
Signal conversion

Interface modules are the links between the real physical process and the control system. The [EEx ia] version of these function modules is used to assure safe data transmission from a potentially explosive area to the non-hazardous area and vice versa.

Image Processing

Robotic vision continues to be treated with different methods for processing, analyzing, and understanding images. All these methods produce information that is translated into decisions for robots. From the initial capture of images to the final decision of the robot, a wide range of technologies and algorithms are used, acting like a committee of filters and decisions.

A scene may contain one object or several objects with different colors and sizes. A robotic vision system has to make the distinction between objects and, in almost all cases, has to track these objects. Applied in the real world of robotic applications, these machine vision systems are designed to duplicate the abilities of the human vision system using programming code and electronic parts. While human eyes can detect and track many objects at the same time, robotic vision systems still have to overcome the difficulty of detecting and tracking many objects simultaneously.
Machine Vision

A robotic vision system finds its place in many fields, from industry to robotic services. Whether used for identification or navigation, these systems are under continuing improvement, with new features like 3D support, filtering, or detection of light intensity applied to an object.

Applications and benefits of robotic vision systems used in industry or for service robots:

• automating processes;

• object detection;

• estimation by counting any type of moving object;

• applications for security and surveillance;

• inspection, to remove parts with defects;

• defense applications;

• navigation of autonomous vehicles or mobile robots;

• human-computer interaction.


Object tracking software

A tracking system has a well-defined role: to observe persons or objects while they are moving. In addition, tracking software is capable of predicting the direction of motion and recognizing the object or person. OpenCV is the most popular and widely used machine vision library, with open-source code and comprehensive documentation. Starting with image processing, 3D vision and tracking, fitting, and many other features, the library includes more than 2500 algorithms. The library has interfaces for C++, C, Python, and Java, and can run under Windows, Linux, Android, or Mac operating systems.
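As a minimal illustration of tracking moving objects with OpenCV, the sketch below uses background subtraction to find moving blobs frame by frame and reports their centroids; the video file name and the area threshold are assumptions.

    import cv2

    cap = cv2.VideoCapture("traffic.mp4")            # assumed input video
    subtractor = cv2.createBackgroundSubtractorMOG2()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)               # foreground = moving pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:             # ignore small noise blobs
                x, y, w, h = cv2.boundingRect(c)
                print("moving object near", (x + w // 2, y + h // 2))
    cap.release()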

SwisTrack

Used for object tracking and recognition, SwisTrack is one of the most advanced tools used in machine vision applications. This tracking tool requires only a video camera for tracking objects in a wide range of situations. Internally, SwisTrack is designed with a flexible architecture and uses the OpenCV library. This flexibility opens the door to implementing new components in order to meet the requirements of the user.

visual navigation

Autonomous navigation is one of the most important capabilities of a mobile robot. Because of slipping and incorrigible drift errors in the sensors, it is difficult for a mobile robot to determine its own location after long-distance navigation. In one reported approach, perceptual landmarks are used to solve this problem, and visual servoing control is adopted for the robot to achieve self-location. At the same time, in order to detect and extract the artificial landmarks robustly under different illumination conditions, the color model of the landmarks is built in the HSV color space. These functions were all tested in real time under experimental conditions.
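A minimal sketch of detecting a colored landmark by thresholding in HSV space, as described above; the HSV bounds (here a reddish hue), the image name, and the use of the blob centroid as the servoing input are assumptions that would be tuned to the actual landmark and lighting.

    import cv2
    import numpy as np

    frame = cv2.imread("camera_frame.png")                # assumed input image
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)          # HSV separates hue from brightness

    lower = np.array([0, 120, 70])     # illustrative bounds for a red landmark
    upper = np.array([10, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        landmark = max(contours, key=cv2.contourArea)
        M = cv2.moments(landmark)
        if M["m00"] > 0:
            cx, cy = int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])
            print("landmark centre in image:", (cx, cy))  # input to visual servoing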

Edge Detector

The Edge Detector Robot from IdeaFires is an innovative approach to robotics learning. It is a simple autonomous robot fitted with controller and sensor modules. The robot senses the edge of a table or any other surface and turns in such a way that it prevents itself from falling.
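A rough behavioural sketch of such an edge-avoiding robot: if the downward-facing sensor no longer sees the surface, back up and turn. The read_edge_sensor(), drive() and turn() helpers are hypothetical placeholders for the robot's actual I/O routines, not a real API.

    import time

    def read_edge_sensor():
        """Hypothetical: True while the downward sensor still sees the table."""
        raise NotImplementedError

    def drive(speed):
        """Hypothetical motor command: positive = forward, negative = reverse."""
        raise NotImplementedError

    def turn(degrees):
        """Hypothetical in-place turn."""
        raise NotImplementedError

    def edge_avoid_loop():
        while True:
            if read_edge_sensor():     # surface detected: keep going
                drive(0.3)
            else:                      # edge detected: back away and turn
                drive(-0.3)
                time.sleep(0.5)
                turn(120)
            time.sleep(0.05)           # simple control period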
TEXT/REFERENCE BOOKS:
1. Saha, S.K., "Introduction to Robotics", 2nd Edition, McGraw-Hill Higher Education, New Delhi, 2014.
2. Ashitava Ghosal, "Robotics: Fundamental Concepts and Analysis", Oxford University Press, New Delhi, 2006.
3. Mikell P. Groover, Mitchell Weiss, Roger N. Nagel, Nicholas G. Odrey, "Industrial Robotics: Technology, Programming and Applications", Tata McGraw-Hill Publishing Company Limited, New Delhi, Third Reprint, 2008.
