Week 5 - Module 2

The document discusses image acquisition techniques using sensor strips and arrays, highlighting their applications in flatbed scanners, medical imaging, and digital cameras. It explains the principles of image formation, including illumination and reflectance, and the processes of sampling and quantization necessary for converting continuous images into digital formats. Additionally, it covers the representation of digital images as numerical arrays, emphasizing the significance of spatial coordinates in image processing.


Image Acquisition with Sensor Strips (Line Sensor)

• The strip provides imaging elements in one direction.
• Motion perpendicular to the strip provides imaging in the other direction.
• This arrangement is used in most flatbed scanners.
• Sensor strips in a ring configuration are used in medical and industrial imaging to obtain cross-sectional ("slice") images of 3-D objects.
• A rotating X-ray source provides illumination, and X-ray sensitive sensors opposite the source collect the energy that passes through the object.
• The output of the sensors is processed by reconstruction algorithms, which transform the sensed data into meaningful cross-sectional images.
• The same principle is applied to magnetic resonance imaging (MRI) and positron emission tomography (PET), but with different illumination sources and sensors.
Image Acquisition using Sensor Arrays (Array Sensor)

• Individual sensing elements are arranged in the form of a 2-D array.
• Electromagnetic and ultrasonic sensing devices are arranged in this manner. This is also the predominant arrangement found in digital cameras.
• The key advantage of an array sensor is that a complete image can be obtained by focusing the energy pattern onto the surface of the array, because of its 2-D form.
The Principle of Image Acquisition with Sensor Arrays

• The energy from an illumination source is reflected from a scene.
• The imaging system collects the incoming energy and focuses it onto an image plane.
• The front end of the imaging system then projects the viewed scene onto the focal plane of the lens.
• The sensor array, which is coincident with the focal plane, produces outputs proportional to the integral of the light received at each sensor.
• Digital and analog circuitry sweep these outputs and convert them to an analog signal, which is then digitized by another section of the imaging system.
• The output is a digital image.
A Simple Image Formation Model

• Images are denoted by 2-D functions of the form f(x, y).
• The value of f at spatial coordinates (x, y) is a scalar quantity whose physical meaning is determined by the source of the image, and whose values are proportional to the energy radiated by a physical source (e.g., electromagnetic waves).
• Consequently, f(x, y) must be nonnegative and finite; that is,

    0 ≤ f(x, y) < ∞    (1)

• f(x, y) is characterized by two components:
  • the amount of source illumination incident on the scene being viewed (the illumination component, i(x, y))
  • the amount of illumination reflected by the objects in the scene (the reflectance component, r(x, y))
    f(x, y) = i(x, y) r(x, y)    (2)

where

    0 ≤ i(x, y) < ∞    (3)
    0 ≤ r(x, y) ≤ 1    (4)

• Reflectance is bounded by 0 (total absorption) and 1 (total reflectance).
• The nature of i(x, y) is determined by the illumination source, and r(x, y) is determined by the characteristics of the imaged objects.
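As a quick illustration of the product model in Eq. (2), the sketch below forms a small image element-wise in NumPy. The illumination and reflectance arrays are made-up example values, not data from the module.

```python
import numpy as np

# Hypothetical illustration of f(x, y) = i(x, y) * r(x, y).
# Illumination: a smooth left-to-right gradient (example values in lm/m^2).
i = np.tile(np.linspace(100.0, 1000.0, 4), (4, 1))

# Reflectance: a checkerboard alternating between black velvet (0.01)
# and snow (0.93), two of the typical reflectances listed on the next slide.
x, y = np.indices((4, 4))
r = np.where((x + y) % 2 == 0, 0.93, 0.01)

# The formed image is the element-wise product, nonnegative and finite (Eq. 1).
f = i * r
assert np.all((f >= 0) & np.isfinite(f))
print(f)
```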

Typical Illumination and Reflectance Values

• On a clear day, the sun may produce more than 90,000 lm/m² of illumination on the surface of the Earth, and less than 10,000 lm/m² on a cloudy day.
• On a clear evening, a full moon yields about 0.1 lm/m² of illumination.
• The typical illumination level in an office space is about 1,000 lm/m².
• Typical reflectance values are:
  • 0.01 for black velvet
  • 0.65 for stainless steel
  • 0.80 for flat-white wall paint
  • 0.90 for silver-plated metal
  • 0.93 for snow
• Let the intensity (gray level) of a monochrome image at any coordinates (x, y) be denoted by

    ℓ = f(x, y)    (5)

• Equations (2)–(4) show that

    L_min ≤ ℓ ≤ L_max    (6)

  where L_min = i_min r_min and L_max = i_max r_max.
• The interval [L_min, L_max] is called the intensity (or gray) scale.
• It is common practice to shift this interval numerically to the interval [0, 1], where ℓ = 0 is considered black and ℓ = 1 is considered white on the scale.
• All intermediate values are shades of gray varying from black to white.
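A minimal sketch of this numerical shift, assuming NumPy; the example intensities and the function name are illustrative only.

```python
import numpy as np

def to_unit_gray_scale(f):
    """Shift intensities from [L_min, L_max] to [0, 1]: 0 is black, 1 is white."""
    l_min, l_max = float(f.min()), float(f.max())
    if l_max == l_min:                      # flat image: avoid division by zero
        return np.zeros_like(f, dtype=float)
    return (f - l_min) / (l_max - l_min)

# Example: intensities spanning an arbitrary [L_min, L_max] interval.
f = np.array([[12.0, 50.0], [200.0, 930.0]])
print(to_unit_gray_scale(f))                # all values now lie in [0, 1]
```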
Image Sampling & Quantization

• There are numerous ways to acquire images, but the objective of any method of image acquisition is to generate digital images from sensed data.
• The output of most sensors is a continuous voltage waveform whose amplitude and spatial behaviour are related to the physical phenomenon being sensed.
• To create a digital image, the continuous sensed data needs to be converted into a digital format.
• This conversion requires two processes: sampling and quantization.
• Digitizing the coordinate values is called sampling.
• Digitizing the amplitude values is called quantization.
• Figure: (a) a continuous image f; (b) a scan line showing amplitude (intensity level) variations along the line segment AB in the continuous image, where random variations are image noise; (c) sampling and quantization: equally spaced samples are taken along line AB; (d) the digital scan line resulting from both sampling and quantization, with the digital samples shown as white squares.
• The intensity values must also be converted (quantized) into discrete quantities.
• The continuous intensity levels are quantized by assigning one of eight values to each sample, depending on the vertical proximity of a sample to a vertical tick mark.
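A minimal sketch of both steps for a single scan line, assuming NumPy. The "continuous" profile is simulated by a finely sampled curve plus noise and merely stands in for the intensity variation along line AB; the eight quantization levels mirror the example in the figure.

```python
import numpy as np

# Stand-in for the continuous intensity profile along line AB
# (a smooth variation plus small random fluctuations, i.e. image noise).
t = np.linspace(0.0, 1.0, 1000)
profile = 0.5 + 0.4 * np.sin(2.0 * np.pi * 2.0 * t) + 0.02 * np.random.randn(t.size)

# Sampling: keep equally spaced positions along the line.
num_samples = 32
idx = np.linspace(0, t.size - 1, num_samples).astype(int)
samples = profile[idx]

# Quantization: map each sample to the nearest of 8 discrete levels in [0, 1].
levels = np.linspace(0.0, 1.0, 8)
digital = levels[np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)]

print(digital)   # the digitized scan line: 32 samples, each one of 8 levels
```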
Representing Digital Images

• Let f(s, t) represent a continuous image function of two continuous variables, s and t.
• Suppose the continuous image is sampled into a digital image f(x, y) containing M rows and N columns, where (x, y) are discrete coordinates with x = 0, 1, 2, …, M − 1 and y = 0, 1, 2, …, N − 1.
• What is the value of the digital image at the origin?
• In general, the value of a digital image at any coordinates (x, y) is denoted by f(x, y), where x and y are integers.
• When referring to specific coordinates (i, j), the notation f(i, j) is used, where the arguments are integers.
• The section of the real plane spanned by the coordinates of an image is called the spatial domain, with x and y being referred to as spatial variables or spatial coordinates.
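For instance, with NumPy's zero-based indexing this convention maps directly onto array subscripts. In this sketch the row index plays the role of x and the column index the role of y; the values are arbitrary.

```python
import numpy as np

M, N = 3, 4                       # M rows, N columns
f = np.arange(M * N).reshape(M, N)

# x = 0, 1, ..., M-1 selects the row; y = 0, 1, ..., N-1 selects the column.
print(f[0, 0])                    # value of the digital image at the origin (0, 0)
print(f[M - 1, N - 1])            # value at the last spatial coordinate
```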

Three ways to represent a digital image f(x, y):

• (a) Image plotted as a surface: a plot of the function, with two axes determining spatial location and the third axis being the value of f as a function of x and y. This representation is useful when working with grayscale sets whose elements are expressed as triplets of the form (x, y, z), where x and y are spatial coordinates and z is the value of f at coordinates (x, y).
• (b) Image displayed as a visual intensity array: it shows f(x, y) as it would appear on a computer display or photograph, with the intensity of each point in the display proportional to the value of f at that point.
• (c) Image shown as a 2-D numerical array: an array (matrix) composed of the numerical values of f(x, y). This is the representation used for computer processing.
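The three representations can be reproduced with a short script. The sketch below assumes NumPy and Matplotlib and uses a small synthetic image purely for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the 3-D projection

# Small synthetic grayscale image f(x, y) with values in [0, 1].
x, y = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
f = (np.sin(x / 3.0) * np.cos(y / 3.0) + 1.0) / 2.0

fig = plt.figure(figsize=(9, 4))

# (a) Plotted as a surface: two axes give spatial location, the third gives f(x, y).
ax_a = fig.add_subplot(1, 2, 1, projection="3d")
ax_a.plot_surface(x, y, f, cmap="gray")

# (b) Displayed as a visual intensity array: brightness proportional to f(x, y).
ax_b = fig.add_subplot(1, 2, 2)
ax_b.imshow(f, cmap="gray", vmin=0.0, vmax=1.0)

# (c) Shown as a 2-D numerical array -- the form used for computer processing.
print(np.round(f, 2))

plt.show()
```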
