Week 1

CoE4TN3 Image Processing
Chapter 2: Digital Image Fundamentals

Digital Image Fundamentals

• Elements of visual perception
• Image sensing and acquisition
• Sampling and quantization
• Relationship between pixels

The Human Visual System (HVS)

• Why study the HVS?
  – A true measure of image processing quality is how well the image appears to the observer.
  – The HVS is very complex and is not completely understood. However, many of its properties can be identified and used to our advantage.

Structure of the Human Eye

• Eye: nearly a sphere, about 20 mm in diameter
• Consists of 3 membranes:
  1. Cornea and sclera
  2. Choroid
  3. Retina
• Cornea: transparent
• Sclera: opaque, connected to the cornea
• Choroid: network of blood vessels
• At the front, the choroid is connected to the iris diaphragm
• Iris: contracts or expands to control the amount of light
• Pupil: central opening of the iris, 2 to 8 mm in diameter

Structure of the Human Eye (cont.)

• Lens:
  – Focuses light on the retina
  – Contains 60% to 70% water
  – Absorbs 8% of visible light
  – High absorption in infrared and ultraviolet (can damage the eye)
• Retina: the innermost layer, covering the posterior portion of the eye
• When the eye is properly focused, light from an object is imaged on the retina
• Light receptors are distributed over the surface of the retina
• The retina contains two kinds of light receptors: cones and rods
  – Cones:
    • 6 to 7 million
    • Located mainly in the central part of the retina (the fovea)
    • Sensitive to color
    • Can resolve fine detail because each one is connected to its own nerve
    • Cone vision: photopic, or bright-light, vision
  – Rods:
    • 75 to 150 million
    • No color vision; responsible for low-light vision
    • Distributed over a wide region of the retina
    • Rod vision: scotopic, or dim-light, vision
The Human Eye (cont.)

• Blind spot: a region of the retina without receptors, where the optic nerve passes through
• Fovea: a circular area about 1.5 mm in diameter
• A comparison between the eye (fovea) and a CCD camera:
  – Density of cones in the fovea: 150,000/mm²
  – Number of cones: about 337,000
  – A medium-resolution CCD chip has the same number of elements in a 5 mm × 5 mm area
• Perception takes place by excitation of receptors, which transform radiant energy into electrical impulses that are decoded by the brain

Image Formation in the Eye

• The lens is flexible
• The refractive power of the lens is controlled by its thickness
• Thickness is controlled by the tension of muscles connected to the lens
• Focusing on distant objects: the lens is relatively flattened; refractive power is at its minimum
• Focusing on near objects: the lens is thicker; refractive power is at its maximum
Brightness & Intensity

• The dynamic range of light intensity to which the eye can adapt is enormous, on the order of 10^10, from the scotopic threshold to the glare limit
• Brightness (intensity as perceived by the visual system) is approximately a logarithmic function of light intensity
• The HVS cannot operate over this entire range simultaneously; it accommodates large variations through brightness adaptation

Weber Experiment

• Characterizes the intensity discrimination properties of the eye
• ΔI starts at zero and is increased slowly
• The observer is asked to indicate when the circle of intensity I + ΔI on a background of intensity I becomes visible (the just-noticeable difference)
• The ratio ΔI/I is called the Weber ratio
• The procedure is repeated for different values of I

Weber Experiment (cont.)

• Small values of ΔI/I: good discrimination
• Large values of ΔI/I: poor discrimination
• At low levels of illumination the Weber ratio is high: poor discrimination
• At high levels of illumination, discrimination improves

Intensity & Brightness

• The relationship between brightness and intensity is not a simple function!
Intensity & Brightness (cont.)

• Mach band effect: although the shades are constant, overshoot and undershoot are observed near the transition boundaries

Image Sensing and Acquisition

• If a sensor can be developed that is capable of detecting energy radiated in a band of the EM spectrum, we can image events in that band
• An image is generated by energy of the illumination source reflected by objects (natural scenes) or transmitted through them (X-ray)
• A sensor detects the energy and converts it to electrical signals
• The sensor must contain a material that is responsive to the particular type of energy being detected

Image Sensing and Acquisition (cont.)

• Three principal sensor arrangements:
  1. Single imaging sensor
  2. Line sensor
  3. Array sensor

Single Sensor

• The most familiar sensor of this type is the photodiode
• To generate a 2-D image using a single sensor, there must be relative displacement in both the x and y directions between the sensor and the area to be imaged

Sensor Strips

• Sensor elements are arranged in a line
• The strip provides imaging in one direction; motion provides imaging in the other direction
• Used in scanners and airborne imaging
• Airborne imaging: the imaging system is mounted on an aircraft that flies at a constant altitude over the area to be imaged
• Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional images of 3-D objects (CAT)
• The output of the sensors must be processed by reconstruction algorithms to transform the sensed data into meaningful cross-sectional images
Sensor Arrays

• This is the arrangement used in digital cameras
• The typical sensor for these cameras is the CCD (charge-coupled device) array
• Since the sensor is two-dimensional, a complete image can be obtained
• Motion is not necessary

A Simple Image Model

• The amount of light that enters the eye depends on:
  1. The amount of source illumination incident on the scene, i(x,y)
  2. The amount of illumination reflected by the objects in the scene, r(x,y)

  f(x,y) = i(x,y) · r(x,y)

• (x,y): spatial coordinates
• Total absorption: r(x,y) = 0
• Total reflection: r(x,y) = 1

Sampling & Quantization

• Computer processing: the image f(x,y) must be digitized both spatially and in amplitude
• Digitization of the spatial coordinates: sampling
• Digitization of the amplitude: quantization
• Image: [f(i,j)] of size N × M
• What should the values of N, M and the number of gray levels G be?
• Normally: N = 2^n, M = 2^m, G = 2^k
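The product model above is easy to demonstrate in a few lines of Python. This is an illustrative sketch, not part of the slides; the function name `image_from_illumination` is made up for the example.

```python
# Sketch of the image model f(x, y) = i(x, y) * r(x, y).
# `image_from_illumination` is a hypothetical helper, not from the slides.
def image_from_illumination(i, r):
    """Pointwise product of illumination i(x,y) and reflectance r(x,y).
    r must lie in [0, 1]: 0 means total absorption, 1 total reflection."""
    assert all(0.0 <= rv <= 1.0 for row in r for rv in row)
    return [[iv * rv for iv, rv in zip(irow, rrow)]
            for irow, rrow in zip(i, r)]

illum = [[100.0, 100.0],
         [100.0, 100.0]]     # uniform source illumination
refl = [[0.0, 0.5],
        [1.0, 0.25]]         # per-pixel reflectance
f = image_from_illumination(illum, refl)
# f == [[0.0, 50.0], [100.0, 25.0]]
```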

Sampling & Quantization (cont.)

• Number of bits required to store the image: N × M × k
• The larger the values of N, M and G, the better the approximation of a continuous image
• Storage and processing requirements increase as well
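The storage formula above is simple enough to check numerically. A minimal sketch; the helper name is made up for illustration.

```python
def image_storage_bits(N, M, k):
    """Bits needed to store an N x M image with G = 2**k gray levels:
    N * M * k, as stated above."""
    return N * M * k

# A 1024 x 1024 image with 256 gray levels (k = 8):
bits = image_storage_bits(1024, 1024, 8)
print(bits, "bits =", bits // 8, "bytes")
```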

Effects of Reducing Spatial Resolution

• Effect of reducing spatial resolution: a checkerboard pattern appears

Effects of Reducing Gray Levels

• Appearance of fine ridge-like structures in areas of smooth gray levels
• This effect is called false contouring
Zooming and Shrinking

• Zooming: increasing the resolution (size) of an image
• Shrinking: decreasing the resolution of an image
• Example of zooming: we have an image of 500×500 pixels and we want to enlarge it to 750×750
• Zooming has two steps: creation of new pixel locations, and assignment of gray levels to those locations

Zooming

• A simple way of zooming, which works for enlarging an image by an integer factor, is pixel replication
• To visualize the assignment, imagine the enlarged image grid placed over the original image
• The gray level of each pixel in the enlarged image is set to the gray level of its nearest pixel in the original image
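Pixel replication as described above can be sketched directly; this is a minimal illustration for integer factors, with a hypothetical function name.

```python
def zoom_nearest(img, factor):
    """Enlarge a 2-D list of gray levels by an integer factor using pixel
    replication: each new pixel copies its nearest original pixel."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(factor)]  # replicate columns
        out.extend([wide[:] for _ in range(factor)])    # replicate rows
    return out

img = [[10, 20],
       [30, 40]]
zoomed = zoom_nearest(img, 2)
# zoomed == [[10, 10, 20, 20],
#            [10, 10, 20, 20],
#            [30, 30, 40, 40],
#            [30, 30, 40, 40]]
```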

Zooming (cont.)

• A more sophisticated way of accomplishing gray-level assignment is bilinear interpolation:

  v(x',y') = a·x' + b·y' + c·x'·y' + d

• The four coefficients are determined from four equations in four unknowns, written using the four nearest neighbors of the point (x',y')

Shrinking

• Shrinking by an integer factor can be done by deleting some of the rows and columns of the image
• Shrinking by a noninteger factor can be done as the inverse of zooming
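The bilinear form v(x',y') = a·x' + b·y' + c·x'·y' + d can be evaluated without explicitly solving for the coefficients: over the unit square spanned by the four nearest neighbors it is algebraically equivalent to the weighted sum below. A sketch with a hypothetical function name.

```python
def bilinear(v00, v10, v01, v11, dx, dy):
    """Interpolate inside the unit square whose corners hold the four
    nearest neighbors: v00 = v(0,0), v10 = v(1,0), v01 = v(0,1),
    v11 = v(1,1); (dx, dy) in [0,1]^2 is the fractional position.
    Equivalent to fitting v = a*x + b*y + c*x*y + d to the corners."""
    return (v00 * (1 - dx) * (1 - dy) + v10 * dx * (1 - dy)
            + v01 * (1 - dx) * dy + v11 * dx * dy)

# Corner values are reproduced exactly; the center is the average of the four:
assert bilinear(0, 10, 20, 30, 0, 0) == 0
assert bilinear(0, 10, 20, 30, 1, 1) == 30
assert bilinear(0, 10, 20, 30, 0.5, 0.5) == 15.0
```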

Zooming (example)

[Figure: binary image of a jagged diagonal line, illustrating the effect of zooming on a simple shape]
Relationship Between Pixels

• Neighbors
• Adjacency
• Path
• Connectivity
• Region
• Boundary
• Distance

Basic Relationships Between Pixels

• A pixel p at coordinates (x,y) has four horizontal and vertical neighbors:
  N4(p) = {(x+1,y), (x−1,y), (x,y+1), (x,y−1)}
• The four diagonal neighbors of p:
  ND(p) = {(x+1,y+1), (x−1,y−1), (x−1,y+1), (x+1,y−1)}
• The eight-point neighbors of p:
  N8(p) = N4(p) ∪ ND(p)
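The three neighbor sets translate directly into Python set expressions; a minimal sketch of the definitions above.

```python
def N4(p):
    """4-neighbors of p = (x, y): horizontal and vertical."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def ND(p):
    """Diagonal neighbors of p."""
    x, y = p
    return {(x + 1, y + 1), (x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1)}

def N8(p):
    """8-neighbors: union of the two sets above."""
    return N4(p) | ND(p)

p = (5, 5)
assert len(N4(p)) == 4 and len(ND(p)) == 4 and len(N8(p)) == 8
assert (6, 5) in N4(p) and (6, 6) in ND(p)
```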

Adjacency

• Two pixels are adjacent if they are neighbors and their gray levels are similar
• V: a set of gray levels; "similar" means the gray levels of both pixels belong to V
• Examples:
  – Binary images: V = {1}
  – Gray-level images: V = {32, 33, …, 63, 64}
• 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in N4(p)
• 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in N8(p)
• 4-adjacency can leave paths broken; 8-adjacency can create multiple paths

  0 1 1
  0 1 0
  0 0 1

  [Figure: the same arrangement shown with 4-adjacent and with 8-adjacent paths drawn]

Adjacency (cont.)

• m-adjacency: two pixels p and q with values from V are m-adjacent if:
  – q is in N4(p), or
  – q is in ND(p) and the intersection N4(p) ∩ N4(q) has no pixels with values in V

  0 1 1
  0 1 0
  0 0 1

  [Figure: the same arrangement, with q at the top and p at the bottom; m-adjacency removes the multiple paths allowed by 8-adjacency]

Path

• A path from pixel p with coordinates (x,y) to pixel q with coordinates (s,t) is a sequence of distinct pixels with coordinates (x0,y0), (x1,y1), …, (xn,yn), where (x0,y0) = (x,y), (xn,yn) = (s,t), and pixels (xi,yi) and (xi−1,yi−1) are adjacent for 1 ≤ i ≤ n
• n is the length of the path
• We can have 4-, 8-, or m-paths depending on the type of adjacency specified
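The m-adjacency rule can be written directly as a predicate. A sketch under stated assumptions: coordinates are (row, col) indices into a 2-D list, and all function names are made up for the example.

```python
def N4(p):
    r, c = p
    return {(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)}

def ND(p):
    r, c = p
    return {(r + 1, c + 1), (r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1)}

def in_V(img, V, p):
    """True if p is inside the image and its value belongs to V."""
    r, c = p
    return 0 <= r < len(img) and 0 <= c < len(img[0]) and img[r][c] in V

def m_adjacent(img, V, p, q):
    """p and q are m-adjacent if q is in N4(p), or q is in ND(p) and
    N4(p) ∩ N4(q) contains no pixel whose value is in V."""
    if not (in_V(img, V, p) and in_V(img, V, q)):
        return False
    if q in N4(p):
        return True
    if q in ND(p):
        return not any(in_V(img, V, t) for t in N4(p) & N4(q))
    return False

# The 3x3 arrangement from the slide, with V = {1}:
img = [[0, 1, 1],
       [0, 1, 0],
       [0, 0, 1]]
assert m_adjacent(img, {1}, (0, 1), (1, 1))       # 4-adjacent
assert not m_adjacent(img, {1}, (0, 2), (1, 1))   # diagonal blocked by (0, 1)
```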

Connectivity

• S: a subset of pixels in an image
• Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S
• We can have 4-, 8-, or m-connectivity depending on the type of path specified

Region

• R: a subset of pixels in an image
• R is called a region if every pixel in R is connected to every other pixel in R
• Boundary (border, or contour) of a region: the set of pixels in the region that have one or more neighbors that are not in R

  0 1 1 0      0 1 1 0
  0 1 1 1      0 1 1 1
  0 0 1 0      0 0 1 0
  0 0 0 0      1 0 0 0

Distance Measures

• For pixels p, q and z with coordinates (x,y), (s,t) and (v,w), respectively, D is a distance function (metric) if:
  – D(p,q) ≥ 0
  – D(p,q) = D(q,p)
  – D(p,z) ≤ D(p,q) + D(q,z)
• D4 (city-block) distance:
  D4(p,q) = |x − s| + |y − t|
• D8 (chessboard) distance:
  D8(p,q) = max{|x − s|, |y − t|}
• Euclidean distance:
  De(p,q) = [(x − s)² + (y − t)²]^(1/2)

  Pixel values   D4 distances   D8 distances
  0 1 1          2 1 2          1 1 1
  0 1 0          1 0 1          1 0 1
  0 0 1          2 1 2          1 1 1

  (distances measured from the center pixel)
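The three distance functions above translate line-for-line into Python; a minimal sketch.

```python
def D4(p, q):
    """City-block distance: |x - s| + |y - t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def D8(p, q):
    """Chessboard distance: max(|x - s|, |y - t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def De(p, q):
    """Euclidean distance."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

p, q = (0, 0), (2, 1)
assert D4(p, q) == 3                     # two steps one way, one the other
assert D8(p, q) == 2                     # diagonal moves count as one
assert abs(De(p, q) - 5 ** 0.5) < 1e-12
assert D4(p, q) == D4(q, p)              # symmetry, one of the metric axioms
```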

Distance Measures (cont.)

• Dm distance: the length of the shortest m-path between two pixels
• The D4 and D8 distances between p and q are independent of the pixels along the path
• Dm depends on the values of the pixels between p and q

  0 0 1      0 0 1
  1 1 0      0 1 0
  1 0 0      1 0 0
  Dm = 3     Dm = 2

Linear & Non-linear Operations

• H: an operator whose inputs and outputs are images
• H is linear if, for any two images f and g and any two scalars a and b:
  H(a·f + b·g) = a·H(f) + b·H(g)
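The linearity test above can be applied mechanically to candidate operators. A sketch with made-up operator names: pixelwise scaling passes the test, thresholding fails it.

```python
def H_scale(img, s=2):
    """A linear operator: multiply every pixel by s."""
    return [[s * v for v in row] for row in img]

def H_thresh(img, t=10):
    """A non-linear operator: threshold at t."""
    return [[1 if v > t else 0 for v in row] for row in img]

def combine(a, f, b, g):
    """Pixelwise a*f + b*g."""
    return [[a * fv + b * gv for fv, gv in zip(fr, gr)]
            for fr, gr in zip(f, g)]

f, g, a, b = [[1, 2]], [[3, 4]], 2, 3
# Scaling satisfies H(a f + b g) == a H(f) + b H(g):
assert H_scale(combine(a, f, b, g)) == combine(a, H_scale(f), b, H_scale(g))
# Thresholding violates it, so it is non-linear:
assert H_thresh(combine(a, f, b, g)) != combine(a, H_thresh(f), b, H_thresh(g))
```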

Line Scan Arrays

• A row of photosites forms the imaging device
• The charges of the photosites are transferred to a readout register
• The readout register works like a shift register
• While the register is being read out, image capture must stop
• Readout speed can be increased by using more than one register
• A 2-D image is formed by relative motion between the scene and the sensor

[Figure: line-scan CCD layouts showing N photosites, gates, and one or two transport/readout registers feeding the output]

Area Scan Arrays

• Composed of a 2-D array of CCD elements
• Different methods to read out the accumulated charge: full frame, interline transfer, and frame transfer
• Dark areas in the diagrams: masked elements of the array (not exposed to light)

[Figure: full-frame, interline-transfer and frame-transfer register layouts]

Color Imaging with CCDs

• Light is separated into red, green and blue components
• Color filters or a prism can be used to split the light
• Each component is recorded by a CCD
