
L-1

IMAGING

Film photography
• Time consuming
• Real-time processing was not possible
• No automation based on vision

Digital photography
• Images are captured and stored as binary data using electronic devices (SENSORS).

Sensors:
• Charge-coupled device (CCD)
• Complementary metal–oxide–semiconductor (CMOS)
• Photomultiplier tube (PMT)

The CCD was invented in 1969 by Willard Boyle and George Smith at AT&T Bell Labs (Nobel Prize in Physics, 2010).
The first digital camera prototype was developed in a Kodak lab by Steven Sasson in 1975.
The first commercial CCD camera was sold by the American company Fairchild Imaging in 1976; it had 100×100 pixels.
Beyond Visible Imaging

Thermal imaging
Operates in the infrared band.
Grayscale representation: bright pixels correlate with high-temperature regions.
Pseudo-color representation: heat dissipated by the human body is shown in red.
Low signal-to-noise ratio (SNR) behavior
[Plot: signal and noise traces illustrating low-SNR behavior]
Synthetic Aperture Radar (SAR)
• Environmental monitoring, earth-resource mapping, and military systems
• SAR produces relatively fine azimuth resolution, which differentiates it from other radars
• Uses radio waves

Radar imaging
Operates in the microwave band.

Mountains in Southeast Tibet


Magnetic Resonance Imaging (MRI)
Operates in the radio-frequency band.
[Images: knee, spine, head]

Basic principle of MRI: raw data are acquired in k-space; the image is reconstructed by an inverse Fourier transform (IFT).
Fluorescence Microscopy Imaging
Operates in the ultraviolet band.
[Images: normal corn vs. smut-infected corn]

What does a neuron look like?
[Artistic illustration vs. real image]

X-ray Imaging
Operates in the X-ray band.
[Images: chest, head]

Positron Emission Tomography (PET)
Operates in the gamma-ray band.
Applications of CCDs:
• Any digital imaging device

• Experiments like light scattering (spatial variation in intensity)

• Astronomy and astrophysics

• Night vision
Why image processing?

(a) Improvement of pictorial information for human interpretation,
    e.g., Google Earth

(b) Processing of image data for storage, transmission, and representation for autonomous machine perception.
Applications:

• Remote sensing via satellites and other spacecraft: images acquired by satellites are useful in tracking Earth's resources; geographical mapping; prediction of agricultural crops, urban growth, and weather; and flood and fire control. Example: Unmanned Aerial Vehicle (UAV).

• Space image applications: recognition and analysis of objects contained in images obtained from deep space-probe missions. Example: Hubble.

• Image transmission and storage applications: broadcast television, teleconferencing, military communication.

• Medical applications: X-ray, cine-angiogram, radiology, NMR, ultrasonic scanning.

• Radar and sonar images are used for detection and recognition of various types of targets, or in guidance and maneuvering of aircraft or missile systems.

• Mobile diagnosis apps for plants.

Image processing techniques learnt in this course can be used in several other areas, e.g., signal processing (FFT), filtering, etc.
L-2

Digital image processing implies digital processing of any two-dimensional (2D) data.

Data: 2D images or other 2D data.

Digital image: a 2D array of real or complex numbers represented by a finite number of bits.
How to obtain a digital image?

Object → imaging system → sample & quantize → digital storage (disk) → digital computer → online buffer → display / record
A simple image formation model

An IMAGE is defined by a two-dimensional function f(x, y), where f(x, y) is the intensity or gray-level value of the image at the spatial coordinates (x, y).

When x, y, and f are all discrete, the image is called a DIGITAL image.

A DIGITAL image is composed of a finite number of elements (say 256×256), each of which has a particular location and value. These elements are called picture elements, image elements, pels, or pixels.
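
As an illustration (not from the original slides), a grayscale digital image can be held in memory as a 2D array of intensity values; the small NumPy sketch below assumes an 8-bit image.

```python
import numpy as np

# A tiny 4x4 8-bit grayscale "digital image": a 2D array of pixels.
f = np.array([[ 12,  50,  50,  12],
              [ 50, 200, 200,  50],
              [ 50, 200, 255,  50],
              [ 12,  50,  50,  12]], dtype=np.uint8)

M, N = f.shape            # spatial size: M rows, N columns
print(f[2, 2])            # intensity (gray level) at spatial coordinate (2, 2) -> 255
print(f.min(), f.max())   # gray levels lie in [0, 255] for an 8-bit image
```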
The value or amplitude of f at the spatial coordinate (x, y) is a positive scalar quantity whose physical meaning is determined by the source of the image.

The pixel value f is proportional to the energy radiated by the physical source.

So, 0 ≤ f(x, y) < ∞
f(x, y) depends on
1. the amount of illumination incident on the object from the source, i(x, y)
2. the amount of light reflected by the object, r(x, y)

⇒ f(x, y) = i(x, y) · r(x, y) = l

where 0 ≤ i(x, y) < ∞,
0 ≤ r(x, y) ≤ 1,
and l = gray level of the monochrome image.

Lmin ≤ l ≤ Lmax, where Lmin = imin·rmin and Lmax = imax·rmax

In practice the levels are shifted and scaled so that Lmin → 0 and Lmax → L − 1 (say).

The interval [0, L − 1] is defined as the GRAYSCALE:
l = 0 → black
l = L − 1 → white

8-bit grayscale → [0, 2⁸ − 1] = [0, 255]

An image may be continuous with respect to the x- and y-coordinates and also in amplitude.

• Digitization of the coordinate values is called SAMPLING.
• Digitization of the amplitude values is called QUANTIZATION.

Sampling depends on the arrangement of the sensors used to generate the image.
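
The following sketch (my own, not from the slides) shows both steps for an arbitrary continuous test function `f_cont`; the grid size M×N and the bit depth k are free parameters of the example.

```python
import numpy as np

def sample_and_quantize(f_cont, M=64, N=64, k=8):
    """Sample a continuous image f_cont(x, y) on an M x N grid (SAMPLING)
    and map its amplitudes to 2**k discrete levels (QUANTIZATION)."""
    # Sampling: evaluate the continuous function on a discrete grid.
    x = np.linspace(0.0, 1.0, M)
    y = np.linspace(0.0, 1.0, N)
    X, Y = np.meshgrid(x, y, indexing="ij")
    samples = f_cont(X, Y)

    # Quantization: normalize to [0, 1], then round to L = 2**k levels.
    L = 2 ** k
    norm = (samples - samples.min()) / (samples.max() - samples.min())
    return np.round(norm * (L - 1)).astype(np.uint16)

# Example: a smooth intensity pattern, sampled and quantized to 8 bits.
img = sample_and_quantize(lambda X, Y: np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y))
print(img.shape, img.min(), img.max())   # (64, 64) 0 255
```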
Representing Digital Images:

The result of sampling and quantization is an image in the form of a matrix of real numbers.

x and y vary as 0, 1, 2, ... and are not the actual values of the physical coordinates.

Digital image = [ f(0,0)       f(0,1)       ...   f(0,N−1)
                  f(1,0)       f(1,1)       ...   f(1,N−1)
                  ...
                  f(M−1,0)     f(M−1,1)     ...   f(M−1,N−1) ]   (an M×N matrix)

The values of f, M, and N have to be positive integers.

Due to processing, storage, and sampling hardware considerations, the number of gray levels is typically an integer power of 2:

L = 2ᵏ

The discrete levels are equally spaced integers in the gray-level range [0, L−1].

The number of bits (b) required to store a digitized image is

b = M × N × k

For an 8-bit image: k = 8, gray levels = [0, 255], L = 2⁸, and b = M × N × 8.
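
A quick sketch of the storage formula above (the 1024×1024 example is my own):

```python
def storage_bits(M: int, N: int, k: int) -> int:
    """Bits needed to store an M x N image with 2**k gray levels: b = M * N * k."""
    return M * N * k

# Example: a 1024 x 1024, 8-bit grayscale image.
b = storage_bits(1024, 1024, 8)
print(b, "bits =", b // 8, "bytes =", b / (8 * 1024 ** 2), "MiB")   # 1 MiB
```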
Spatial and gray-level resolution

Spatial resolution:

Spatial resolution is the smallest distinguishable detail in an image. It depends on sampling.

[Figure: alternating vertical lines A B A B A B A B, each line of width w]

Line pairs: AA and BB
Distance between the line pairs = 2w
No. of lines per unit length = 1/(2w)

Spatial resolution = no. of distinguishable lines per unit length

Hence, spatial resolution = 1/(2w)
An 8-bit M × N grayscale image:
No. of samples = M × N (i.e., the total number of pixels)

Typical effects of varying the number of samples in a digital image
(pixel size constant, number of gray levels = 256):

Sub-sampling: the sub-sampled image is scaled back to the original size.
• The spatial resolution goes down due to sub-sampling.
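
A minimal sketch (my own illustration) of sub-sampling by an integer factor and scaling the result back to the original size by pixel replication:

```python
import numpy as np

def subsample_and_rescale(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Keep every `factor`-th row/column, then replicate pixels to restore
    the original size. Fine detail is lost: spatial resolution goes down."""
    small = img[::factor, ::factor]                 # sub-sampling
    back = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return back[:img.shape[0], :img.shape[1]]       # crop in case of rounding

# Example with a random 8-bit image.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
low_res = subsample_and_rescale(img, factor=4)
print(img.shape, low_res.shape)   # both (256, 256), but low_res has blocky 4x4 pixels
```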
Gray-level resolution:

Refers to the smallest distinguishable change in gray level.

Gray-level resolution is highly subjective, and it depends on the hardware used to capture the image.

[Figure: the same image displayed with 2⁸, 2⁷, 2⁶, 2⁵, 2⁴, 2³, 2², and 2¹ gray levels]
L-3

N and k are independent in the previous examples.

How should N and k be varied to obtain an improved image?

Huang's experiment: isopreference curves.

[Plot: f vs. x, with f ranging from 0 to 255]
Zooming and shrinking of digital images

Zooming requires 2 steps:
1. creation of new pixel locations
2. assignment of gray levels to those new locations

Pixel replication:
Duplicate the rows and columns of the image.

Bilinear interpolation:
The assignment of pixel values is accomplished by bilinear interpolation.

Let (x′, y′) be the coordinates of a pixel in the zoomed image (i.e., in the new grid). Then the gray level at (x′, y′) is given by

v(x′, y′) = a·x′ + b·y′ + c·x′·y′ + d

The four coefficients are determined from the four nearest-neighbor (NN) pixels.

Image shrinking
Pixel replication:
Delete alternate rows and columns to shrink the image by an integer factor.
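
An illustrative sketch (my own, assuming a 2D grayscale NumPy array) of zooming by pixel replication and by bilinear interpolation, here written as the equivalent weighted average of the four nearest neighbors, plus shrinking by deleting alternate rows and columns:

```python
import numpy as np

def zoom_replicate(img, factor):
    """Zoom by pixel replication: duplicate rows and columns."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def zoom_bilinear(img, factor):
    """Zoom by bilinear interpolation: each new pixel value is a weighted
    combination of its four nearest neighbors in the original grid."""
    M, N = img.shape
    xs = np.linspace(0, M - 1, M * factor)   # new pixel locations mapped
    ys = np.linspace(0, N - 1, N * factor)   # back onto the original grid
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, M - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, N - 1)
    dx = (xs - x0)[:, None]; dy = (ys - y0)[None, :]
    f = img.astype(float)
    v = (f[np.ix_(x0, y0)] * (1 - dx) * (1 - dy) + f[np.ix_(x1, y0)] * dx * (1 - dy)
         + f[np.ix_(x0, y1)] * (1 - dx) * dy + f[np.ix_(x1, y1)] * dx * dy)
    return np.round(v).astype(img.dtype)

def shrink(img, factor):
    """Shrink by deleting rows and columns (keep every `factor`-th one)."""
    return img[::factor, ::factor]

img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 17
print(zoom_replicate(img, 2).shape, zoom_bilinear(img, 2).shape, shrink(img, 2).shape)
```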
• Enhancement is needed for better representation and extraction of important information.
• Methods of enhancement are highly subjective.

Image enhancement approaches

Two categories: spatial domain methods and frequency domain methods.

Spatial domain methods:
The spatial domain refers to the image plane itself; these methods directly manipulate the pixels of the image.

Frequency domain methods:
These methods modify the Fourier transform of the original image.

L-4

Spatial domain processing:

g(x, y) = T[f(x, y)]

where f(x, y) = original image
      g(x, y) = processed image
      T = transformation function or operator

Point processing → processing an image by considering the gray level of each pixel individually.

Mask processing → creating a mask (subimage) about the pixel (x, y) and processing the neighborhood it covers.

The operator T is defined over some neighborhood of (x, y). The subimage (neighborhood) can be, for example, a small square such as 3×3 centered at (x, y).
Point processing

The simplest form of T is when the neighborhood size is 1 × 1 (i.e., point processing):

g(x, y) = T[f(x, y)]
⇒ s = T(r)

where s = g(x, y) and r = f(x, y).

T is the gray-level transformation function.
Gray-level transformations for contrast enhancement

Basic transformation functions for image enhancement:
1. Linear (negative and identity transformations)
2. Logarithmic (log and inverse-log transformations)
3. Power law (nth power and nth root transformations)

Image identity → s = r

Image negative → s = L − 1 − r
For an 8-bit image: s = 255 − r

Log transformation → s = c·log(1 + r)
where c is a constant and r ≥ 0

Power-law transformation → s = c·r^γ
where c and γ are positive constants

γ correction:
γ < 1 → dark levels are stretched
γ > 1 → dark levels are compressed
Piecewise-linear transformation functions

Gray-level slicing:
Used to HIGHLIGHT a specific range of gray levels that is of interest in an image.
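
A minimal sketch (my own) of gray-level slicing that highlights an assumed range [a, b]; keeping or suppressing the background are the two usual variants:

```python
import numpy as np

def gray_level_slice(img, a, b, highlight=255, keep_background=True):
    """Highlight gray levels in the range [a, b].
    If keep_background is True, other pixels keep their original values;
    otherwise they are set to a low constant level."""
    in_range = (img >= a) & (img <= b)
    out = img.copy() if keep_background else np.full_like(img, 10)
    out[in_range] = highlight
    return out

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
print(gray_level_slice(img, a=100, b=180))
```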


L-5

Bit-plane slicing

The contribution of individual bits to the pixel gray levels can be explored by bit-plane slicing.

Three motivations for bit-plane slicing:
• Fast processing and real-time analysis
• Compression
• Data or image encryption (steganography)

A sample 3×3 block of 8-bit gray levels (see the sketch below):

167 133 111
144 140 135
159 154 148
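
As a sketch (my own), the bit planes of the 3×3 block above can be extracted with simple bit operations:

```python
import numpy as np

block = np.array([[167, 133, 111],
                  [144, 140, 135],
                  [159, 154, 148]], dtype=np.uint8)

def bit_plane(img, k):
    """Return bit plane k (k = 0 is the least significant bit) as a 0/1 array."""
    return (img >> k) & 1

for k in range(7, -1, -1):          # planes 7 (MSB) down to 0 (LSB)
    print(f"plane {k}:\n{bit_plane(block, k)}")

# The image can be reconstructed exactly from its bit planes:
recon = sum(bit_plane(block, k).astype(np.uint16) << k for k in range(8))
assert np.array_equal(recon, block)
```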


Steganography: hidden data can be embedded in the low-order bit planes of a cover image (stego image = cover image + hidden message).
L-6

Brief introduction to probability theory

Random: something that cannot be predicted
(random event, random function, random variable)

Random experiment: an experiment whose outcome cannot be predicted accurately,
e.g., tossing a coin or a die.

For coin tossing:

nH + nT = n

where nH = number of heads, nT = number of tails, and n = total number of tosses.

⇒ nH/n + nT/n = 1

Important property: 0 ≤ P(A) ≤ 1

Cumulative distribution function (CDF):

F(a) = P(x ≤ a)

F(a) is the probability that the random variable x takes a value less than or equal to a.

Properties of the CDF:
1. F(−∞) = 0
2. F(+∞) = 1
3. 0 ≤ F(x) ≤ 1
4. F(x1) ≤ F(x2) if x1 < x2
5. P(x1 < x ≤ x2) = F(x2) − F(x1)
6. F(x⁺) = F(x), where x⁺ = x + ε, ε → 0⁺ (right-continuity)
Probability density function (PDF):
defined as the derivative of the CDF,

p(x) = dF(x)/dx

Properties:
1. p(x) ≥ 0 for all x
2. ∫_{−∞}^{+∞} p(x) dx = 1
3. F(x) = ∫_{−∞}^{x} p(α) dα
4. P(x1 < x ≤ x2) = ∫_{x1}^{x2} p(x) dx
x1
Histogram Processing

The histogram of a digital image with gray levels in the range [0, L−1] is the discrete function

h(rk) = nk

where rk = kth gray level and nk = number of pixels in the image having gray level rk.

Normalization:

p(rk) = h(rk)/n = nk/n

where n = total number of pixels in the image, and p(rk) = probability of occurrence of gray level rk.

Σk p(rk) = 1

For an 8-bit image, rk runs from 0 to 255, and the normalized histogram still satisfies Σk p(rk) = 1.
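
A short sketch (my own) of computing h(rk) and the normalized histogram p(rk) for an 8-bit image:

```python
import numpy as np

def normalized_histogram(img, L=256):
    """Return h(r_k) = n_k and p(r_k) = n_k / n for gray levels 0..L-1."""
    h = np.bincount(img.ravel(), minlength=L)   # n_k for each gray level r_k
    p = h / img.size                            # normalize by n = total pixels
    return h, p

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
h, p = normalized_histogram(img)
print(h.sum() == img.size, np.isclose(p.sum(), 1.0))   # True True
```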
A uniformly distributed histogram yields a HIGH-CONTRAST IMAGE.

Hence, histogram processing requires stretching the gray levels uniformly over the entire gray-level range. This is histogram equalization.

[Figure: original histogram p(r) vs. r, and equalized histogram p(s) vs. s, uniform over [0, 1]]
Histogram equalization:
1. distributes the histogram peaks uniformly over the entire gray-level range, and
2. equalizes the heights of the peaks.

Let us define the gray-level values r to be
• continuous, and
• normalized to [0, 1]:

r = 0 → black
r = 1 → white

For any r, the transformation is

s = T(r),  0 ≤ r ≤ 1

Let us require that the transformation function T(r) satisfy:
1. T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1
2. 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1
[Figure: s = T(r) that is single-valued and monotonically increasing]

[Figure: s = T(r) that is not single-valued: the inverse transformation of such a T(r) will not give back the original image]

[Figure: a non-monotonic transformation function: part of the gray-level range is mapped from black toward white and part from white toward black, so the output mixes an identity-like mapping (1st half) with a negative-like mapping (2nd half)]
Hence the requirements:

1. T(r) should be single-valued, which ensures that the inverse transformation exists. The monotonicity condition preserves the increasing order from black to white in the output image.

2. The second condition guarantees that the output gray levels lie in the same range as the input gray levels.

The gray levels (r or s) in an image can be treated as random variables in the interval [0, 1].

pr(r) and ps(s) are the PDFs of the random variables r and s, respectively.

From the theory of functions of a random variable:

ps(s) ds = pr(r) dr
To make the histogram uniform, put ps(s) = 1:

⇒ ds = pr(r) dr

⇒ ∫_0^s ds = ∫_0^r pr(α) dα

⇒ s = T(r) = ∫_0^r pr(α) dα

Hence, the transformation function T(r) is equal to the CDF of the random variable r.

a) T(r) is a single-valued and monotonically increasing function, so the first condition is satisfied.

b) The values of T(r), i.e., s, lie in the range [0, 1], so the second condition is satisfied.

For discrete values:

pr(rk) = nk/n,  k = 0, 1, ..., L−1

sk = T(rk) = Σ_{j=0}^{k} pr(rj) = Σ_{j=0}^{k} nj/n,  for k = 0, 1, ..., L−1

This performs histogram equalization AUTOMATICALLY (see the sketch below).
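
A sketch of the discrete formula above (my own implementation for an 8-bit image, with the resulting sk scaled back to integer levels [0, L−1]):

```python
import numpy as np

def equalize_histogram(img, L=256):
    """Histogram equalization: s_k = T(r_k) = sum_{j<=k} n_j / n,
    then scale s_k from [0, 1] back to integer gray levels [0, L-1]."""
    n_k = np.bincount(img.ravel(), minlength=L)     # histogram h(r_k) = n_k
    p_r = n_k / img.size                            # p_r(r_k) = n_k / n
    s = np.cumsum(p_r)                              # s_k = T(r_k), in [0, 1]
    lut = np.round(s * (L - 1)).astype(np.uint8)    # lookup table r_k -> s_k
    return lut[img]                                 # apply T to every pixel

img = np.random.randint(30, 100, (64, 64), dtype=np.uint8)   # low-contrast image
eq = equalize_histogram(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())          # range is stretched
```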
Problems:

L-7, 29/1/2014

Histogram equalization is not a foolproof method for achieving automatic contrast enhancement.
Histogram matching (specification): apply a transformation to the input image such that the histogram of the output image matches a predefined histogram.

[Figure: original histogram pr(r) vs. r, and a predefined histogram pz(z) vs. z]

How to transform?
r  gray levels of the input image Random

s  gray levels of the output image variables

Let us define pr(r) and pz(z) are the PDFs of

the random variables r and s

s has the property::


r
s  T (r )  
0
pr ( ) d
Discrete form::
k k nj
sk  T ( rk )   p (r )   n
j 0
r j
j 0

The random variable z has the property::

z
G( z )  
0
pz ( ) d  s
Discrete form::
k
G( zk )   p (z )  s
i 0
z i k

Therefore,
G( z )  T ( r )

Hence,
1 1
z  G ( s )  G [T ( r )]
STEPS for histogram matching (see the sketch after these steps):

1. For the given input image, compute the histogram pr(r).

2. From the pr(r) vs. r curve, calculate s:

   s = T(r) = ∫_0^r pr(α) dα

   This yields the s vs. r relationship.

3. For the given (desired) histogram pz(z) vs. z, find the variation of s vs. z:

   s = G(z) = ∫_0^z pz(α) dα

4. MAP the variable r to z via s:
   For each value of rk, obtain the corresponding value of sk from sk = T(rk).
   For each value of sk, obtain the corresponding value of zk from sk = G(zk) by inverse transformation.
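
A sketch of these steps in Python (my own implementation for 8-bit images; the desired histogram p_z is assumed to be given as an array of length L):

```python
import numpy as np

def match_histogram(img, p_z, L=256):
    """Histogram matching: map input gray levels r -> z so that the output
    histogram approximates the desired PDF p_z (array of length L)."""
    # Steps 1-2: s_k = T(r_k), the CDF of the input image.
    p_r = np.bincount(img.ravel(), minlength=L) / img.size
    T = np.cumsum(p_r)
    # Step 3: s = G(z), the CDF of the desired histogram.
    G = np.cumsum(p_z / p_z.sum())
    # Step 4: for each r_k find z_k such that G(z_k) is closest to T(r_k)
    # (a discrete stand-in for the inverse transformation z = G^{-1}(T(r))).
    z_of_r = np.searchsorted(G, T).clip(0, L - 1).astype(np.uint8)
    return z_of_r[img]

# Example: match a random image to a histogram concentrated at bright levels.
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
p_z = np.linspace(0.0, 1.0, 256)          # desired PDF, increasing toward white
out = match_histogram(img, p_z)
print(img.mean(), "->", out.mean())        # mean shifts toward brighter values
```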
[Figure: the last step shown graphically: rk → sk on the s = T(r) curve, then sk → zk on the s = G(z) curve]
Image averaging

A noisy image can be enhanced by averaging over a set of noisy images of the same scene.

A noisy image g(x, y) can be expressed as

g(x, y) = f(x, y) + η(x, y)

where f(x, y) = original image and η(x, y) = noise.

If the noise η(x, y) is uncorrelated, then
• the expected value of the noise η(x, y) is 0, and
• the expected covariance of the noise at any two points is 0.

Let {gi(x, y)} represent a set of K noisy images. The average image is formed as

ḡ(x, y) = (1/K) Σ_{i=1}^{K} gi(x, y) = (1/K) [ Σ_{i=1}^{K} fi + Σ_{i=1}^{K} ηi ]

Taking the expectation of both sides:

E(ḡ) = (1/K) [ Σ_{i=1}^{K} E(fi) + Σ_{i=1}^{K} E(ηi) ]

Since fi is the same constant for all the images, E{fi} = fi, and E{ηi} = 0.

⇒ E{ḡ(x, y)} = f(x, y)
[Figure: averaged images for K = 8, 16, 64, and 128; the noise decreases as K grows]
How can people be removed from an image? How can vehicles be removed from an image? (See the sketch below.)
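
A small simulation sketch (my own) of the averaging result E{ḡ} = f: averaging K independently corrupted copies of the same image shrinks the noise roughly as 1/√K, which is also the intuition behind suppressing transient people or vehicles by combining many aligned frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Original" noise-free image f(x, y) (a synthetic gradient for the example).
f = np.tile(np.linspace(0, 255, 128), (128, 1))

def noisy_copy(f, sigma=40.0):
    """g_i(x, y) = f(x, y) + eta_i(x, y), with zero-mean uncorrelated noise."""
    return f + rng.normal(0.0, sigma, f.shape)

for K in (1, 8, 16, 64, 128):
    g_bar = np.mean([noisy_copy(f) for _ in range(K)], axis=0)
    rmse = np.sqrt(np.mean((g_bar - f) ** 2))
    print(f"K = {K:4d}: RMSE ~ {rmse:.1f}")   # error shrinks roughly as sigma / sqrt(K)
```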
Scaling of the gray levels during mathematical operations

[Figures: image addition and image subtraction, each shown with the unscaled result and the result scaled back to the display range]
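
A sketch (my own) of one common way to bring the result of image addition or subtraction back into the 8-bit display range by a full-range linear rescale; clipping to [0, 255] is an alternative design choice.

```python
import numpy as np

def rescale_to_8bit(g):
    """Linearly map an arbitrary-range result g to the display range [0, 255]."""
    g = g.astype(float)
    g_min, g_max = g.min(), g.max()
    if g_max == g_min:                       # constant image: nothing to stretch
        return np.zeros_like(g, dtype=np.uint8)
    return np.round(255.0 * (g - g_min) / (g_max - g_min)).astype(np.uint8)

a = np.random.randint(0, 256, (64, 64)).astype(np.int16)
b = np.random.randint(0, 256, (64, 64)).astype(np.int16)
print(rescale_to_8bit(a + b).max(), rescale_to_8bit(a - b).min())   # 255, 0
```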
