Chapter 7-1
By: Kinfe W.
Outline
• Introduction
Digital image processing
Transmission and storage of Data
Rectification and restoration
Radiometric correction
Geometric correction
Common imagery file format
Digital Image and Digital Image Processing
• Digital image processing is the processing of image data for storage, transmission, and
representation, and for autonomous machine perception.
Image Processing
• The initial stage of data processing, in which the image is corrected for various errors and
degradations.
• A radiometric distortion is an error that influences the radiance or radiometric value of a scene
element (pixel).
• Noise in an image may be due to irregularities or errors that occur in the sensor
response and/or in data recording and transmission.
• Common forms of noise include:
line striping
periodic line dropouts
random noise or spike noise
• Line striping or banding: due to the non-identical response of one or more detectors,
resulting from drift in their response after calibration.
• Correction method:
Compute the histogram of one detector as standard
Match the histograms of the other detectors to the histogram of the standard detector
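The matching step above can be sketched in NumPy. This is a simplified, mean/standard-deviation form of histogram matching, and it assumes a hypothetical scan layout in which detector d produced rows d, d+n, d+2n, …; a real sensor's detector-to-row mapping would need to be taken from its documentation.

```python
import numpy as np

def destripe(img, n_detectors, ref=0):
    """Reduce striping by matching each detector's statistics to a
    reference detector (simplified histogram matching).
    Assumes detector d scanned rows d, d+n, d+2n, ... (hypothetical)."""
    out = img.astype(float).copy()
    ref_rows = out[ref::n_detectors]
    m_ref, s_ref = ref_rows.mean(), ref_rows.std()
    for d in range(n_detectors):
        rows = out[d::n_detectors]
        m, s = rows.mean(), rows.std()
        if s > 0:
            # shift and scale so this detector's rows match the reference
            out[d::n_detectors] = (rows - m) / s * s_ref + m_ref
    return out
```

After correction, every detector's rows share the reference detector's mean and spread, which suppresses the banding pattern.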
• Periodic line dropouts: occur when a detector either completely fails to function or
becomes temporarily saturated during a scan.
• They result in erroneous radiance values for pixels, lines or areas, and are caused by a
defective scanner, transmission, receiving or recording system.
• Correction method:
Compare actual and computed line values to detect dropouts.
Correct by repeating neighbouring line values or by taking
the average of the lines above and below, which shows little divergence from the actual values.
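A minimal sketch of the averaging correction, assuming dropped lines are recorded as a constant fill value (0 here, a common but not universal convention):

```python
import numpy as np

def fix_line_dropouts(img, dropout_value=0):
    """Replace dropped scan lines (rows filled with dropout_value)
    with the average of the lines above and below."""
    out = img.astype(float).copy()
    for i in range(1, out.shape[0] - 1):
        if np.all(img[i] == dropout_value):
            # average of the neighbouring scan lines
            out[i] = (out[i - 1] + out[i + 1]) / 2.0
    return out
```

This handles isolated dropouts; consecutive dropped lines would need a wider interpolation window.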
• Random noise or spikes: caused by transmission errors or temporary disturbances.
• Correction method:
Detect spike by comparing DN with DN of its surrounding pixels (neighbours)
Replace DN with DN value interpolated from the surrounding pixels
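The two correction steps above can be sketched as follows; the detection threshold is an assumed tuning parameter, and the "interpolated" value here is simply the mean of the 8 surrounding pixels:

```python
import numpy as np

def despike(img, threshold=50):
    """Detect spike noise by comparing each DN with the mean of its
    8 neighbours; replace outliers with that neighbourhood mean."""
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = img[i - 1:i + 2, j - 1:j + 2].astype(float)
            nb_mean = (win.sum() - img[i, j]) / 8.0  # exclude centre pixel
            if abs(img[i, j] - nb_mean) > threshold:
                out[i, j] = nb_mean
    return out
```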
Geometric corrections
• Both maps and images provide a representation of the earth’s surface.
• However, raw, unprocessed images are not maps!
• Raw imagery has geometric errors of multiple sources.
•Why is there geometric distortion in imagery?
Perspective of the sensor optics
Motion of the scanning system
Motion and instability of the platform
Platform attitude, altitude and velocity
Terrain relief
Curvature and rotation of the earth
Cont'd
• Geometric correction involves correcting for geometric distortions due to sensor–Earth
geometry variations and converting the data to real-world coordinates (latitude and
longitude) on the Earth's surface.
• Types of distortions:
systematic distortions due to orbital variations (mostly corrected at the ground station after image
capture)
distortions due to relief variation
distortions due to different projection systems
• Tilt movements
the platform makes small rotational
movements about three axes.
Cont'd
• Perspective effects
off-nadir pixels are larger than nadir pixels
the Earth's curvature amplifies this effect
Cont'd
• Distortions due to different projection systems
Two maps of the same area and scale will not be the same
if they follow different projection systems.
Geometric correction procedures
i. Image-to-image registration
• The process of geometrically aligning two or more sets of image data so that they conform
geometrically.
• Data being registered may be the same type, from very different kinds of sensors, or collected at
different times.
• Orthorectification is a specific form of rectification that also corrects for terrain displacement.
iii. Image resampling
• Resampling is the process of calculating the pixel values for the rectified image and
creating a new file.
• Through a process of interpolation, the output pixel values are derived as functions of the
input pixel values. Common methods:
Nearest neighbour resampling
Bilinear interpolation
Cubic convolution
a. Nearest neighbour resampling
• Uses the value of the closest pixel to assign to the output pixel.
b. Bilinear interpolation
• Uses the pixel values of four pixels in a 2x2 window to calculate an output value with a
bilinear function.
• For each of the four pixels, the pixel value is weighted more if the pixel is closer to the
output pixel.
• Produces smoother output.
• Edges are smoothed out.
• The output image gives a natural look because each output value is based on several
input values.
c. Cubic convolution
• Uses data values of 16 pixels in a 4x4 window to calculate an output value with a cubic
function.
• Pixels farther from the output pixel receive progressively less weight than those close to
the output pixel.
• Preserves best the mean and standard deviation of the input pixels.
• Computationally intensive
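The first two resampling methods can be sketched for a single fractional output location (r, c); cubic convolution follows the same pattern over a 4x4 window but needs a longer kernel definition, so it is omitted here. The function names are illustrative, and bounds checking at the image edge is left out for brevity:

```python
import numpy as np

def nearest(img, r, c):
    # assign the value of the closest input pixel
    return img[int(r + 0.5), int(c + 0.5)]

def bilinear(img, r, c):
    # weight the four pixels of the surrounding 2x2 window;
    # pixels closer to (r, c) receive more weight
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    return ((1 - dr) * (1 - dc) * img[r0, c0]
            + (1 - dr) * dc * img[r0, c0 + 1]
            + dr * (1 - dc) * img[r0 + 1, c0]
            + dr * dc * img[r0 + 1, c0 + 1])
```

Nearest neighbour keeps the original DN values intact; bilinear blends them, which is why its output looks smoother but loses sharp edges.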
Image enhancement
• Image enhancement is the process of making an image more interpretable for a
particular application.
• Enhancement can improve the appearance of the imagery to assist in visual interpretation
and analysis.
• After pre-processing, the image needs to be enhanced.
• The image enhancement techniques available to facilitate visual interpretation are:
Radiometric enhancement (individual pixel, 1 band)
Spatial enhancement (individual pixel + surrounding pixels, 1 band)
Spectral enhancement (Multiple bands, see Image Transformations)
Cont'd
i. Radiometric enhancement: enhancing images based on the values of individual pixels
within each band.
• It is also referred to as contrast stretching.
• Radiometric enhancement is closely tied to the concept of the image histogram.
• An image histogram is a graphical representation of the brightness values that comprise an
image.
• Input brightness values (small range) → stretching → output brightness values (broad range).
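A minimal linear contrast stretch, mapping the band's small input range onto the full 0–255 output range:

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Linearly map the band's input range [min, max] onto the
    broader output range [out_min, out_max]."""
    lo, hi = float(band.min()), float(band.max())
    scaled = (band - lo) / (hi - lo) * (out_max - out_min) + out_min
    return scaled.astype(np.uint8)
```

Real software usually stretches between chosen percentiles (e.g. 2% and 98%) rather than the absolute minimum and maximum, so that a few outlier pixels do not compress the rest of the histogram.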
Cont'd
ii. Spatial enhancement: enhancing images based on the values of neighboring pixels
within each band.
• Spatial enhancement deals largely with spatial frequency, which is the difference between
the highest and lowest values of a contiguous set of pixels.
Cont'd
• Spatial filtering encompasses a set of digital processing functions which are
used to enhance the appearance of an image.
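Spatial filtering amounts to moving a small kernel over the image and replacing each pixel with a weighted sum of its neighbourhood. A minimal sketch with a 3x3 smoothing (low-pass) kernel, computing the valid region only (no edge padding):

```python
import numpy as np

def convolve3x3(img, kernel):
    """Apply a 3x3 spatial filter over the valid interior of img."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

low_pass = np.ones((3, 3)) / 9.0  # mean filter: smooths, suppresses noise
```

Swapping the kernel changes the effect: a Laplacian-style kernel emphasizes edges (high spatial frequencies), while the mean filter above suppresses them.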
Image Enhancement
• Reduce noise
• Techniques
• Contrast enhancement
• Edge enhancement
• Noise filtering
• Sharpening
• Magnifying
Haze Reduction
• One means of haze compensation in multispectral data is to observe the
radiance recorded over target areas of zero reflectance
• For example, the reflectance of deep clear water is zero in NIR region of
the spectrum
Therefore, any signal observed over such an area represents the path
radiance
• This value can be subtracted from all the pixels in that band
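The subtraction step is a one-liner in practice. This sketch, a simple dark-object subtraction, takes the path radiance estimated over the zero-reflectance target; falling back to the band minimum when no estimate is supplied is an assumed convenience, not part of the method as stated above:

```python
import numpy as np

def dark_object_subtract(band, path_radiance=None):
    """Subtract the path-radiance signal (observed over a
    zero-reflectance target, e.g. deep clear water in the NIR)
    from all pixels in the band."""
    if path_radiance is None:
        # fallback assumption: darkest pixel approximates path radiance
        path_radiance = int(band.min())
    return np.clip(band.astype(int) - path_radiance, 0, None)
```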
Figure: (a) the aerial image before haze removal; (b) the aerial image after haze removal.
Coordinate Systems: Geographic vs. Projected
• Relate locations on the Earth to their corresponding locations on a flat map
• Projections cause distortion of one or more map properties (scale, distance, direction, shape)
Classifications of Map Projections
Conformal – local shapes are preserved
Equidistant – distances from a single location to all other locations are preserved
Azimuthal – directions from a single location to all other locations are preserved
Why project data?
• Data often come in geographic, or spherical, coordinates (latitude and longitude), which cannot
be used directly in the mathematical models of most GIS software applications
• Some projections work better for different parts of the globe, giving more accurate
calculations
Cylindrical projections
• Shapes are preserved
• Mercator projection
Conic projections
Planar projections
• Shape (conformal)
• Distance (equidistant)
Geographic Coordinate System
• Latitude and longitude are angular measurements made from the center of the Earth to a point
on its surface
Datum
• Reference frame for locating points on Earth’s surface