
CHAPTER FOUR

DIGITAL IMAGE PROCESSING


Unit objectives
Upon completion of the chapter, you will be able to:
Describe image processing
Define a digital image
Identify classes of digital image processing
Explain radiometric and geometric correction
Apply image enhancement techniques
Apply elements of visual image interpretation
Apply image classification, supervised and unsupervised
Explain accuracy assessment
Lesson Objective
By the end of this lesson, you will be able to:
 Describe image processing

 Define digital image

 Identify classes of digital image processing

 Explain radiometric and geometric correction


Image processing
After the data are collected, they must be processed and converted into a usable format
before they can be interpreted and understood.
To make good use of remote sensing data, we must be able to extract meaningful
information from the imagery.
Interpretation and analysis of remote sensing imagery involves the identification
and/or measurement of various targets in an image in order to extract useful
information about them.
This technique is called image processing.
Image format
Analog images: remote sensing products such as aerial photos are the result of
photographic imaging systems (i.e., cameras). Once the film is developed, no further
processing is required. In this case, the image data are referred to as being in an analog
format.

Digital Images: A digital image is a two-dimensional array of pixels.

 Each pixel has an intensity value (represented by a digital number) and a location
address (referenced by its row and column numbers).

 In this case, the data are in a digital format. These types of digital images are referred
to as raster images in which the pixels are arranged in rows and columns.
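Such a raster image maps naturally onto a two-dimensional array. A minimal sketch (assuming NumPy is available) of pixel addressing by row and column:

```python
import numpy as np

# A tiny 3 x 4 single-band digital image: each entry is a pixel's
# digital number (DN), addressed by (row, column).
image = np.array([
    [12, 40, 40, 13],
    [11, 42, 41, 12],
    [10, 41, 43, 11],
], dtype=np.uint8)

rows, cols = image.shape          # 3 rows, 4 columns
dn = image[1, 2]                  # DN of the pixel at row 1, column 2
print(rows, cols, dn)             # 3 4 41
```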
Digital Image Processing
In order to process remote sensing imagery digitally, the data must be recorded
and available in a digital form suitable for storage on a computer.

The other requirement for digital image processing is a computer system with
appropriate hardware and software to process the data.

Digital image processing involves many procedures, including formatting and
correcting the data, digital enhancement to facilitate better visual interpretation,
and even automated classification of targets and features entirely by computer.
Cont..
Digital Image processing systems can be categorized into three
classes:
I. Pre-processing
Image correction
 Radiometric correction
 Geometric correction
II. Image enhancement
III. Post processing
Image Classification,
Accuracy assessment and
Analysis
I. Pre-processing

 Preprocessing is a critical initial step in digital image processing for remote sensing
data.

 It ensures that the raw data acquired by sensors is corrected and standardized for
further analysis.

 The goal of preprocessing is to improve data quality by removing distortions and
errors caused by sensor limitations, atmospheric effects, and geometric distortions, and to
expand or contract the extent of an image via mosaicking or subsetting.

 Proper preprocessing ensures that the data is accurate, consistent, and ready for
subsequent interpretation or automated analysis.
Radiometric distortion
What is radiometric distortion?

 It is an error that influences the radiance or radiometric value of a scene element
(pixel).

Why does it occur?

 The signal travels through the atmosphere, and the atmosphere affects it.

 Sun illumination influences radiometric values.

 Seasonal changes affect radiometric values.

 Sensor failure or system noise affects radiance.

 Terrain influences radiance.


Radiometric correction
Radiometric corrections include correcting the data for sensor irregularities and
unwanted sensor or atmospheric noise, and converting the data so that they restore an
image to as close an approximation of the original reflected or emitted radiation of a
scene as possible.

 Common forms of noise include systematic striping or banding and line dropouts.
Cont…
Line striping or banding: errors that occur in the sensor response and/or
data recording and transmission, resulting in a systematic error or shift of
pixels between rows.
Several destriping procedures have been developed to deal with this type of
line striping. One method is to compile a set of histograms for the image, one
for each detector involved in a given band.
These histograms are then compared in terms of their descriptive statistics
(mean, median, standard deviation, and so on) to identify radiometric
differences or malfunctions among the detectors.
Figure: Landsat 7 scan-line (striping) error.
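A highly simplified, mean-matching destripe can be sketched as follows (illustrative only; the histogram-based methods above also compare medians and standard deviations, and the detector row layout here is an assumption):

```python
import numpy as np

def destripe(band, n_detectors):
    """Adjust each detector's rows so their means match the band mean.

    Assumes detector i recorded rows i, i + n_detectors, i + 2*n_detectors, ...
    (a common scanner layout); this is an illustrative mean-matching fix only.
    """
    band = band.astype(float)
    target = band.mean()
    out = band.copy()
    for i in range(n_detectors):
        rows = out[i::n_detectors, :]          # all rows from detector i
        out[i::n_detectors, :] = rows - rows.mean() + target
    return out

# A striped band: detector 1 reads 10 DN too high on every other row.
band = np.tile([[50.0], [60.0]], (3, 4))       # rows alternate 50 / 60
fixed = destripe(band, n_detectors=2)
print(fixed[0, 0], fixed[1, 0])                # both detectors now average 55.0
```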


Cont…
 Line dropouts: errors that occur in the sensor response and/or data recording
and transmission, causing a row of pixels in the image to be lost.

 Line dropout occurs when a detector either completely fails to function or
becomes temporarily saturated during a scan.

 In this case, a number of adjacent pixels along a line (or an entire line) may
contain spurious DNs, often values of 0 or "no data".

 This problem is normally addressed by replacing the defective DNs with the
average of the values for the pixels in the lines just above and below.
Cont..

Line drop correction:
(a) original image containing two line dropouts;
(b) restored image resulting from averaging pixel values above and below the defective line.
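The averaging fix described above can be sketched as follows (assuming NumPy; `fix_line_dropout` is an illustrative helper, not a standard library function):

```python
import numpy as np

def fix_line_dropout(band, bad_row):
    """Replace a dropped scan line with the average of the lines
    just above and below it (the standard cosmetic fix)."""
    out = band.astype(float).copy()
    out[bad_row, :] = (out[bad_row - 1, :] + out[bad_row + 1, :]) / 2.0
    return out

band = np.array([
    [80, 82, 84],
    [ 0,  0,  0],   # dropped line: spurious DNs of 0
    [90, 86, 88],
], dtype=float)
print(fix_line_dropout(band, bad_row=1)[1])    # [85. 84. 86.]
```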
Geometric distortion
Both maps and images provide a representation of the earth’s surface. However, raw
unprocessed images are not maps!

 Raw imagery has geometric errors.

 Why is there geometric distortion in imagery?

 Motion of the scanning system

 Motion and instability of the platform

 Platform attitude, altitude and velocity

 Terrain relief

 Curvature and rotation of the earth


Geometric correction
 Geometric correction aligns an image to a real-world coordinate system by removing
distortions caused by sensor movement, Earth's curvature, or topographic relief.
 This method attempts to correct any error introduced into an image by the geometry
of the curved Earth's surface and the movement of the satellite.
 Geometric registration of the imagery to a known ground coordinate system must be
performed.
 Increasingly, some or all of the necessary geometric corrections are performed
automatically by image data providers.
Cont.
 Image registration: the process of aligning two or more images of the same scene
taken at different times, from different viewpoints, or by different sensors, so that
they geometrically conform.
 The geometric registration process involves identifying the image coordinates (i.e.
row, column) of several clearly discernible points, called ground control points (or
GCPs), in the distorted image.
 The goal is that the pixels in each image line up, which means the same feature (like a tree
or building) appears in the same location across all images.
Cont..
 Geo-referencing: linking image coordinates to map coordinates using a
transformation formula.
 Image geo-referencing is the process of assigning real-world geographic coordinates
(like latitude/longitude or UTM) to an image so that it fits accurately on a map.
 It requires a defined series of reference points that are both clearly visible in the image
and have known geographic coordinates.
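As an illustration, a simple (rotation-free) affine geo-transform links pixel and map coordinates; the origin and pixel size below are hypothetical values:

```python
# A minimal sketch of geo-referencing: an affine transform maps image
# coordinates (row, column) to map coordinates (easting, northing).
# Convention assumed here: origin at the image's upper-left corner,
# square 30 m pixels, no rotation.
def pixel_to_map(row, col, origin_x, origin_y, pixel_size):
    x = origin_x + col * pixel_size          # easting grows with column
    y = origin_y - row * pixel_size          # northing shrinks with row
    return x, y

# Hypothetical UTM coordinates of the upper-left corner.
x, y = pixel_to_map(row=10, col=20, origin_x=500000.0,
                    origin_y=4200000.0, pixel_size=30.0)
print(x, y)                                   # 500600.0 4199700.0
```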
Cont..
 Resampling is used to determine the digital values to place in the new pixel locations
of the corrected output image.
 The resampling process calculates the new pixel values from the original digital
pixel values in the uncorrected image.
 There are three common methods for resampling:
 Nearest neighbour,
 Bilinear interpolation, and
 Cubic convolution.
Cont..
 Nearest neighbour resampling uses the digital value from the pixel in the original
image which is nearest to the new pixel location in the corrected image.
 This is the simplest method and does not alter the original values, but may result in
some pixel values being duplicated while others are lost.
Example DN grid (the value at row 4, column 4 is missing):
94 85 70 80 60 70 70
65 78 82 79 90 75 87
80 88 89 85 87 85 90
85 90 85 __ 95 98 99
70 75 82 80 90 87 85
60 65 68 77 78 79 82
50 55 62 64 67 69 70
The missing DN value at position (4,4) is filled with the value of the closest neighbour.
Cont..
 Bilinear interpolation resampling: takes a weighted average of four pixels in the
original image nearest to the new pixel location.
 Because the process alters the grey levels of the original image, problems may be
encountered in subsequent spectral pattern recognition analyses of the data.
 Because of this, resampling is often performed after, rather than prior to, image
classification procedures.

Example (using the same DN grid): the missing value at (4,4) is estimated as
f(4,4) = (f(3,4) + f(4,3) + f(4,5) + f(5,4)) / 4 = (85 + 85 + 95 + 80) / 4 = 86.25
Cont..
 Cubic convolution resampling: assigns the output pixel a distance-weighted value
computed from the sixteen input pixels closest to the new pixel location (a 4×4
window). As with bilinear interpolation, this method results in completely new
pixel values.
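A minimal sketch of the first two resampling methods (pure NumPy, integer zoom factors only; cubic convolution is omitted for brevity):

```python
import numpy as np

def nearest_neighbour_zoom(band, factor):
    """Nearest-neighbour resampling by an integer factor: every output
    pixel copies the DN of the closest input pixel, so no new values
    are created (though existing values are duplicated)."""
    return np.repeat(np.repeat(band, factor, axis=0), factor, axis=1)

def bilinear_at(band, r, c):
    """Bilinear estimate at fractional position (r, c): a distance-
    weighted average of the four surrounding pixels."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    return ((1 - dr) * (1 - dc) * band[r0, c0] +
            (1 - dr) * dc       * band[r0, c0 + 1] +
            dr       * (1 - dc) * band[r0 + 1, c0] +
            dr       * dc       * band[r0 + 1, c0 + 1])

band = np.array([[10., 20.],
                 [30., 40.]])
print(nearest_neighbour_zoom(band, 2).shape)   # (4, 4)
print(bilinear_at(band, 0.5, 0.5))             # 25.0 (average of all four)
```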
Subsetting, layer stacking, and mosaicking
 Subsetting reduces the data volume, layer stacking combines multiple separate bands,
and mosaicking joins multiple images to cover a broader area.
 Subsetting may be used to reduce the spatial extent of an image, cropping the image
to cover only the specific area of interest, and it may also involve selecting only
certain spectral bands.
 Layer stacking is often used when individual spectral bands are provided in separate
files, but it can also be used to combine two or more different images (perhaps from
different dates or different sensors).
II. Image enhancement
 Image enhancement is used solely to improve the appearance of the imagery, to assist in
visual interpretation and analysis.
 Image enhancement can be done by:
 Contrast stretching, to increase the tonal distinction between various features in a
scene.
 Linear contrast stretching involves identifying lower and upper bounds from the
histogram (usually the minimum and maximum brightness values in the image) and
applying a transformation to stretch this range to fill the full display range.
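The stretch described above can be sketched in a few lines (assuming NumPy; `linear_stretch` is an illustrative helper):

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Linearly map the band's [min, max] range onto the full
    [out_min, out_max] display range."""
    band = band.astype(float)
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) * (out_max - out_min) + out_min

# A low-contrast band using only DNs 60..120 gets stretched to 0..255.
band = np.array([[60., 90.], [120., 110.]])
print(linear_stretch(band))   # 60 -> 0.0, 120 -> 255.0
```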
Cont..
 Spatial filtering to enhance (or suppress) specific spatial patterns in an image.
 Low pass filtering is designed to emphasize larger, homogeneous areas of similar
tone and reduce the smaller detail in an image.
 low-pass filters generally serve to smooth the appearance of an image.
 Smooth the image
 Remove noise
 High pass filtering: does the opposite, serving to sharpen the appearance of fine
detail in an image. Used to:
 Sharpen edges or details
 Enhance contrast
 To detect edges and boundaries between land covers
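Both filter types can be illustrated with a 3×3 kernel applied to a small image (a sketch, assuming NumPy; edge pixels are left unfiltered for simplicity):

```python
import numpy as np

def filter3x3(band, kernel):
    """Apply a 3x3 kernel to the interior pixels (edges left untouched
    for simplicity)."""
    out = band.astype(float).copy()
    for r in range(1, band.shape[0] - 1):
        for c in range(1, band.shape[1] - 1):
            out[r, c] = np.sum(band[r-1:r+2, c-1:c+2] * kernel)
    return out

low_pass  = np.full((3, 3), 1.0 / 9.0)          # local mean: smooths, removes noise
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], float)     # emphasizes edges and fine detail

band = np.zeros((5, 5)); band[:, 2:] = 100.0    # a vertical edge
print(filter3x3(band, low_pass)[2, 2])          # ~66.7: edge smoothed
print(filter3x3(band, high_pass)[2, 2])         # 300.0: edge sharpened
```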
Image interpretation
 Interpretation and identification of targets in remote sensing imagery is
performed either manually (visually, by a human interpreter) or digitally.
 Manual interpretation is done using imagery displayed in a pictorial or
photographic format.
 In this case we refer to the data as being in analogue format.
 Manual interpretation is a subjective process, meaning that the results will vary
with different interpreters.
Elements of Visual Interpretation
 Recognizing targets is the key to interpretation and information extraction.
 Observing the differences between targets and their backgrounds involves comparing
different targets based on any, or all, of the visual elements
 Tone: the relative brightness or colour of objects in an image.
 Variations in tone also allow the elements of shape, texture, and pattern of objects
to be distinguished.
 Shape: the general form, structure, or outline of individual objects.
Shape can be a very distinctive clue for interpretation.
Cont..
 Size: the size of objects in an image is a function of scale. It is important to assess the
size of a target relative to other objects in a scene.
 Pattern: the spatial arrangement of visibly discernible objects.
 Texture: the arrangement and frequency of tonal variation in particular
areas of an image.
 Shadow: also helpful in interpretation, as it may provide an idea of the profile
and relative height of a target or targets, which may make identification easier.
 Association: takes into account the relationship between other recognizable objects
or features in proximity to the target of interest.
Image classification
 Image classification is one of the techniques in the domain of digital image
interpretation.
 Digital image classification is assigning pixels to classes (categories).
 It uses the spectral information represented by the digital numbers in one or more
spectral bands.
 Classification is a process by which a set of items is grouped into sets based on
common characteristics of their spectral information.
 Classification of a satellite image means detecting and grouping pixels into
information categories based on the reflectance value of each individual pixel.
Cont…
 Therefore, image classification assembles groups of similar pixels found in remotely
sensed data into classes that match the informational categories of user interest, by
comparing pixels to one another and to those of known identity.
 The objective is to assign all pixels in the image to particular classes or themes (e.g.
water, coniferous forest, deciduous forest, grassland, building, etc.)
 The resulting classified image is essentially a thematic "map" of the original image.
Informational vs Spectral Classes
 Informational classes - categories of interest to users. - land use, forest types,
geological units, surface temperature, etc…
 Not directly recorded on a remotely sensed image!
 Spectral classes: groups of pixels that are of uniform brightness in each of their several
channels.
 The idea is to link spectral classes to informational classes.
 However, there is usually variability that causes confusion: - a forest can have trees
of varying age, health, species composition, density, etc.
Image classification steps
The process of image classification typically involves five steps:
1. Selection and preparation of the RS images. Depending on the land cover types or
whatever needs to be classified, the most appropriate sensor, the most appropriate
date(s) of acquisition and the most appropriate wavelength bands should be selected.
2. Definition of the clusters in the feature space. Here two approaches are possible:
supervised classification and unsupervised classification.
In a supervised classification, the operator defines the clusters during the training
process; in an unsupervised classification, a clustering algorithm automatically finds
and defines the number of clusters in the feature space.
Cont…
3. Selection of the classification algorithm. Once the spectral classes have been defined in
the feature space, the operator needs to decide on how the pixels (based on their feature
vectors) are to be assigned to the classes.
4. Running the actual classification. Once the training data have been established and the
classifier algorithm selected, the actual classification can be carried out. This means that,
based on its DNs, each “multi-band pixel” (cell) in the image is assigned to one of the
predefined classes.
5. Validation of the result. Once the classified image has been produced its quality is
assessed by comparing it to reference data (ground truth). This requires selection of a
sampling technique, generation of an error matrix, and the calculation of error parameters.
Image Classification Techniques
 There are two ways to classify pixels:

– Unsupervised classification
– Supervised classification
 Unsupervised classification: the identification of natural groups, or structures/patterns,
within multispectral data.
 Spectral classes are defined by the computer through a statistical clustering method;
informational classes are then assigned to the output spectral clusters.
 It is a technique that groups the pixels into clusters based upon the distribution of the
digital numbers.
Cont..
 Clustering algorithms are used to determine the natural (statistical) groupings or
structures in the data.
 The programs require the following:

– maximum number of classes


– maximum number of iterations
– threshold value
 We apply unsupervised classification when:
 no prior information of the area exists from which to create training samples;
 a high degree of ambiguity exists in recognizing classes.
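A clustering algorithm of the kind described above can be sketched with a minimal k-means (an illustrative implementation, assuming NumPy; real packages expose the maximum class count, iteration limit, and threshold as parameters):

```python
import numpy as np

def kmeans(pixels, k, iterations=10, seed=0):
    """A minimal k-means clustering sketch: pixels is an (n, bands)
    array of spectral vectors; returns a cluster label per pixel."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iterations):
        # Assign each pixel to its nearest cluster centre.
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    return labels

# Two spectrally distinct groups of "pixels" in two bands.
pixels = np.array([[10., 12.], [11., 13.], [80., 82.], [79., 81.]])
labels = kmeans(pixels, k=2)
print(labels[0] == labels[1], labels[2] == labels[3])  # True True
```

Note that the algorithm only finds spectral clusters; the analyst must still assign informational class names to each cluster afterwards.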
Cont…
 Advantages: -

 No extensive/detailed prior knowledge of the region is required.

 Minimize human errors/biases (fewer decisions by analyst).

 Produces more uniform classes.

 Spectrally distinct classes present in the data may not have initially been obvious to the
analyst.
 Disadvantages: -

 Classes do not necessarily match the informational categories of interest; the analyst
has limited control over class identities.
 Spectral properties of classes can change with time.
Supervised classification
 It is the process of using samples of known informational classes (training sets) to
classify pixels of unknown identity.
 Identification and delineation of training areas is key to successful implementation.
 The analyst starts with knowledge of the class types, and the classes are chosen at the start.
 Training samples are created for each class.
 Initially the operator outlines sample or training areas for each class (from ancillary
data or ground truth).
 Training data: specified by the corner points of the selected areas; they often require
ancillary data (maps, photos, etc.), and field work is often needed for verification.
Cont…
 Advantages:
 Analyst has control over the selected classes.
 Has specific classes of known identity.
 Does not have to match spectral categories on the final map with informational
categories of interest.
 Can detect serious errors in classification if training areas are misclassified.
 Disadvantages:
 Analyst imposes a classification (may not be natural).
 Training data are usually tied to informational categories and not spectral
properties.
 Training data selected may not be representative.
 Selection of training data may be time consuming.
Steps in Supervised Classification

Can be characterized in four general steps:


I. Recognizing the thematic classes that are present in the multispectral image to be
classified.
II. Selecting a series of training areas which characterize the multispectral (DN)
variability of each thematic class.
III. Using the training areas to characterize the statistical multispectral (DN)
distribution of each thematic class.
IV. Using the generated statistics to assign each multispectral pixel to one of the
recognized themes present in the scene.
Supervised Classification Algorithms
 Minimum distance classifier: any pixel in the scene is categorized using the distances
between:
– the digital number vector (spectral vector) associated with that pixel, and
– the means of the information classes derived from the training sets.
 The pixel is designated to the class with the shortest distance.
 Some versions of this classifier use the standard deviation of the classes to
determine a minimum distance threshold.
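The minimum distance rule can be sketched as follows (assuming NumPy; the class means are hypothetical training-set statistics in two bands):

```python
import numpy as np

def minimum_distance_classify(pixel, class_means):
    """Assign a spectral vector to the class whose training-set mean
    is closest in Euclidean distance."""
    names = list(class_means)
    dists = [np.linalg.norm(pixel - class_means[n]) for n in names]
    return names[int(np.argmin(dists))]

# Hypothetical class means in two bands, derived from training areas.
class_means = {"water":  np.array([15., 10.]),
               "forest": np.array([40., 90.]),
               "soil":   np.array([95., 70.])}
print(minimum_distance_classify(np.array([42., 85.]), class_means))  # forest
```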
Cont..
 Maximum likelihood:
 For each training class, the spectral variance and covariance are calculated.
 The class can then be statistically modelled with a mean vector and covariance
matrix.
 This assumes the class is normally distributed, which is generally okay for
natural surfaces.
 Unidentified pixels can then be given a probability of being in any one class.
 Assign the new pixel to the class with the highest probability or unclassified if
all probabilities low.
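Assuming normally distributed classes, the rule can be sketched by scoring a pixel's log-likelihood under each class model (illustrative statistics; a real implementation would estimate the mean vectors and covariance matrices from training areas):

```python
import numpy as np

def gaussian_log_likelihood(pixel, mean, cov):
    """Log-likelihood of a spectral vector under a class modelled as a
    multivariate normal with the given mean vector and covariance matrix."""
    d = pixel - mean
    inv = np.linalg.inv(cov)
    logdet = np.log(np.linalg.det(cov))
    return -0.5 * (d @ inv @ d + logdet + len(pixel) * np.log(2 * np.pi))

# Hypothetical class statistics estimated from training data (two bands).
classes = {
    "water":  (np.array([15., 10.]), np.array([[4., 1.], [1., 3.]])),
    "forest": (np.array([40., 90.]), np.array([[9., 2.], [2., 8.]])),
}
pixel = np.array([41., 88.])
best = max(classes, key=lambda c: gaussian_log_likelihood(pixel, *classes[c]))
print(best)                                    # forest
```

A full classifier would also leave the pixel unclassified when all class likelihoods fall below a chosen threshold.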
Accuracy assessment
 To assess the accuracy of an image classification, it is common practice to create a
confusion matrix.
 In a confusion matrix, the classification results are compared to additional ground
truth information.
 From the accuracy assessment cell array, two kinds of reports can be derived:

– the error matrix simply compares the reference points to the classified points in a
c x c matrix, where c is the number of classes.
– the accuracy report calculates statistics of the percentages of accuracy, based
upon the results of the error matrix.
Cont…
 Rows correspond to classes in the classification result.
 Columns correspond to classes in the ground truth.
 The diagonal elements in the matrix represent the number of correctly classified
pixels of each class.
 The off-diagonal elements represent misclassified pixels or the classification errors.
 Off-diagonal row elements represent ground truth pixels of other classes that were
included in a certain classification class.
 Such errors are also known as errors of commission or inclusion.
Cont…
 Off-diagonal column elements represent ground truth pixels of a certain class which
were excluded from that class during classification.
 Such errors are also known as errors of omission or exclusion.
 Producer's accuracy measures how well a certain area can be classified.
 It indicates the probability of a reference pixel being correctly classified.
 It can be calculated as follows:
 for each class of ground truth pixels (column), the number of correctly classified
pixels is divided by the total number of ground truth pixels in that class.
Cont..
 User's accuracy (Reliability) is the probability that a pixel classified on the map
actually represents that category on the ground.
 It can be calculated as follows:
 for each class in the classified image (row), the number of correctly classified pixels
is divided by the total number of pixels which were classified as this class.
 The overall accuracy is calculated as:
 The total number of correctly classified pixels (diagonal elements) divided by the
total number of test pixels.
The producer's and user's accuracies are computed as:

PA (%) = CP / CT × 100        UA (%) = CP / RT × 100

where CP is the number of correctly classified pixels for a class, CT is the column total (the number of ground truth pixels of that class), and RT is the row total (the number of pixels classified as that class).

Worked example (columns: ground truth; rows: classified image):

LULC class             Farmland  Grazing  Bush  Eucalyptus  Bare   RT   UA (%)
                                  Land    Land  Plantation  Land
Farm Land                 67        4       0        2        3    76   88.16
Grazing Land               6       53       1        0        0    60   88.33
Bush Land                  0        0      72        3        0    75   96.00
Eucalyptus Plantation      2        0       0       34        5    41   82.93
Bare Land                  3        0       0        2       43    48   89.58
CT                        78       57      73       41       51   300
PA (%)                  85.9    92.98    98.6     82.9     84.3

Overall accuracy = (sum of diagonal) / (total test pixels) × 100 = 269 / 300 × 100 = 89.67%
Kappa value = 0.87
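These statistics can be reproduced directly from the error matrix; a short sketch (assuming NumPy) using the numbers from the worked example:

```python
import numpy as np

# Error matrix: rows = classified image, columns = ground truth
# (Farm, Grazing, Bush, Eucalyptus, Bare).
m = np.array([[67,  4,  0,  2,  3],
              [ 6, 53,  1,  0,  0],
              [ 0,  0, 72,  3,  0],
              [ 2,  0,  0, 34,  5],
              [ 3,  0,  0,  2, 43]], dtype=float)

n = m.sum()
producers = np.diag(m) / m.sum(axis=0) * 100   # per-column (PA)
users     = np.diag(m) / m.sum(axis=1) * 100   # per-row (UA)
overall   = np.diag(m).sum() / n * 100

# Kappa compares observed agreement with chance agreement.
p_o = np.diag(m).sum() / n
p_e = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2
kappa = (p_o - p_e) / (1 - p_e)

print(round(overall, 2), round(kappa, 2))      # 89.67 0.87
```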
General procedure of digital image processing
I. Download the satellite image from USGS (http://glovis.usgs.gov/)
 by creating an account.
II. Preprocessing
 Using image processing software (e.g. ERDAS IMAGINE): layer stacking,
mosaicking, and subsetting the study area.
 Radiometric correction (noise removal and stripe/line dropout removal).
 Geometric correction (geo-referencing).
III. Image enhancement
IV. Image classification, either supervised, unsupervised, or a mixed approach.
V. Accuracy assessment
VI. Finally, preparation of maps and extraction of the required information.
End of lesson. Thank you and good luck!
