This document discusses various image enhancement techniques including point operations, local or neighborhood operations, contrast enhancement, and spatial enhancement. Point operations modify pixel values independently, while neighborhood operations modify pixels based on surrounding pixel values. Contrast enhancement techniques, like minimum-maximum contrast stretching and histogram equalization, aim to improve contrast by mapping input pixel values to better utilize the full dynamic range. Spatial enhancement techniques, also called spatial filtering, modify pixels based on neighboring values to emphasize different spatial frequencies and features like edges.

UNIT III IMAGE ENHANCEMENT

Image characteristics- point, local and regional operation – contrast, spatial feature
and multi-image manipulation techniques – level slicing, contrast stretching, spatial
filtering, edge detections - Fourier transform-FFT, DFT - Band ratio - Principal
Component Analysis (PCA) – Scale-space transform-multi-image fusion.

Image enhancement involves the use of a number of statistical and image manipulation functions available in image processing software. Image features are enhanced by the following two operations:

• Point Operations: the value of each pixel is enhanced independently of the characteristics of neighbouring pixels within each band.

• Local (Neighbourhood) Operations: the value of each pixel is enhanced based on neighbouring brightness values.

Image enhancement techniques based on point operations are also called contrast enhancement techniques, and enhancement techniques based on neighbourhood operations are also known as spatial enhancement techniques.

Contrast Enhancement

Remote sensing systems record reflected and emitted radiant flux exiting from Earth's surface materials. Ideally, one material would reflect a tremendous amount of energy in a certain wavelength and another material would reflect much less energy in the same wavelength. This would result in contrast between the two types of material when recorded by the remote sensing system. Unfortunately, different materials often reflect similar amounts of radiant flux throughout the visible, near-infrared, and middle-infrared portions of the electromagnetic spectrum, resulting in relatively low-contrast imagery. In addition, besides this obvious low-contrast characteristic of biophysical materials, there are cultural factors at work. For example, people in developing countries often construct urban areas using natural building materials (e.g., wood, sand, silt, clay) (Haack et al., 1997). This can cause urbanized areas in developing countries to have about the same reflectance characteristics as the neighboring countryside. Conversely, urban infrastructure in developed countries is usually composed of concrete, asphalt, and fertilized green vegetation. This typically causes urbanized areas in developed countries to have reflectance characteristics significantly different from the surrounding countryside.

An additional factor in the creation of low-contrast remotely sensed imagery is the sensitivity of the detectors. For example, the detectors on most remote sensing systems are designed to record a relatively wide range of scene brightness values (e.g., 0 to 255) without becoming saturated. Saturation occurs if the radiometric sensitivity of a detector is insufficient to record the full range of intensities of reflected or emitted energy emanating from the scene. The Landsat TM detectors, for example, must be sensitive to reflectance from diverse biophysical materials such as dark volcanic basalt outcrops or snow (possibly represented as BVs of 0 and 255, respectively). However, very few scenes are composed of brightness values that use the full sensitivity range of the Landsat TM detectors. This results in relatively low-contrast imagery, with original brightness values that often range from approximately 0 to 100.

To improve the contrast of digital remotely sensed data, it is desirable to use the entire brightness range of the display medium, which is generally a video CRT display or hardcopy output device (discussed in Chapter 5). Digital methods may be more satisfactory than photographic techniques for contrast enhancement because of the precision and wide variety of processes that can be applied to the imagery. There are linear and nonlinear digital contrast-enhancement techniques.

Linear Contrast Enhancement:

Contrast enhancement (also referred to as contrast stretching) expands the original input brightness values to make use of the total dynamic range, or sensitivity, of the output device. To illustrate the linear contrast-stretching process, consider the Charleston, SC, TM band 4 image produced by a sensor system whose image output levels can vary from 0 to 255. A histogram of this image is provided (Figure 8-11a). We will assume that the output device (a high-resolution black-and-white CRT) can display 256 shades of gray (i.e., quantk = 255). The histogram and associated statistics of this band 4 subimage reveal that the scene is composed of brightness values ranging from a minimum of 4 (i.e., min4 = 4) to a maximum of 105 (i.e., max4 = 105), with a mean of 27.3 and a standard deviation of 15.76 (refer to Table 4-7). When these data are displayed on the CRT without any contrast enhancement, we use less than one-half of the full dynamic range of brightness values that could be displayed (i.e., brightness values between 0 and 3 and between 106 and 255 are not used). The image is rather dark and low in contrast, with no distinctive bright areas (Figure 8-11a). It is difficult to visually interpret such an image. A more useful display can be produced if we expand the range of original brightness values to use the full dynamic range of the video display.

Minimum-Maximum Contrast Stretch:

Linear contrast enhancement is best applied to remotely sensed images with Gaussian or near-Gaussian histograms, that is, when all the brightness values fall generally within a single, relatively narrow range of the histogram and only one mode is apparent. Unfortunately, this is rarely the case, especially for scenes that contain both land and water bodies. To perform a linear contrast enhancement, the analyst examines the image statistics and determines the minimum and maximum brightness values in band k, mink and maxk, respectively. The output brightness value, BVout, is computed according to the equation:

BVout = ((BVin - mink) / (maxk - mink)) × quantk

where BVin is the original input brightness value and quantk is the maximum value of the range of brightness values that can be displayed on the CRT (e.g., 255). In the Charleston, SC, example, any pixel with a BVin of 4 would now have a BVout of 0, and any pixel with a BVin of 105 would have a BVout of 255. The original brightness values between 5 and 104 would be linearly distributed between 0 and 255, respectively. The application of this enhancement to the Charleston TM band 4 data is shown in Figure 8-11b. This is commonly referred to as a minimum-maximum contrast stretch. Most image processing systems provide for the display of a before-and-after histogram, as well as a graph of the relationship between the input brightness value (BVin) and the output brightness value (BVout).
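The minimum-maximum stretch described above can be sketched in a few lines of NumPy. This is a minimal illustration, not production code; the function name minmax_stretch and the toy 2 x 3 band are illustrative, and the toy values mimic the Charleston band 4 range (min 4, max 105).

```python
import numpy as np

def minmax_stretch(band, quant=255):
    """Linearly map [band.min(), band.max()] onto [0, quant]."""
    bv_min = float(band.min())
    bv_max = float(band.max())
    # BVout = ((BVin - mink) / (maxk - mink)) * quantk
    stretched = (band.astype(float) - bv_min) / (bv_max - bv_min) * quant
    return np.clip(np.round(stretched), 0, quant).astype(np.uint8)

# Toy band mimicking the Charleston TM band 4 range (min 4, max 105)
band = np.array([[4, 30, 55],
                 [80, 105, 27]], dtype=np.uint8)
out = minmax_stretch(band)
```

A pixel with BVin = 4 maps to 0 and a pixel with BVin = 105 maps to 255, matching the worked example in the text.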

Nonlinear Contrast Enhancement:

Nonlinear contrast enhancements may also be applied. One of the most useful
enhancements is histogram equalization. The algorithm passes through the individual
bands of the dataset and assigns approximately an equal number of pixels to each of
the user-specified output grayscale classes (e.g., 32, 64, 256). Histogram equalization
applies the greatest contrast enhancement to the most populated range of brightness values in the image. It automatically reduces the contrast in the very light or dark parts of the image associated with the tails of a normally distributed histogram.

HISTOGRAM EQUALIZATION:

This is another nonlinear contrast enhancement technique. In this technique, the histogram of the original image is redistributed to produce a uniform population density. This is obtained by grouping certain adjacent grey values. Thus the number of grey levels in the enhanced image is less than the number of grey levels in the original image.

In the first step, we compute the probability of each brightness value by dividing its frequency by the total number of pixels in the scene. The next step is to compute a transformation function for each individual brightness value. For each BV, a new cumulative frequency (Ki) is calculated as:

Ki = Σ (fj / N), summed over j = 0 to i

where fj is the frequency of brightness value j and N is the total number of pixels in the scene.

The histogram equalization procedure iteratively compares the transformation function (Ki) with the original normalized brightness value (the normalized BV lies between 0 and 1). The closest match is reassigned to the appropriate BV.
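The procedure above can be sketched with NumPy by building the cumulative frequency Ki per brightness value and using it as a lookup table. A minimal sketch under the assumption of 8-bit data; the function name histogram_equalize and the toy image are illustrative.

```python
import numpy as np

def histogram_equalize(image, levels=256):
    """Map each BV through the cumulative frequency (Ki) of the histogram."""
    hist = np.bincount(image.ravel(), minlength=levels)
    prob = hist / image.size            # probability of each BV (fj / N)
    k = np.cumsum(prob)                 # transformation function Ki in [0, 1]
    lut = np.round(k * (levels - 1)).astype(np.uint8)
    return lut[image]                   # reassign each BV via the lookup table

img = np.array([[0, 0, 1],
                [1, 2, 255]], dtype=np.uint8)
eq = histogram_equalize(img)
```

The most populated brightness values receive the widest output spacing, which is exactly the behavior described above.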

Spatial Enhancement:

Point-operation-based enhancement techniques operate on each pixel individually, whereas spatial enhancement techniques modify pixel values based on the values of surrounding (neighbourhood) pixels. Spatial enhancement deals largely with spatial frequency. Spatial frequency is the difference between the highest and lowest values of a contiguous set of pixels. Jensen (1996) defines spatial frequency as "the number of changes in brightness value per unit distance for any particular part of an image". A low spatial frequency image consists of smoothly varying pixel values, whereas a high spatial frequency image consists of abruptly changing pixel values (Fig. 12.5).

Image enhancement based on spatial frequency is popularly known as spatial filtering. These techniques are used to improve the appearance of an image by transforming or changing its pixel values. In simple terms, image filtering is a process of removing certain image information to get more detail about particular features. All filtering algorithms involve so-called neighbourhood processing because they are based on the relationship between neighbouring pixels rather than a single pixel. In general, there are three types of filtering techniques:

• a low pass filter is used to smooth the image display by emphasising low frequency details in an image,

• a high pass filter is used to sharpen the linear appearance of image objects such as roads and rivers by emphasising high frequency details in an image, and

• an edge detection filter is used to sharpen the edges around objects.

Low pass filters pass only the low frequency information. They produce images that appear smooth or blurred when compared to the original data. High pass filters pass only the high frequency information, or abrupt grey level changes. They produce images that retain high frequency information. Edge detection filters delineate edges and make shapes more conspicuous. In another approach, band pass filters are used, in which a low pass filter is first applied and then a high pass filter is applied to the image. The low pass filter blocks very high frequencies and the high pass filter blocks the low frequency information. Hence the resultant image retains frequencies that are neither too low nor too high.
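Low- and high-pass filtering of this kind is usually implemented with a small moving window (mask) over the image. The sketch below uses a classic 3 x 3 mean mask and a Laplacian-style sharpening mask; these particular kernels are common textbook choices, not necessarily the masks used in the source figures, and the naive loop is for clarity rather than speed.

```python
import numpy as np

def apply_kernel(image, kernel):
    """Naive 3x3 neighbourhood operation with edge replication.
    (Technically cross-correlation; identical to convolution for
    the symmetric kernels used here.)"""
    padded = np.pad(image.astype(float), 1, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

low_pass = np.ones((3, 3)) / 9.0                  # mean (smoothing) mask
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)  # Laplacian-style mask

img = np.zeros((5, 5))
img[:, 2:] = 100.0                                # vertical step edge
smooth = apply_kernel(img, low_pass)              # blurs the step
edges = apply_kernel(img, high_pass)              # responds only at the step
```

The high-pass output is zero in flat regions (the kernel weights sum to zero) and large at the step, which is why such masks sharpen linear features.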

The most commonly used filters for digital image filtering rely on the concept of convolution and operate in the image domain for reasons of simplicity and processing efficiency. For example, low-pass filters are mainly used to smooth image features and to remove noise, but often at the cost of degrading image spatial resolution (blurring). In order to remove random noise with minimum degradation of resolution, various edge-preserving filters have been developed, such as the adaptive median filter.

Digital image filtering is useful for enhancing lineaments that may represent significant geological structures such as faults, veins or dykes. It can also enhance image texture for discrimination of lithology and drainage patterns. Digital image filtering is used in land use studies for highlighting textures of urban features, road systems and agricultural areas.

Multi image manipulation techniques:

Multi-image manipulation techniques in image enhancement involve the processing of multiple images to create a single enhanced image. These techniques can be particularly useful when working with a set of images that have different exposures, focus, or other properties that affect image quality. Some common multi-image manipulation techniques used in image enhancement are:

1. Image alignment: This technique involves aligning multiple images of the same
scene to create a single composite image. Image alignment can be used to
remove unwanted elements from a scene or to increase the resolution of an
image.
2. High dynamic range (HDR) imaging: This technique involves combining multiple
images with different exposures to create an image with a higher dynamic range.
HDR imaging can be used to capture scenes with a wide range of brightness
levels, such as sunsets or cityscapes.
3. Image stacking: This technique involves combining multiple images of the same
scene to increase the signal-to-noise ratio and create a cleaner image. Image
stacking can be used in astrophotography or macro photography to capture fine
details that are difficult to capture in a single image.
4. Focus stacking: This technique involves combining multiple images of the same
scene with different focus points to create an image with a larger depth of
field. Focus stacking can be used in macro photography or landscape
photography to create an image with sharp focus throughout the scene.
5. Panorama stitching: This technique involves stitching multiple images of a scene
together to create a wider field of view. Panorama stitching can be used in
landscape photography to capture a wider view of a scene than can be captured
in a single image.
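Image stacking (item 3 above) is the simplest of these to demonstrate: averaging N aligned frames reduces random noise by roughly a factor of sqrt(N). The sketch below is a minimal synthetic illustration assuming perfectly aligned frames with independent Gaussian noise; real stacks need the alignment step first.

```python
import numpy as np

# Image stacking: averaging N aligned noisy frames improves SNR by ~sqrt(N).
rng = np.random.default_rng(0)
scene = np.full((64, 64), 50.0)                 # ideal noise-free scene
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]

stacked = np.mean(frames, axis=0)               # per-pixel mean of the stack

noise_single = np.std(frames[0] - scene)        # ~10
noise_stacked = np.std(stacked - scene)         # ~10 / sqrt(16) = ~2.5
```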

These multi-image manipulation techniques can be combined with other image enhancement techniques, such as level slicing or histogram equalization, to create a final enhanced image. The choice of technique(s) will depend on the specific goals of the image enhancement project and the characteristics of the images being processed.

Spatial Filtering:

A characteristic of remotely sensed images is a parameter called spatial frequency, defined as the number of changes in brightness value per unit distance for any particular part of an image. If there are very few changes in brightness value over a given area in an image, this is commonly referred to as a low-frequency area. Conversely, if the brightness values change dramatically over short distances, this is an area of high-frequency detail. Because spatial frequency by its very nature describes the brightness values over a spatial region, it is necessary to adopt a spatial approach to extracting quantitative spatial information. This is done by looking at the local (neighboring) pixel brightness values rather than just an independent pixel value. This perspective allows the analyst to extract useful spatial frequency information from the imagery.

Spatial frequency in remotely sensed imagery may be enhanced or subdued using two different approaches. The first is spatial convolution filtering, based primarily on the use of convolution masks. The procedure is relatively easy to understand and can be used to enhance low- and high-frequency detail, as well as edges in the imagery. Another technique is Fourier analysis, which mathematically separates an image into its spatial frequency components. It is then possible to interactively emphasize certain groups (or bands) of frequencies relative to others and recombine the spatial frequencies to produce an enhanced image. We first introduce the technique of spatial convolution filtering and then proceed to the more mathematically challenging Fourier analysis.

Level slicing:

Level slicing is an enhancement technique whereby the digital numbers (DNs) distributed along the x-axis of an image histogram are divided into a series of analyst-specified intervals, or "slices". All of the DNs falling within a given interval in the input image are then displayed at a single DN in the output image.
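Level slicing maps each analyst-specified interval of input DNs to one output DN; np.digitize does exactly this interval lookup. The boundaries and output values below are illustrative choices.

```python
import numpy as np

def level_slice(image, boundaries, output_values):
    """Assign one output DN to every input DN falling in each interval."""
    # len(boundaries) cut points define len(boundaries)+1 slices;
    # np.digitize returns the slice index for each pixel.
    idx = np.digitize(image, boundaries)
    return np.asarray(output_values)[idx]

img = np.array([[3, 40, 90],
                [120, 200, 255]])
# Slices: 0-49, 50-99, 100-179, 180-255 -> displayed as 4 gray levels
sliced = level_slice(img, boundaries=[50, 100, 180],
                     output_values=[0, 85, 170, 255])
```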

Band ratioing:

Sometimes differences in brightness values from identical surface materials are caused by topographic slope and aspect, shadows, or seasonal changes in sunlight illumination angle and intensity. These conditions may hamper the ability of an interpreter or classification algorithm to correctly identify surface materials or land use in a remotely sensed image. Fortunately, ratio transformations of the remotely sensed data can, in certain instances, be applied to reduce the effects of such environmental conditions. In addition to minimizing the effects of environmental factors, ratios may also provide unique information not available in any single band that is useful for discriminating between soils and vegetation.

To represent the range of the function in a linear fashion and to encode the ratio values in a standard 8-bit format (values from 0 to 255), normalizing functions are applied. Using this normalizing function, the ratio value 1 is assigned the brightness value 128. Ratio values within the range 1/255 to 1 are assigned values between 1 and 128 by the function:

BVout = Int[(ratio × 127) + 1]

Deciding which two bands to ratio is not always a simple task. Often, the analyst simply
displays various ratios and then selects the most visually appealing. The optimum index
factor (OIF) and Sheffield Index (discussed in Chapter 5) can be used to identify
optimum bands for band ratioing (Chavez et al., 1984; Sheffield, 1985). Crippen (1988)
recommended that all data be atmospherically corrected and free from any sensor
calibration problems (e.g., a detector is out of adjustment) before it is ratioed.

The ratio of Charleston, SC, Thematic Mapper bands 3 (red) and 4 (near-infrared) is displayed in Figure 8-18a. This red/infrared ratio provides vegetation information that will be discussed in the section on vegetation indexes in this chapter. Generally, the brighter the pixel, the more vegetation present. Similarly, the ratio of bands 4 and 7 provides useful information. The ratio of band 3 (red) and band 6 (thermal infrared) provides detail about the water column as well as the urban structure.
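A band ratio plus 8-bit normalization can be sketched as follows. The text gives only the mapping for ratios between 1/255 and 1 (ratio 1 maps to 128); the branch for ratios greater than 1 (128 + ratio/2) is a commonly paired companion mapping and is an assumption here, as is the small eps guard against division by zero.

```python
import numpy as np

def band_ratio(band_a, band_b, eps=1e-6):
    """Ratio two bands; eps guards against division by zero."""
    return band_a.astype(float) / (band_b.astype(float) + eps)

def normalize_ratio(ratio):
    """Encode ratio values into 0-255, with ratio 1 assigned BV 128."""
    out = np.where(ratio <= 1.0,
                   np.round(ratio * 127.0 + 1.0),   # 1/255..1 -> 1..128
                   np.round(128.0 + ratio / 2.0))   # >1 branch (assumed)
    return np.clip(out, 0, 255).astype(np.uint8)

red = np.array([[50.0, 10.0]])
nir = np.array([[50.0, 100.0]])
encoded = normalize_ratio(band_ratio(nir, red))    # NIR/red, vegetation-style
```

Equal radiance in the two bands (ratio 1) lands on BV 128, the midpoint of the 8-bit range, as stated in the text.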

Spatial Convolution Filtering:


A linear spatial filter that emphasizes high spatial frequencies may sharpen the edges within an image. A linear spatial filter that emphasizes low spatial frequencies may be used to reduce noise within an image.

Low-frequency Filtering in the Spatial Domain:


Eliason and McEwen (1990) developed adaptive filters for removing noise. Their first filter is effective because it is based on the computation of the standard deviation for each 3 x 3 window, rather than on the standard deviation of the entire scene. Even very minor bit errors are removed from low-variance areas, but valid data along sharp edges and corners are not replaced. Their second adaptive filter, for cleaning up extremely noisy images, was based on the Lee (1983) sigma filter. Lee's filter first computed the standard deviation of the entire scene. Then, each BV in a 3 x 3 moving window was replaced by the average of only those neighboring pixels that had an intensity within a fixed σ range of the central pixel. Eliason and McEwen (1990) used the local (adaptive) σ, rather than the fixed σ computed from the entire scene. The filter averaged only those pixels within the box that had intensities within 1.0 to 2.0σ of the central pixel. This technique effectively reduced speckle in radar images without eliminating the fine details. The two filters can be combined into a single program for processing images with both random bit errors and noisy data.
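A simplified sketch of the Lee-style sigma filter described above, assuming a 3 x 3 window and a single user-supplied σ (the adaptive variant would instead compute σ locally per window). The function name sigma_filter and the toy scene are illustrative.

```python
import numpy as np

def sigma_filter(image, sigma, k=2.0):
    """Lee-style sigma filter: replace each pixel with the mean of the
    3x3 window values whose intensity lies within k*sigma of the
    central pixel (the center itself always qualifies)."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 3, j:j + 3]
            center = image[i, j]
            inside = window[np.abs(window - center) <= k * sigma]
            out[i, j] = inside.mean()
    return out

# Flat 50-valued scene with one bright spike; the spike's neighbors are
# averaged only with values near their own intensity, so edges and flat
# areas are preserved rather than smeared.
img = np.full((5, 5), 50.0)
img[2, 2] = 250.0
smoothed = sigma_filter(img, sigma=5.0)
```

Note that an isolated spike survives as its own "population of one"; this is why the combined program mentioned above pairs the sigma filter with a bit-error filter.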

High-frequency Filtering in the Spatial Domain:

High-pass filtering is applied to imagery to remove the slowly varying components and enhance the high-frequency local variations.
The Fourier transform:
The two-dimensional Fourier magnitude images are symmetric about their centers, and u and v represent spatial frequency. The displayed Fourier magnitude image is usually adjusted to bring F(0, 0) to the center of the image rather than to the upper-left corner. Therefore, the intensity at the center represents the magnitude of the lowest-frequency component, and the frequency increases away from the center. For example, consider the Fourier magnitude of a homogeneous water body: the very bright values found in and around the center indicate that it is dominated by low-frequency components. In the second image, more medium-frequency components are present in addition to the background of low-frequency components. We can easily identify the high-frequency information representing the horizontal and vertical linear features in the original image. Notice the alignment of the cloud of points in the center of the Fourier transform; it represents the diagonal linear features trending in the NW-SE direction in the photograph.

It is important to remember that the strange-looking Fourier transformed image F(u, v) contains all the information found in the original image. It provides a mechanism for analyzing and manipulating images according to their spatial frequencies. It is useful for image restoration, filtering, and radiometric correction. For example, the Fourier transform can be used to remove periodic noise in remotely sensed data. When the pattern of periodic noise is unchanged throughout the image, it is called stationary periodic noise. Striping in remotely sensed imagery is usually composed of stationary periodic noise.

When stationary periodic noise is a single-frequency sinusoidal function in the spatial domain, its Fourier transform consists of a single bright point (a peak of brightness). For example, Figure 8-29a and c display two images of sinusoidal functions with different frequencies (which look very much like striping in remote sensor data). Figure 8-29b and d are their Fourier transforms. The frequency and orientation of the noise can be identified by the position of the bright points. The distance from the bright points to the center of the transform (the lowest-frequency component in the image) is directly proportional to the frequency. A line connecting the bright point and the center of the transformed image is always perpendicular to the orientation of the noise lines in the original image. Striping in remotely sensed data is usually composed of sinusoidal functions with more than one frequency in the same orientation. Therefore, the Fourier transform of such noise consists of a series of bright points lined up in the same orientation.

Because the noise information is concentrated in a point or a series of points in the frequency domain, it is relatively straightforward to identify and remove it there, whereas it is quite difficult to remove in the standard spatial domain. Basically, an analyst can manually cut out these lines or points in the Fourier transform image, or use a computer program to look for such noise and remove it. For example, consider the Landsat TM band 4 data of Al Jubail, Saudi Arabia, obtained on September 1, 1990 (Figure 8-30). The image contains serious stationary periodic striping, which can make the data unusable when conducting near-shore studies of suspended sediment transport. Figure 8-31 documents how a portion of the Landsat TM scene was corrected. First, a Fourier transform of the area was computed. The analyst then modified the Fourier transform by selectively removing the points in the plot associated with the systematic striping (Figure 8-31c). This can be done manually, or a special program can be written to look for and remove such systematic noise patterns in the Fourier transform image. The inverse Fourier transform was then computed, yielding a clean band 4 image, which may be more useful for biophysical analysis (Figure 8-31d). This type of noise could not be removed using a simple convolution mask. Rather, it requires access to the Fourier transform and selective editing out of the noise in the Fourier transform image.

Spatial Filtering in Frequency Domain:

We have discussed filtering in the spatial domain using convolution filters. Filtering can also be performed in the frequency domain. Using the Fourier transform, we can directly manipulate the frequency information of the image. The manipulation is performed by multiplying the Fourier transform of the original image by a mask image, called a frequency domain filter, which blocks or weakens certain frequency components by making the values of certain parts of the frequency spectrum smaller or even zero. We then compute the inverse Fourier transform of the manipulated frequency spectrum to obtain a filtered image in the spatial domain. Numerous algorithms are available for computing the Fast Fourier Transform (FFT) and the inverse Fast Fourier Transform (IFFT) (Russ, 2002). Spatial filtering in the frequency domain generally involves computing the FFT of the original image, multiplying the FFT of a convolution mask of the analyst's choice (e.g., a low-pass filter) with the FFT of the image, and inverting the resultant image with the IFFT; that is:

Filtered image = IFFT[ FFT(image) × FFT(mask) ]
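The FFT-mask-IFFT pipeline above can be sketched with NumPy. This minimal example builds the mask directly in the frequency domain (an ideal circular low-pass filter, an illustrative choice) rather than transforming a spatial mask, and uses a toy striping-like sinusoid as the "noise" to remove.

```python
import numpy as np

def fft_lowpass(image, cutoff):
    """Frequency-domain low-pass: FFT the image, zero all components
    farther than `cutoff` from the center (lowest frequency), IFFT back."""
    f = np.fft.fftshift(np.fft.fft2(image))       # move F(0,0) to the center
    rows, cols = image.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    dist = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    mask = (dist <= cutoff).astype(float)         # ideal low-pass filter
    filtered = np.fft.ifft2(np.fft.ifftshift(f * mask))
    return np.real(filtered)

# A constant scene plus a high-frequency sinusoid (striping-like noise):
# the sinusoid sits at frequency index 16, well outside the cutoff of 8,
# so the low-pass mask removes it and the constant background survives.
x = np.arange(64)
stripes = 20.0 * np.sin(2 * np.pi * 16 * x / 64)
img = 100.0 + stripes[None, :] * np.ones((64, 1))
clean = fft_lowpass(img, cutoff=8)
```

Removing a targeted bright point (a notch filter) instead of everything outside a radius is the variant used for the striping correction described in the text.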
Principal Component Analysis (PCA):
Scale space transform:

Scale space transform is a technique used in digital image processing for analyzing an
image at multiple scales. The goal of this technique is to enhance the features of an
image at different scales while suppressing noise and other unwanted details.

The scale space transform involves generating a series of images that are smoothed with an increasing amount of blur at each level. The resulting images are called the scale space representation of the original image. Generating the scale space representation involves convolving the original image with a Gaussian kernel of increasing standard deviation, which makes the images progressively smoother.
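The construction above can be sketched as a list of progressively blurred copies of one image. This NumPy-only sketch uses a separable Gaussian blur (rows then columns) with an illustrative 3-sigma truncation; libraries such as SciPy provide equivalent ready-made Gaussian filters.

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """Discrete 1-D Gaussian, truncated at ~3 sigma, normalized to sum 1."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / g.sum()

def gaussian_blur(image, sigma):
    """Separable Gaussian blur: convolve rows, then columns (edge padding)."""
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    blur_1d = lambda v: np.convolve(np.pad(v, pad, mode="edge"), k,
                                    mode="valid")
    tmp = np.apply_along_axis(blur_1d, 1, image.astype(float))
    return np.apply_along_axis(blur_1d, 0, tmp)

# Scale space representation: the same image at increasing sigma
img = np.zeros((32, 32))
img[12:20, 12:20] = 100.0                      # a bright square feature
scale_space = [gaussian_blur(img, s) for s in (1.0, 2.0, 4.0)]
```

As sigma grows, fine detail (high variance) is suppressed while the coarse structure of the square persists, which is what multi-scale edge and feature detectors exploit.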

One common application of the scale space transform in image enhancement is edge detection. Since edges in an image correspond to regions of rapid change in intensity, they can be detected by identifying the locations where the gradient magnitude of the image is maximized. However, the gradient magnitude can be affected by noise in the image, and it can be difficult to determine the optimal scale at which to apply edge detection. The scale space transform can address these issues by smoothing the image at multiple scales, allowing edges to be detected at multiple scales. This process is known as scale space edge detection.

Another application of the scale space transform is feature detection and extraction. Features such as corners, blobs, and lines can be extracted at different scales using the scale space transform. The scale space representation of the image can be used to detect these features at different scales and to extract features that are robust to noise and scale changes.

Overall, scale space transform is a powerful technique in digital image processing for
enhancing and analyzing images at multiple scales, and it can be used in a wide range of
applications, including image enhancement, computer vision, and pattern recognition.

Image Fusion:

Image fusion is the process of combining relevant information from two or more remotely sensed images of a scene into a single, highly informative image. The primary reason image fusion has gained prominence in remote sensing applications is that remote sensing instruments have design or observational constraints; therefore, a single image is not sufficient for visual or machine-level information analysis. In satellite imaging, two types of images are available: panchromatic images, which have higher spatial resolution, and multispectral data, which have coarser spatial resolution. These two image types are merged (fused) in order to combine the information of both images in one single image. The image fusion technique thus allows integration of different information sources, and the fused image can have complementary spatial and spectral resolution characteristics. In other words, the fused image will have the spatial information of the panchromatic image and the spectral information of the multispectral image.

Image fusion involves transforming a set of low or coarse spatial resolution multispectral (colour) images to high spatial resolution colour images by fusing a co-registered fine spatial resolution panchromatic (grey scale) image. Usually, three low-resolution images in the visible spectrum (blue, green and red) are used as main inputs to produce a high-resolution natural (true) colour image, as shown in Fig. 12.10, where image 12.10b is a natural colour image with a spatial resolution of 29 m (which has been resampled 400%) and image 12.10a is a panchromatic image with a spatial resolution of 4 m. By combining these inputs, a high-resolution colour image is produced (Fig. 12.10c). The fused output retains the spectral signatures of the input colour image and the spatial features of the input panchromatic image, usually the best attributes of both inputs. The final output, with its high spectral and spatial resolution, is often as good as high-resolution colour aerial photographs.

There are many methods which are used for image fusion. These methods can be
broadly divided into two categories - spatial and transform domain fusions. Some of the
popular spatial domain methods include intensity hue saturation (IHS) transformation,
Brovey method, principal component analysis (PCA) and high pass filtering based
methods. Wavelet transform, Laplace pyramid and curvelet transform based methods
are some of the popular transform domain fusion methods.
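Of the spatial domain methods named above, the Brovey method is the simplest to sketch: each multispectral band is rescaled by the ratio of the panchromatic band to the sum of the multispectral bands, injecting the pan image's spatial detail while preserving the relative band proportions (spectral character). The sketch assumes the multispectral bands are already co-registered and resampled to the pan image's size; the eps guard and toy values are illustrative.

```python
import numpy as np

def brovey_fusion(red, green, blue, pan, eps=1e-6):
    """Brovey transform: scale each band by pan / (R + G + B).
    The pan image supplies the spatial detail; the ratios between the
    output bands equal the ratios between the input bands."""
    total = red + green + blue + eps
    scale = pan / total
    return red * scale, green * scale, blue * scale

# Toy 2x2 example, bands already resampled to the pan image's grid
red   = np.full((2, 2), 30.0)
green = np.full((2, 2), 40.0)
blue  = np.full((2, 2), 50.0)
pan   = np.array([[120.0, 240.0],
                  [60.0, 120.0]])      # carries the spatial detail

r_f, g_f, b_f = brovey_fusion(red, green, blue, pan)
```

In the output, pixel-to-pixel contrast follows the pan image, while the red/green/blue proportions of each pixel match the input multispectral bands.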

Density slicing is the process in which the pixel values are sliced into different ranges and, for each range, a single value or color is assigned in the output image. It is also known as level slicing.
