Unit III Image Enhancement
Image characteristics – point, local and regional operations – contrast, spatial feature and multi-image manipulation techniques – level slicing, contrast stretching, spatial filtering, edge detection – Fourier transform (FFT, DFT) – band ratio – Principal Component Analysis (PCA) – scale-space transform – multi-image fusion.
Image enhancement techniques based on point operations are also called contrast enhancement techniques, and enhancement techniques based on neighbourhood operations are known as spatial enhancement techniques.
Contrast Enhancement
Remote sensing systems record reflected and emitted radiant flux exiting from
Earth's surface materials. Ideally, one material would reflect a tremendous amount of
energy in a certain wavelength and another material would reflect much less energy in
the same wavelength. This would result in contrast between the two types of material
when recorded by the remote sensing system. Unfortunately, different materials often
reflect similar amounts of radiant flux throughout the visible, near-infrared, and
middle-infrared portions of the electromagnetic spectrum, resulting in relatively low-contrast imagery. Besides this obvious low-contrast characteristic of biophysical materials, there are cultural factors at work. For example, people in
developing countries often construct urban areas using natural building materials (e.g.,
wood, sand, silt, clay) (Haack et al., 1997). This can cause urbanized areas in developing
countries to have about the same reflectance characteristics as the neighboring
countryside. Conversely, urban infrastructure in developed countries is usually
composed of concrete, asphalt, and fertilized green vegetation. This typically causes
urbanized areas in developed countries to have reflectance characteristics significantly
different from the surrounding countryside.
To improve the contrast of digital remotely sensed data, it is desirable to use the
entire brightness range of the display medium, which is generally a video CRT display or
hardcopy output device (discussed in Chapter 5). Digital methods may be more
satisfactory than photographic techniques for contrast enhancement because of the
precision and wide variety of processes that can be applied to the imagery. There are
linear and nonlinear digital contrast-enhancement techniques.
For example, consider the Charleston, SC, Thematic Mapper band 4 data, which range from a minimum value of 4 (i.e., min_k = 4) to a maximum value of 105 (i.e., max_k = 105), with a mean of 27.3 and a standard deviation of 15.76 (refer to Table 4-7). When these data are displayed
on the CRT without any contrast enhancement, we use less than one-half of the full
dynamic range of brightness values that could be displayed (i.e., brightness values
between 0 and 3 and between 106 and 255 are not used). The image is rather dark, low
in contrast, with no distinctive bright areas (Figure 8-11a). It is difficult to visually
interpret such an image. A more useful display can be produced if we expand the range
of original brightness values to use the full dynamic range of the video display.
Linear contrast enhancement is best applied to remotely sensed images with Gaussian
or near-Gaussian histograms, that is, when all the brightness values fall generally within
a single, relatively narrow range of the histogram and only one mode is apparent.
Unfortunately, this is rarely the case, especially for scenes that contain both land and
water bodies. To perform a linear contrast enhancement, the analyst examines the image statistics and determines the minimum and maximum brightness values in band k, min_k and max_k, respectively. The output brightness value, BV_out, is computed according to the equation:

BV_out = ((BV_in - min_k) / (max_k - min_k)) × quant_k

where BV_in is the original input brightness value and quant_k is the maximum value of the range of brightness values that can be displayed on the CRT (e.g., 255). In the Charleston, SC, example, any pixel with a BV_in of 4 would now have a BV_out of 0, and any pixel with a BV_in of 105 would have a BV_out of 255.
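The linear stretch above can be sketched in Python with NumPy; the function name and sample array are illustrative, not from the text:

```python
import numpy as np

def linear_stretch(band, quant=255):
    """Min-max linear contrast stretch: BV_out = (BV_in - min) / (max - min) * quant."""
    bv_min, bv_max = band.min(), band.max()
    stretched = (band.astype(float) - bv_min) / (bv_max - bv_min) * quant
    return np.round(stretched).astype(np.uint8)

# Sample band mimicking the Charleston-style range (min 4, max 105)
band = np.array([[4, 27, 105],
                 [50, 80, 10]], dtype=np.uint8)
out = linear_stretch(band)
# A pixel of 4 maps to 0 and a pixel of 105 maps to 255
```

After stretching, the full 0–255 dynamic range of the display is used.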
Nonlinear contrast enhancements may also be applied. One of the most useful
enhancements is histogram equalization. The algorithm passes through the individual
bands of the dataset and assigns approximately an equal number of pixels to each of
the user-specified output grayscale classes (e.g., 32, 64, 256). Histogram equalization
applies the greatest contrast enhancement to the most populated range of brightness
values in the image. It automatically reduces the contrast in the very light or dark
parts of the image associated with the tails of a normally distributed histogram.
HISTOGRAM EQUALIZATION:
In the first step, we compute the probability of each brightness value by dividing its frequency by the total number of pixels in the scene. The next step is to compute a transformation function for each individual brightness value: for each BV_i, a new cumulative frequency k_i is calculated as

k_i = Σ (from j = 0 to i) f_j / N

where f_j is the frequency of brightness value j and N is the total number of pixels. The output value for BV_i is then k_i scaled by the maximum display value (e.g., 255).
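A minimal NumPy sketch of histogram equalization along these lines; the 256-level lookup table and sample values are illustrative:

```python
import numpy as np

def equalize(band, quant=255):
    """Histogram equalization: map each BV through the scaled cumulative distribution."""
    hist = np.bincount(band.ravel(), minlength=quant + 1)
    prob = hist / band.size                     # probability of each brightness value
    k = np.cumsum(prob)                         # cumulative frequency k_i
    lut = np.round(k * quant).astype(np.uint8)  # transformation function (lookup table)
    return lut[band]

band = np.array([[0, 0, 1],
                 [1, 2, 3]], dtype=np.uint8)
out = equalize(band)
```

The most populated brightness values are spread farthest apart, which is exactly the behaviour described above.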
Spatial Enhancement:
• A low pass filter is used to smooth the image display by emphasising low frequency details in an image.
• A high pass filter is used to sharpen the linear appearance of image objects such as roads and rivers by emphasising high frequency details in an image.
• An edge detection filter is used to sharpen the edges around objects.

Low pass filters pass only the low frequency information. They produce images that appear smooth or blurred when compared to the original data. High pass filters pass only the high frequency information, or abrupt gray level changes. They produce images that have high frequency information. Edge detection filters delineate edges and make shapes more conspicuous. In another approach, band pass filters are used, in which a low pass filter is first applied and then a high pass filter is applied on an image. The low pass filter blocks very high frequencies and the high pass filter blocks the low frequency information. Hence the resultant image has frequencies which are neither too low nor too high.
The most commonly used filters for digital image filtering rely on the concept of
convolution and operate in the image domain for reasons of simplicity and processing
efficiency. For example, low-pass filters are mainly used to smooth image features
and to remove noise but often at the cost of degrading image spatial resolution
(blurring). In order to remove random noise with the minimum degradation of
resolution, various edge-preserved filters have been developed such as the adaptive
median filter.
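The convolution-based low- and high-pass filtering described above can be sketched as follows. This is a naive NumPy implementation for illustration; the mean and Laplacian-style masks are common choices, and since both are symmetric, correlation and true (flipped-kernel) convolution coincide here:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D convolution for small masks; border pixels are left at zero."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    out = np.zeros_like(image, dtype=float)
    for i in range(ph, image.shape[0] - ph):
        for j in range(pw, image.shape[1] - pw):
            window = image[i - ph:i + ph + 1, j - pw:j + pw + 1]
            out[i, j] = np.sum(window * kernel)
    return out

low_pass = np.full((3, 3), 1 / 9)                  # mean filter: smooths/blurs
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)  # emphasizes abrupt gray-level changes

image = np.full((4, 4), 10.0)                      # homogeneous (no edges) test image
smooth = convolve2d(image, low_pass)               # interior stays ~10: nothing to blur
edges = convolve2d(image, high_pass)               # interior gives 0: no edges present
```

On real imagery the high-pass output would be large only where brightness changes abruptly, e.g., along roads and field boundaries.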
Digital image filtering is useful for enhancing lineaments that may represent
significant geological structures such as faults, veins or dykes. It can also enhance
image texture for discrimination of lithology and drainage patterns. Digital image
filtering is used for land use studies, highlighting textures of urban features, road
systems and agricultural areas.
Multi-Image Manipulation Techniques:
1. Image alignment: This technique involves aligning multiple images of the same
scene to create a single composite image. Image alignment can be used to
remove unwanted elements from a scene or to increase the resolution of an
image.
2. High dynamic range (HDR) imaging: This technique involves combining multiple
images with different exposures to create an image with a higher dynamic range.
HDR imaging can be used to capture scenes with a wide range of brightness
levels, such as sunsets or cityscapes.
3. Image stacking: This technique involves combining multiple images of the same
scene to increase the signal-to-noise ratio and create a cleaner image. Image
stacking can be used in astrophotography or macro photography to capture fine
details that are difficult to capture in a single image.
4. Focus stacking: This technique involves combining multiple images of the same
scene with different focus points to create an image with a larger depth of
field. Focus stacking can be used in macro photography or landscape
photography to create an image with sharp focus throughout the scene.
5. Panorama stitching: This technique involves stitching multiple images of a scene
together to create a wider field of view. Panorama stitching can be used in
landscape photography to capture a wider view of a scene than can be captured
in a single image.
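Image stacking (item 3 above) can be illustrated with a simulated example; the scene values, noise level, and number of frames below are assumptions chosen for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)   # hypothetical noise-free scene

# Simulate many noisy exposures of the same scene
frames = [clean + rng.normal(0, 10, clean.shape) for _ in range(50)]

# Averaging N frames raises the signal-to-noise ratio by roughly sqrt(N)
stacked = np.mean(frames, axis=0)

single_err = np.abs(frames[0] - clean).mean()   # residual noise in one exposure
stacked_err = np.abs(stacked - clean).mean()    # much smaller after stacking
```

This is why stacking is effective in astrophotography: faint detail buried under random noise in any single frame survives the averaging, while the noise largely cancels.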
Spatial Filtering:
Spatial frequency in remotely sensed imagery may be enhanced or subdued using two
different approaches. The first is spatial convolution filtering based primarily on the
use of convolution masks. The procedure is relatively easy to understand and can be
used to enhance low- and high-frequency detail, as well as edges in the imagery.
Another technique is Fourier analysis, which mathematically separates an image into its
spatial frequency components. It is then possible to interactively emphasize certain groups (or bands) of frequencies relative to others and recombine the spatial frequencies to produce an enhanced image. We first introduce the technique of spatial convolution filtering and then proceed to the more mathematically challenging Fourier analysis.
Level slicing:
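Level slicing (also called density slicing) converts the continuous gray tones of an image into a series of discrete intervals, or slices, each displayed as a single brightness value or color. A minimal NumPy sketch; the slice boundaries and output levels below are illustrative:

```python
import numpy as np

def level_slice(band, boundaries, levels):
    """Map ranges of brightness values to discrete output levels (density slicing)."""
    out = np.empty_like(band)
    sliced = np.digitize(band, boundaries)   # index of the slice each pixel falls in
    for idx, level in enumerate(levels):
        out[sliced == idx] = level
    return out

band = np.array([[10, 60, 130],
                 [200, 250, 90]], dtype=np.uint8)
# Three slices: 0-99 -> 0, 100-199 -> 128, 200-255 -> 255
out = level_slice(band, boundaries=[100, 200], levels=[0, 128, 255])
```

In practice each slice is often assigned a distinct color rather than a gray level, which makes thematic ranges (e.g., water depth or temperature classes) immediately visible.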
Band ratioing:
Deciding which two bands to ratio is not always a simple task. Often, the analyst simply
displays various ratios and then selects the most visually appealing. The optimum index
factor (OIF) and Sheffield Index (discussed in Chapter 5) can be used to identify
optimum bands for band ratioing (Chavez et al., 1984; Sheffield, 1985). Crippen (1988)
recommended that all data be atmospherically corrected and free from any sensor
calibration problems (e.g., a detector is out of adjustment) before it is ratioed.
The ratio of Charleston, SC, Thematic Mapper bands 3 (red) and 4 (near-infrared) is
displayed in Figure 8-18a. This red/infrared ratio provides vegetation information that
will be discussed in the section on vegetation indexes in this chapter. Generally, the
brighter the pixel, the more vegetation present. Similarly, the ratio of bands 4 and 7
provides useful information. The ratio of band 3 (red) and band 6 (thermal infrared)
provides detail about the water column as well as the urban structure.
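A band ratio can be computed with simple element-wise division; the sample red and near-infrared values below are illustrative, and a small constant guards against division by zero:

```python
import numpy as np

def band_ratio(band_a, band_b):
    """Ratio of two co-registered bands; a small constant avoids division by zero."""
    return band_a.astype(float) / (band_b.astype(float) + 1e-6)

red = np.array([[30.0, 60.0],
                [90.0, 20.0]])
nir = np.array([[120.0, 60.0],
                [30.0, 100.0]])

# Low red/NIR ratios flag vegetated pixels (strong NIR reflectance),
# high ratios flag bare or built-up surfaces
ratio = band_ratio(red, nir)
```

For display, the ratio values are typically rescaled to the 0–255 range of the output device.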
High-pass filtering is applied to imagery to remove the slowly varying components and enhance the high-frequency local detail.
The Fourier transform:
The Fourier magnitude images are symmetric about their centers, and u and v represent spatial frequency. The displayed Fourier magnitude image is usually adjusted to bring F(0, 0) to the center of the image rather than to the upper-left corner. Therefore, the intensity at the center represents the magnitude of the lowest-frequency component. The frequency increases away from the center. For example, consider the Fourier magnitude of the homogeneous water body. The very bright values found in and around the center indicate that it is dominated by low-frequency components. In the second image, more medium-frequency components are present in addition to the background of low-frequency components. We can easily identify the high-frequency information representing the horizontal and vertical linear features in the original image. Notice the alignment of the cloud of points in the center of the Fourier transform. It represents the diagonal linear features trending in the NW-SE direction in the photograph.
We have discussed filtering in the spatial domain using convolution filters. It can also
be performed in the frequency domain. Using the Fourier transforms, we can
manipulate directly the frequency information of the image. The manipulation can be
performed by multiplying the Fourier transform of the original image by a mask image
called a frequency domain filter, which will block or weaken certain frequency
components by making the values of certain parts of the frequency spectrum become
smaller or even zero. Then we can compute the inverse Fourier transform of the
manipulated frequency spectrum to obtain a filtered image in the spatial domain.
Numerous algorithms are available for computing the Fast Fourier transform (FFT) and
the inverse Fast Fourier transform (IFFT) (Russ, 2002). Spatial filtering in the frequency domain generally involves computing the FFT of the original image, multiplying the FFT of a convolution mask of the analyst's choice (e.g., a low-pass filter) with the FFT of the image, and inverting the resultant image with the IFFT; that is,

Filtered image = IFFT [ FFT(original image) × FFT(mask) ]
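Frequency-domain filtering of this kind can be sketched with NumPy's FFT routines. Here the mask is built directly in the frequency domain (an ideal low-pass filter); the cutoff radius and test image are illustrative assumptions:

```python
import numpy as np

def fft_low_pass(image, cutoff):
    """Frequency-domain low-pass: zero out components farther than `cutoff`
    from the shifted spectrum's center, then invert back to the spatial domain."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    v, u = np.ogrid[:rows, :cols]
    dist = np.sqrt((v - rows / 2) ** 2 + (u - cols / 2) ** 2)
    mask = dist <= cutoff                     # ideal low-pass filter mask
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return filtered.real

image = np.add.outer(np.arange(32, dtype=float),
                     np.arange(32, dtype=float))   # simple brightness ramp
smooth = fft_low_pass(image, cutoff=8)
```

Replacing `dist <= cutoff` with `dist > cutoff` turns the same sketch into a high-pass filter; smoother masks (e.g., Gaussian or Butterworth) avoid the ringing that ideal filters introduce.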
Principal Component Analysis (PCA):
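PCA transforms a correlated multiband dataset into a set of uncorrelated components ordered by decreasing variance, so the first few components capture most of the scene's information. A NumPy sketch based on the eigenvectors of the band covariance matrix; the function name and synthetic bands are illustrative:

```python
import numpy as np

def pca_bands(bands):
    """Principal components of a multiband image: rotate the band space so the
    first component carries the maximum variance (eigenvectors of covariance)."""
    n_bands = bands.shape[0]
    pixels = bands.reshape(n_bands, -1).astype(float)
    pixels -= pixels.mean(axis=1, keepdims=True)    # center each band
    cov = np.cov(pixels)                            # band-to-band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]               # sort components by variance
    components = eigvecs[:, order].T @ pixels       # project pixels onto eigenvectors
    return components.reshape(bands.shape), eigvals[order]

rng = np.random.default_rng(1)
base = rng.normal(0, 10, (16, 16))
# Two highly correlated bands plus small independent noise
bands = np.stack([base + rng.normal(0, 1, base.shape),
                  base + rng.normal(0, 1, base.shape)])
pcs, variances = pca_bands(bands)
# PC1 carries the shared variance; PC2 is mostly residual noise
```

This is why PCA is useful both for reducing a six- or seven-band dataset to a few informative components and for isolating noise in the trailing components.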
Scale space transform:
Scale space transform is a technique used in digital image processing for analyzing an
image at multiple scales. The goal of this technique is to enhance the features of an
image at different scales while suppressing noise and other unwanted details.
The scale space transform involves generating a series of images that are smoothed with increasing amounts of blur at each level. The resulting images are called
the scale space representation of the original image. The process of generating the
scale space representation involves convolving the original image with a Gaussian kernel
of increasing standard deviation, which results in the images becoming progressively
smoother.
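The construction just described can be sketched with a separable Gaussian blur in pure NumPy; truncating the kernel at a radius of 3σ is an assumption of this sketch:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel; 2-D smoothing is done separably (rows then columns)."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth(image, sigma):
    k = gaussian_kernel(sigma)
    # Convolve every row, then every column, with the 1-D kernel
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)

image = np.zeros((33, 33))
image[16, 16] = 1.0   # impulse test image

# Scale space: the same image at progressively coarser scales
scales = [smooth(image, s) for s in (1.0, 2.0, 4.0)]
# The impulse's peak flattens as sigma grows
```

Features that persist across many levels of the scale space correspond to large, significant structures; features that vanish quickly are fine detail or noise.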
Overall, scale space transform is a powerful technique in digital image processing for
enhancing and analyzing images at multiple scales, and it can be used in a wide range of
applications, including image enhancement, computer vision, and pattern recognition.
Image Fusion:
Image fusion is the process of combining relevant information of two or more remotely
sensed images of a scene into a single, highly informative image. The primary reason image fusion has gained prominence in remote sensing applications is that remote sensing instruments have design or observational constraints; therefore, a single image is not sufficient for visual or machine-level information analysis. In satellite imaging, two types of images are available. The panchromatic
images acquired by satellites have higher spatial resolution and multispectral data have
coarser spatial resolution. These two image types are merged (fused) in order to get
information of both the images in one single image. The image fusion technique, thus,
allows integration of different information sources and fused image can have
complementary spatial and spectral resolution characteristics. In other words, fused
image will have spatial information of the panchromatic image and spectral information
of multispectral image.
There are many methods which are used for image fusion. These methods can be
broadly divided into two categories - spatial and transform domain fusions. Some of the
popular spatial domain methods include intensity hue saturation (IHS) transformation,
Brovey method, principal component analysis (PCA) and high pass filtering based
methods. Wavelet transform, Laplacian pyramid and curvelet transform based methods
are some of the popular transform domain fusion methods.
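As one example from the spatial-domain family, the Brovey method scales each multispectral band by the ratio of the panchromatic band to the sum of the multispectral bands, injecting the pan image's spatial detail while keeping the band proportions. A hedged sketch, assuming the multispectral bands have already been resampled to the panchromatic grid; the band values are illustrative:

```python
import numpy as np

def brovey_fuse(ms, pan):
    """Brovey fusion: scale each MS band by pan / (sum of MS bands)."""
    total = ms.sum(axis=0) + 1e-6   # small constant avoids division by zero
    return ms * (pan / total)

# Hypothetical 3-band multispectral image, constant per band (ratio 1:2:3)
ms = np.ones((3, 4, 4)) * np.array([10.0, 20.0, 30.0])[:, None, None]
pan = np.full((4, 4), 90.0)

fused = brovey_fuse(ms, pan)
# Band proportions are preserved: fused bands stay in ratio 1:2:3
```

Because the spectral proportions are preserved but the absolute values follow the pan band, Brovey fusion is good for visual sharpening, though it can distort radiometry for quantitative analysis.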