Visual Image Interpretation

The document provides an introduction to image interpretation, emphasizing the importance of aerial photographs and remote sensing imagery in various fields. It outlines the process of image interpretation, the factors affecting image quality, and the essential elements for analyzing images, such as tone, shape, and texture. Additionally, it discusses interpretation techniques and the use of interpretation keys to aid in identifying objects and conditions in imagery.

INTRODUCTION TO IMAGE INTERPRETATION

Aerial photographs, as well as imagery obtained by remote sensing using aircraft or spacecraft as platforms, have applicability in various fields. By studying the qualitative as well as quantitative aspects of images recorded by various sensor systems, such as aerial photographs (black-and-white, black-and-white infrared, colour and colour infrared), multiband photographs and satellite data (both pictorial and digital), including thermal and radar imagery, an interpreter well experienced in his field can derive a great deal of information.

Image Interpretation

Image interpretation is defined as the act of examining images to identify objects and judge their significance. An interpreter studies remotely sensed data and attempts, through a logical process, to detect, identify, measure and evaluate the significance of environmental and cultural objects, patterns and spatial relationships. It is an information extraction process.

Anyone who looks at a photograph or imagery in order to recognize an image is an interpreter. A soil scientist, a geologist or a hydrogeologist, a forester or a planner, trained in image interpretation, can recognize the vertical view presented by the ground objects on an aerial photograph or a satellite image, which enables him or her to detect many small or subtle features that an amateur would either overlook or misinterpret. An interpreter is, therefore, a specialist trained in the study of photography or imagery, in addition to his or her own discipline. The present discussion mainly pertains to the techniques of visual interpretation, the application of various instruments and the extraction of information.

Aerial photographs, as well as imagery obtained by remote sensing employing electromagnetic energy as the means of detecting and measuring target/object characteristics, have applicability in various fields for four basic reasons.

First - They represent a large area of the earth from a perspective view and provide a format that facilitates the study of objects and their relationships.

Second - Certain types of imagery and aerial photographs can provide a 3-D view.

Third - Characteristics of objects not visible to the human eye can be transformed into images.

Fourth - They provide the observer with a permanent record/representation of objects at a given moment of time. In addition, the data are real-time, repetitive and, when in digital form, computer compatible for quick analysis.

BASIC PRINCIPLES OF IMAGE INTERPRETATION

Images and their interpretability

- An image taken from the air or space is a pictorial presentation of the pattern of a landscape.

- The pattern is composed of indicators of objects and events that relate to the physical, biological and cultural components of the landscape.

- Similar conditions, in similar circumstances and surroundings, reflect similar patterns, and unlike conditions reflect unlike patterns.

- The type and amount of information that can be extracted is proportional to the knowledge, skill and experience of the analyst, the methods used for interpretation and the analyst's awareness of any limitations.

Factors Governing the Quality of an image

In addition to the inherent characteristics of an object itself, the following factors influence image quality:

- Sensor characteristics (film types, digital systems)
- Season of the year and time of day
- Atmospheric effects
- Resolution of the imaging system and scale
- Image motion
- Stereoscopic parallax

Factors Governing Interpretability

1. Visual and mental acuity of the interpreter

2. Equipment and technique of interpretation

3. Interpretation keys, guides, manuals and other aids.

Visibility of Objects

The objects on aerial photographs or imagery are represented in the form of photo images: in tones of grey in black-and-white photography, and in different colours/hues in colour or false-colour photography. The visibility of objects in the images varies due to:

a) The inherent characteristics of the objects

b) The quality of the aerial photography or imagery.


Inherent Characteristics of Objects

In any photographic image-forming process, the negative is composed of tiny silver deposits formed by the action of light on photosensitive film during exposure. The amount of light received by the various sections of the film depends on the reflection of electromagnetic radiation (EMR) from various objects. This light, after passing through the optical system, gives rise to different tones and textures.

In visual interpretation, an interpreter is primarily concerned with recognizing changes in tonal values, thereby differentiating an object of a certain reflective characteristic from another. However, he must be aware that the same object, under different moisture or illumination conditions and depending on the wavelength of incident energy, may reflect a different amount of light. For this reason, a general key based on the tone characteristics of objects cannot be prepared. In such cases, other characteristics of objects, such as their shape, size and pattern, help in their recognition.

Quality of Aerial Photography/Imagery:


The quality of image interpretation depends on the quality of the basic material on which the images are formed. Normally, in visual interpretation, these images are formed on the photograph and represented in tones of grey or in colours of various hues, chroma and values. A study of the factors affecting image quality and the characteristics of images is essential from an interpreter's point of view.

The Tonal or Colour Contrast Between an Image and Its Background

Photographic tone contrast is the difference in brightness between an image and its background. Similarly, in colour photography, colour contrast is the result of all hue, value and chroma differences between the image and its background. The tonal contrast can be increased sufficiently with proper filters.

Image Sharpness Characteristics

Sharpness is the abruptness with which tone or colour contrasts appear on the
photograph or imagery. Both tone and sharpness enable an interpreter to distinguish
one object from another. To a large extent, image sharpness is dependent on the
focussing ability of the optical system. Image sharpness is closely related to the
resolution of the optical system.

Stereoscopic Parallax Characteristics

Stereoscopic parallax is the displacement of the apparent position of an image with respect to a reference point of observation. Sufficient parallax is necessary in order to distinguish objects from their shadows. Parallax depends on the height of an object, the flying height and the stereobase or its corollary, the forward overlap. Stereoscopic parallax can be improved by choosing the right base/height (B/H) ratio.
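
The dependence of parallax on object height and flying height can be made concrete with the standard parallax height equation. The following is a minimal Python sketch; the flying height and parallax values are illustrative assumptions, not measurements from this text.

```python
def height_from_parallax(flying_height_m, base_parallax_mm, differential_parallax_mm):
    """Estimate object height from stereoscopic parallax measurements.

    Standard parallax height equation: h = H * dp / (P + dp)
    where H  = flying height above the ground (metres),
          P  = absolute parallax of the object's base (mm on the photo),
          dp = differential parallax between the object's top and base (mm).
    """
    return flying_height_m * differential_parallax_mm / (
        base_parallax_mm + differential_parallax_mm
    )

# Example: H = 3000 m, base parallax 90 mm, differential parallax 1.2 mm
# gives an object height of roughly 39.5 m.
print(round(height_from_parallax(3000, 90.0, 1.2), 1))
```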

The above discussion may appear oversimplified, as a number of other factors can be mentioned which obviously affect image quality. However, for the purpose of simplification, we may conclude that other factors influence image quality indirectly through their effect on tone, sharpness or parallax. In general, if image motion and exposure times were no problem, we would obviously use fine-grain, high-definition, slow photographic material, with an appropriate filter, in order to obtain better sharpness and contrast.

ELEMENTS OF IMAGE INTERPRETATION

The word photograph in Greek means to draw with light, and a photograph, in
fact, is nothing more or less than a graphic record of energy intensities. An image
represents energy reflected, emitted or transmitted from an object in different parts of
the spectrum.
Image interpretation is essential for the efficient and effective use of the data.
While the above properties of aerial photographs/imagery help an interpreter to
detect objects due to their tonal variations, he must also take advantage of
other important characteristics of the objects in order to recognize them. The following
elements of image interpretation are regarded as being of general significance,
irrespective of the precise nature of the imagery and the features it portrays.

Elements of Image Interpretation

Primary Elements: black-and-white tone, colour, stereoscopic parallax

Spatial Arrangement of Tone and Colour: size, shape, texture, pattern

Based on Analysis of Primary Elements: height, shadow

Contextual Elements: site, association

Shape
Numerous components of the environment can be identified with reasonable
certainty merely by their shape. This is true of both natural features and man‐made
objects.

Size
In many cases, the length, breadth, height, area and/or volume of an object can
be significant, whether these are surface features (e.g. different tree species) or
atmospheric phenomena (e.g. cumulus versus cumulonimbus clouds). The approximate
size of many objects can be judged by comparison with familiar features (e.g. roads) in
the same scene.

Tone

We have seen how different objects emit or reflect different wavelengths and intensities of radiant energy. Such differences may be recorded as variations of picture tone, colour or density, which enable the discrimination of many spatial variables, for example, different crop types on land, or water bodies of contrasting depths or temperatures at sea. The terms 'light', 'medium' and 'dark' are used to describe variations in tone.

Shadow

Hidden profiles may be revealed in silhouette (e.g. the shapes of buildings or the forms of field boundaries). Shadows are especially useful in geomorphological studies, where micro-relief features may be easier to detect under conditions of low-angle solar illumination than when the sun is high in the sky. Unfortunately, deep shadows in areas of complex detail may obscure significant features, e.g. the volume and distribution of traffic on a city street.

Pattern

Repetitive patterns of both natural and cultural features are quite common,
which is fortunate because much image interpretation is aimed at the mapping and
analysis of relatively complex features rather than the more basic units of which they
may be composed. Such features include agricultural complexes (e.g. farms and
orchards) and terrain features (e.g. alluvial river valleys and coastal plains).

Texture

Texture is an important image characteristic closely associated with tone, in the sense that it is a quality that permits two areas of the same overall tone to be differentiated on the basis of micro-tonal patterns. Common image textures include smooth, rippled, mottled, lineated and irregular. Unfortunately, texture analysis tends to be rather subjective, since different interpreters may use the same terms in slightly different ways. Texture is rarely the only criterion of identification or correlation employed in interpretation. More often it is invoked as the basis for a subdivision of categories already established using more fundamental criteria. For example, two rock units may have the same tone but different textures.

Site

At an advanced stage in image interpretation, the location of an object with respect to terrain features or other objects may be helpful in refining the identification and classification of certain picture contents. For example, some tree species are found more commonly in one topographic situation than in others, while in industrial areas the association of several clustered, identifiable structures may help us determine the precise nature of the local enterprise. For example, the combination of one or two tall chimneys, a large central building, conveyors, cooling towers and solid fuel piles points to the correct identification of a thermal power station.

Resolution

Resolution of a sensor system may be defined as its capability to discriminate
two closely spaced objects from each other. More than most other picture
characteristics, resolution depends on aspects of the remote sensing system itself,
including its nature, design and performance, as well as the ambient conditions during
the sensing programme and the subsequent processing of the acquired data. An interpreter must have knowledge of the resolution of various remote sensing data products.

Stereoscopic Appearance

When the same feature is photographed from two different positions with overlap between successive images, an apparently solid model of the feature can be seen under a stereoscope. Such a model is termed a stereomodel, and the three-dimensional view it provides can aid interpretation. This valuable information cannot be obtained from a single print.

In practice, these nine elements assume a variety of ranks of importance. Consequently, the order in which they may be examined varies from one type of study to another. Sometimes they can lead to the assessment of conditions not directly visible in the images, in addition to the identification of features or conditions that are explicitly revealed. The process by which related invisible conditions are established by inference is termed "convergence of evidence". It is useful, for example, in assessing the social class and/or income group occupying a particular neighbourhood, or the soil moisture conditions in agricultural areas.

Image interpretation may be very general in its approach and objective, such as
in the case of terrain evaluation or land classification. On other occasions it is highly
specific, related to clear‐cut goals in such fields as geology, forestry, transport studies
and soil erosion mapping. In no instance should the interpreter fail to take into account
features other than those for which he or she is specifically searching. Failure to give
adequate consideration to all aspects of a terrain is, perhaps, the commonest source of
interpretation error.

The interpretation of images is therefore an essentially deductive process, and the identification of certain key features leads to the recognition of others. Once a suitable starting point has been selected, the elements listed earlier are considered either consciously or subconsciously. The completeness and accuracy of the results depend on an interpreter's ability to integrate these elements in the most appropriate way to achieve the objectives that have been set for him or her.

TECHNIQUES OF IMAGE INTERPRETATION

Interpretation techniques have developed mainly by the empirical method. The gap between the photo image on the one hand and the reference level, i.e. the level of knowledge in a specific field in the human mind, on the other hand, is bridged by the use of image interpretation. The techniques adopted for one discipline may differ from those adopted for another. The sequence of activity and the search method may have to be modified to suit specific requirements.

Image interpretation comprises at least three mental acts that may or may not
be performed simultaneously:

i) The measurement of images of objects
ii) Identification of the objects imaged
iii) Appropriate use of this information in the solution of the problem.

In visual interpretation, the methodology of interpretation for each separate discipline will depend on:

- The kind of information to be interpreted
- The accuracy of the results to be obtained
- The reference level of the person executing the interpretation
- The kind and type of imagery or photographs available
- The instruments available
- The scale and other requirements of the final map
- The external knowledge available and any other sensory surveys that have been or will be made in the near future in the same area.

From the scrutiny of the above list, it is evident that no stereotyped approach
can be prescribed for the techniques or the methodology of photo‐interpretation. An
interpreter must work out the plan of operations and the techniques depending on the
project's special requirements.

In carrying out this task, an interpreter may use many more types of data than
those recorded on the images he is to interpret. Many sources, such as literature,
laboratory measurements, analysis, field work and ground and aerial photographs (or
imagery) make up this collateral material.

Activities of Image‐interpretation

Image interpretation is a complex process comprising physical as well as mental activities. It demands familiarity with so wide a variety of stimuli that even the most accomplished interpreter is occasionally dependent on reference materials.

The reference material in the form of identification keys is a useful aid in image
interpretation. Many types of image interpretation keys are available or may be
constructed depending on the abilities of the interpreter and the purpose to be served
by the interpretation.

INTERPRETATION KEYS

Scope of image interpretation keys:

There are four types of image interpretation keys:

An Item Key is concerned with the identification of an individual object or condition.

A Subject Key is a collection of item keys concerned with the identification of principal objects or conditions within a given subject category.

A Regional Key is a compilation of item or subject keys dealing with the identification of objects or conditions characteristic of a particular region.

An Analogous Area Key is a subject or regional key which has been prepared for an accessible area and which, by analogy, may be used in the interpretation of objects or conditions in inaccessible areas which exhibit similar characteristics.

Technical level image interpretation keys:

A Technical Key is one prepared for use by image interpreters who have had professional or technical training or experience in the subject concerned.

A Non-technical Key is one prepared for use primarily by image interpreters who have not had professional or technical training or experience in the subject concerned.

Intrinsic character of image interpretation keys:

A Direct Key is one designed primarily for the identification of discrete objects or conditions directly discernible on images.

An Association Key is one designed primarily for the deduction of information not directly discernible on images.

Manner of organization or presentation of image interpretation keys:

All image interpretation keys are based upon diagnostic features of the images of objects or conditions to be identified. As stated above, depending upon the manner in which the diagnostic features are organized, two general types of keys are recognized: selective and elimination. Selective keys are arranged in such a way that the interpreter simply selects the example corresponding to the object he is trying to identify. Elimination keys are arranged so that the interpreter follows a prescribed step-wise process that leads to the elimination of all items except the one he is trying to identify. Most interpreters consider the latter type of key preferable.

Selective Keys:

An Essay Key is one in which objects or conditions are described in textual form, using images for illustration only.

A File Key is an item key composed of one or more selected images, with notes concerning their interpretation. This type of key is generally assembled for use by an individual interpreter.

A Photo Key is an item key composed of one or more selected images, together with notes concerning their interpretation, assembled for rapid reproduction and distribution to other interpreters.

An Integrated-selective Key is one in which images and recognition features for any individual object or condition, within a subject or regional key, are so associated that by reference to the appropriate portion of the key the object or condition can be identified.

Elimination Keys:

A Disk Key is one in which selected image recognition features are grouped or arranged on one or more disks so that, when the recognition features are properly aligned, all but one object or condition of the group under consideration is eliminated from view.

A Punch Card Key is one in which selected image recognition features are arranged in groups on separate punch cards. When the properly selected cards are superimposed upon a coded base, all but one object or condition of the subject group under consideration is eliminated from view.

A Dichotomous Key is one in which the graphic or word description assumes the form of a series of pairs of contrasting characteristics which permit progressive elimination of all but one object or condition of the subject group under consideration.

A modification of the elimination key is to allow probabilistic rather than absolute identification at any step or steps in a sequence of steps. A probabilistic key, based on local a priori statistics, is necessary where identification cannot otherwise be completed.
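
The dichotomous key described above can be pictured as a small decision tree of paired, contrasting questions. The following sketch is illustrative only; the feature names, classes and branching criteria are hypothetical and not taken from this text.

```python
def dichotomous_key(tone, shape, texture):
    """Toy dichotomous key: each step poses a pair of contrasting
    characteristics and eliminates one branch until a single class remains.
    All features and classes here are hypothetical examples."""
    if tone == "dark":
        # Dark-toned features: smooth water surface vs. coarse forest canopy
        return "water body" if texture == "smooth" else "forest"
    else:
        # Light-toned features: regular built-up shapes vs. bare soil
        return "built-up area" if shape == "rectangular" else "bare soil / fallow field"

# Example use: a light-toned, rectangular, mottled feature
print(dichotomous_key(tone="light", shape="rectangular", texture="mottled"))
```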

METHODS OF SEARCH AND SEQUENCE OF INTERPRETATION

In visual interpretation, and whenever possible, especially when examining vertical or nearly vertical photographs, the scene is viewed stereoscopically. The sequence begins with the detection and identification of objects, followed by measurements of the image. The image is then considered in terms of information, usually non-pictorial, and finally deductions are made. The interpreter should work methodically, proceeding from general considerations to specific details and from known to unknown features.

There are two basic methods that may be used to study aerial imagery:

"Fishing expedition" ‐ an examination of each and every object so as not to miss


anything,

"Logical search" ‐ quick scanning and selective intensive study.

Sequence of Activities

Normally, the activities in an image-interpretation sequence include the following:

Detection

Detection means selectively picking out an object or element of importance for the particular kind of interpretation in hand. It is often coupled with recognition, in which case the object is not only seen but also recognized.

Recognition and Identification

Recognition and identification together are sometimes termed photo-reading. They are fundamentally the same process and refer to the classification of an object, by means of specific or local knowledge, within a known category once the object has been detected in a photo image.

Analysis

Analysis is the process of separating or delineating sets of similar objects. In analysis, boundary lines are drawn separating the groups, and the degree of reliability of these lines may be indicated.

Deduction

Deduction may be directed to the separation of different groups of objects or elements and the deduction of their significance based on converging evidence. The evidence is derived mainly from visible objects, or from invisible elements which give only partial information in the form of certain correlative indications.

Classification

Classification establishes the identity of a surface or an object delineated by analysis. It includes the codification of the surfaces into a pertinent system for use in field investigation. Classification is made in order to group surfaces or objects according to those aspects that, from a certain point of view, bring out their most characteristic features.

Idealization

Idealization refers to the process of drawing standardized representations of what is actually seen in the photo image. This process is helpful for the subsequent use of the photograph/imagery during field investigations and in the preparation of base maps.

These processes are better explained by taking an example. If investigations of dwellings are to be carried out, the first step would be to detect photo images having a rectangular shape, etc. The next step would be to recognize, say, single-storey construction and double-storey construction. Delineation of the two groups of objects would be done under the process of analysis, in which a boundary line may be drawn separating the two groups. At this stage, in view of various converging evidence, it may be deduced that one group consists of single-storey dwellings. In more difficult cases this would be done in the process of classification, and a code number assigned to the groups to help field examinations. Cartographic representation would be made under the process of idealization.

Convergence of evidence:

Image interpretation is basically a deductive process. Features that can be recognized and identified directly lead the image interpreter to the identification and location of other features. Even though all aspects of an area are inextricably intertwined, the interpreter must begin somewhere; he cannot consider drainage, landform, vegetation and man-made features simultaneously. He should begin with one feature or group of features and then move on to the others, integrating each of the facets of the terrain as he goes. For each terrain, the interpreter must find his own point of beginning and then consider each of the various aspects of the terrain in a logical fashion. Deductive image interpretation requires conscious or unconscious consideration of the elements of image interpretation listed earlier. The completeness and accuracy of image interpretation are proportional to the interpreter's understanding of how and why images show shape, size, tone, shadow, pattern and texture, while an understanding of site, association and resolution strengthens the interpreter's ability to integrate the different features making up a terrain. For beginners, systematic consideration of the elements of image interpretation should precede integrated terrain interpretation.

The principle of convergence of evidence requires the interpreter first to recognize basic features or types of features and then to consider their arrangement (pattern) in the areal context. Several interpretations may suggest themselves. Critical examination of the evidence usually shows that all interpretations but one are unlikely or impossible. The greatest difficulty in interpreting images lies in judging degrees of probability.

Sensors in Photographic Image Interpretation

As stated earlier, characteristics not visible to the human eye can also be recorded and displayed by using the proper sensor types. Digital data can also be transferred onto any type of film, depending on the type of study to be carried out. Normally, four types of film are used for visual data display, as follows:

a) Black-and-white panchromatic
b) Black-and-white infrared
c) Colour
d) Colour infrared/false colour

All of the above types are available in different grades and sensitivities that can
be preselected for a particular use. An interpreter must know the characteristics of
each of these before starting an interpretation job. The same is true for the digital data
display for multispectral, thermal and radar imagery.

METHODS OF ANALYSIS AND REFERENCE LEVELS

The mental process involved in image interpretation is related to the reference level of the interpreter, i.e. the capacity of the interpreter to make decisions, consciously or unconsciously. The reference level in this context is the amount of knowledge stored in the mind of any group of personnel involved in the interpretation of photographs or imagery. Methods of analysis depend on the type of material used for interpretation as well as the instruments and equipment available. The following methods are generally used.

Monocular analysis: for satellite imagery and photographic enlargements.

Stereoscopic analysis: for vertical/near-vertical aerial photographs, and SPOT as well as IRS-1C stereo imagery.

Densitometric analysis: for both aerial photographs and imagery, for quantitative analysis using densitometers, based on the measurement of tonal variation, a fundamental characteristic of all objects, for identifying terrain features. Such analysis is now carried out using digital image processing systems (refer to the DIP lectures).
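
In digital form, the densitometric idea reduces to reading pixel values (digital numbers) along a transect or within areas of interest and comparing their tonal statistics. A minimal sketch with NumPy, assuming a single-band image is already available as an array; the placeholder data, file name and sample locations are illustrative only.

```python
import numpy as np

# Assume `band` is a single-band image as a 2-D array of digital numbers (DNs),
# e.g. loaded from a file such as "scene_band5.tif" (hypothetical name).
band = np.random.randint(0, 128, size=(400, 400))  # placeholder data for the sketch

# Sample DNs along a horizontal transect (row 200, columns 50..300)
transect = band[200, 50:300]
print("transect mean DN:", transect.mean(), "std:", transect.std())

# Compare mean tone of two areas of interest (e.g. suspected water vs. vegetation)
area_a = band[100:150, 100:150]
area_b = band[250:300, 250:300]
print("area A mean DN:", area_a.mean(), " area B mean DN:", area_b.mean())
```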

The Use of Multiple Images in Image Interpretation

Advances in sensor and platform technology have increased the amount and type of information available to the image interpreter. Sensor systems currently in use are capable of presenting the interpreter with a visual representation of energy emitted, reflected and transmitted at wavelengths outside the visible portion of the electromagnetic spectrum, and therefore beyond direct visual experience. Available sensor platforms can present the interpreter with a variety of scales. The impact of these technological advances has been to present the interpreter with a multiplicity of data for interpretation, leading to the use of multi-band, multi-date, multi-stage and multi-disciplinary analysis techniques.

Multi‐band concept and images

Basic to the interpretation of multiple images is what has come to be known as the multi-band concept. The level of energy reflected, emitted and transmitted by objects normally varies with wavelength throughout the electromagnetic (EM) spectrum. The signature of an object on an image is governed by the amount of energy received by the sensor within the wavelength range in which that sensor images. Therefore, a unique tonal signature for a particular object can often be identified if the energy that is being emitted, reflected and/or transmitted from it is broken down into carefully selected wavelength bands. Stated another way, conventional imaging systems sensitive to broad wavelength regions within the EM spectrum, e.g. colour film in a conventional aerial camera, may not be as effective in producing adequate object-to-background contrast ratios as imagery obtained in a number of selected narrow wavelength bands.

The term multiband is often applied to the analysis and/or acquisition of imagery
from within a particular wavelength band of the electromagnetic spectrum, e.g. visible,
ultraviolet or thermal infrared. The term multispectral image analysis is commonly
used to denote the analysis of imagery from more than one spectral region, so it follows
that the combined analysis of images acquired in the ultraviolet, thermal infrared
and/or microwave regions would also increase the amount of information which could
be extracted by the interpreter. It is important then for the image interpreter to
become aware of the important imaging characteristics of a variety of sensor systems.
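
The multi-band idea can be illustrated numerically: compute an object-to-background contrast ratio in each band and keep the band (or bands) in which the object stands out best. A minimal sketch with purely illustrative digital numbers, not measurements from this text.

```python
import numpy as np

# Mean digital numbers of an object and its background in four hypothetical
# narrow bands (green, red, near-IR 1, near-IR 2). Values are illustrative.
object_dn     = np.array([42.0, 35.0, 110.0, 95.0])
background_dn = np.array([40.0, 35.0,  60.0, 70.0])

# Simple contrast ratio per band: brighter / darker of the two mean tones
contrast = np.maximum(object_dn, background_dn) / np.minimum(object_dn, background_dn)
best_band = int(np.argmax(contrast))

print("contrast ratio per band:", np.round(contrast, 2))
print("band with best object-to-background contrast:", best_band)
```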

Multi‐date concept and imagery

Just as the recording of data in various bands of the spectrum can provide valuable information to the image interpreter, so too, in many cases, can the recording of energy from the same area through time prove valuable (multi-date or sequential photos/imagery). Many features exhibit unique changes with the passage of time. It may be difficult, even with the use of multi-band, multispectral imagery acquired on a single date, to discriminate and identify the mix of agricultural crops growing in a particular area. If multiple image acquisition missions are coupled with a knowledge of the crop phenological cycles (crop calendar) of the area under investigation, identification is facilitated. This is true because crops grown in an area generally exhibit unique growth characteristics which, if known, can aid in identification. Changes in urban areas, flood or disaster damage assessment, and the monitoring of changes in coastal morphology are examples of studies in which the interpretation of multi-date imagery can add significant information.
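
A simple way to exploit multi-date imagery digitally is image differencing: subtract co-registered images from two dates and flag pixels whose tone has changed markedly. A minimal sketch, assuming the two arrays are already co-registered and radiometrically comparable; the placeholder data and the change threshold are illustrative choices.

```python
import numpy as np

# Two co-registered single-band images of the same area from different dates.
# Placeholder arrays stand in for real data read from files.
date1 = np.random.randint(0, 256, size=(300, 300)).astype(np.int16)
date2 = np.random.randint(0, 256, size=(300, 300)).astype(np.int16)

difference = date2 - date1                 # positive = brighter on the later date
changed = np.abs(difference) > 40          # illustrative change threshold in DN

print("pixels flagged as changed:", int(changed.sum()), "out of", changed.size)
```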

Multi‐stage concept and data

In a multi-stage sampling scheme, progressively more detailed information is obtained for smaller sub-samples of the area under investigation. Basically, this method takes advantage of increasingly finer resolution, which can be provided by the use of either multiple sensor platforms (such as low- and high-altitude aircraft and spacecraft) or a variety of focal lengths from a single sensor platform. These act as sub-samples which can be used to increase the efficiency of sample selection in each subsequent stage. The precision of the estimated information depends on the relationships between predictions made by image interpretation and the values of measured characteristics of the sample units used to estimate population parameters. It should be stressed here that the accuracy of the final estimate therefore depends on the quality of image interpretation at all levels of generalization. The methodology is easy to employ. Operationally, the technique is efficient and allows a greater portion of the work to be concentrated in areas of higher value.
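
The link between image-based predictions and field measurements can be illustrated with a simple two-stage ratio estimate: interpret a quantity on every image unit, measure it on the ground for a small sub-sample, and correct the image-based total by the average measured-to-interpreted ratio. The numbers below are purely illustrative assumptions, not data from this text.

```python
# Stage 1: a quantity (e.g. timber volume) interpreted from imagery for all units
interpreted_all = [120, 95, 150, 80, 110, 130, 70, 140]      # image-based estimates

# Stage 2: field measurements for a small sub-sample of the same units
subsample = {1: 102, 4: 118}                                  # unit index -> measured value

# Ratio correction: average of measured / interpreted over the sub-sample
ratio = sum(subsample[i] / interpreted_all[i] for i in subsample) / len(subsample)

corrected_total = ratio * sum(interpreted_all)
print(f"correction ratio: {ratio:.3f}  corrected total: {corrected_total:.1f}")
```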

Multi‐disciplinary analysis

It has been stated that remotely sensed data are "once written, many times read." Basically, this means that one image can be examined by a number of specialists, and each may gain information of value to his or her particular discipline. In order to ascertain the agricultural potential of a given area, a team of geologists, hydrologists, pedologists, agronomists, meteorologists, geographers, foresters and economists, among others, might examine the imagery of that area. When the imagery is interpreted by various discipline specialists, a synergistic effect can be created. For many types of earth-resource analysis, the convergent use of image interpreters of varying backgrounds is likely to produce a more accurate and thorough analysis than could be achieved by a single image interpreter working alone.

INSTRUMENTS FOR VISUAL INTERPRETATION AND TRANSFER OF DATA

Interpretation Instruments

Monocular instruments: magnifiers

Stereoscopic instruments: pocket and mirror stereoscopes, interpretoscope, zoom stereoscope, scanning mirror stereoscope

Instruments for Transfer of Data

For flat terrain: sketchmaster, stereosketchmaster, zoom transferscope, optical pantograph or reflecting projector

For hilly terrain: stereoplotters. An orthophoto, together with its stereo-mate, can also be used for interpretation and delineation. Since the preparation of an orthophoto and its stereo-mate is a complex process, the method is not so popular.

Conclusion

The scope of image interpretation as a tool for analysis and data collection is widening with the advance of remote sensing techniques. Space images have already found use in interpretation for the earth sciences. Because of the flexibility of its techniques and substantial gains in accuracy, speed and economy over conventional ground methods, the future of image interpretation is assured. However, great endeavour is required on the part of the interpreter to assess his or her own empirical knowledge in order to formulate the optimum data requirements for different disciplines. This is essential for the better development of image interpretation and for widening the scope of application of its techniques.

IMAGE INTERPRETATION FOR MULTISPECTRAL SCANNER IMAGERY

Introduction

The application of MSS image interpretation has been demonstrated in many fields, such as agriculture, botany, cartography, civil engineering, environmental monitoring, forestry, geography, geophysics, land resource analysis, land use planning, oceanography, and water resource analysis.

LANDSAT MSS Image Interpretation

As shown in Table 6.3, the image scale and area covered per frame are very different for Landsat images than for conventional aerial photographs. For example, more than 1600 aerial photographs at a scale of 1:20,000, with no overlap, are required to cover the area of a single Landsat MSS image! Because of scale and resolution differences, Landsat images should be considered a complementary interpretive tool rather than a replacement for low-altitude aerial photographs. For example, the existence and/or significance of certain geologic features trending over tens or hundreds of kilometres, and clearly evident on a Landsat image, might escape notice on low-altitude aerial photographs. On the other hand, housing quality studies from aerial imagery would certainly be more effective using low-altitude aerial photographs rather than Landsat images, since individual houses cannot be resolved on Landsat MSS images. In addition, most Landsat MSS images can only be studied in two dimensions, whereas most aerial photographs are acquired in stereo.

Table 6.3 Comparison of Image Characteristics

Image Format | Image Scale | Area Covered per Frame (km²)
Low-altitude USDA-ASCS aerial photograph (230 x 230 mm) | 1:20,000 | 21
High-altitude NASA aerial photograph, RB-57 or ER-2 (230 x 230 mm) | 1:120,000 | 760
Landsat scene (185 x 185 km) | 1:1,000,000 | 34,000
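
The "more than 1600 photographs" figure quoted above follows directly from the areas in Table 6.3; a one-line check, assuming the rounded areas given in the table:

```python
# Frame areas from Table 6.3 (km^2): one Landsat MSS scene vs. one 1:20,000 photo
landsat_scene_area = 34_000
photo_area = 21

photos_needed = landsat_scene_area / photo_area   # with no overlap
print(round(photos_needed))                        # roughly 1619 photographs
```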

Resolution

The effective resolution (in terms of the smallest adjacent ground features that can be distinguished from each other) of Landsat MSS images is about 79 m (about 30 m on Landsat-3 RBV images). However, linear features as narrow as a few metres, having a reflectance that contrasts sharply with that of their surroundings, can often be seen on Landsat images (for example, two-lane roads, concrete bridges crossing water bodies, etc.). On the other hand, objects much larger than 79 m across may not be apparent if they have a very low reflectance contrast with their surroundings, and features detected in one band may not be detected in another.

Stereoscopic ability

As a line scanning system, the Landsat MSS produces images having one-dimensional relief displacement. Because there is displacement only in the scan direction and not in the flight-track direction, Landsat images can be viewed in stereo only in areas of side lap on adjacent orbit passes. This side lap varies from about 85 percent near the poles to about 14 percent at the equator. Consequently, only a limited area of the globe may be viewed in stereo. Also, the vertical exaggeration when viewing MSS images in stereo is quite small compared to conventional air photos. This stems from the extreme platform altitude (900 km) of the satellite compared to the base distance between images. Whereas stereo airphotos may have a 4X vertical exaggeration, stereo Landsat vertical exaggeration ranges from about 1.3X at the equator to less than 0.4X at latitudes above about 70°. Subtle as this stereo effect is, geologists in particular have found stereoviewing in Landsat overlap areas quite valuable in studying topographic expression. However, most interpretations of Landsat imagery are made monoscopically, either because sidelapping imagery does not exist or because the relief displacement needed for stereoviewing is so small. In fact, because of the high altitude and narrow field of view of the MSS, images from the scanner contain little or no relief displacement in nonmountainous areas. When such images are properly processed, they can be used as planimetric maps at scales as large as 1:250,000. Recently, these limitations have been overcome with the stereo-capable panchromatic sensors of SPOT and IRS-1C.

Individual Band Interpretation

The most appropriate band or combination of bands of MSS imagery should be selected for each interpretive use. Bands 4 (green) and 5 (red) are usually best for detecting cultural features such as urban areas, roads, new subdivisions, gravel pits, and quarries. In such areas, band 5 is generally preferable because the better atmospheric penetration of red wavelengths provides a higher contrast image. In areas of deep, clear water, greater water penetration is achieved in band 4. Bands 4 and 5 are excellent for showing silty water flowing into clear water. Bands 6 and 7 (near infrared) are best for delineating water bodies. Since energy at near-infrared wavelengths penetrates only a short distance into water, where it is absorbed with very little reflection, surface water features have a very dark tone in bands 6 and 7. Wetlands with standing water or wet organic soil, where little vegetation has yet emerged, also have a dark tone in bands 6 and 7, as do asphalt-surfaced pavements and wet bare soil areas. Both bands 5 and 7 are valuable in geologic studies, the largest single use of Landsat MSS data.

In the comparative appearance of the four Landsat MSS bands, the extent of the urban areas is best seen in bands 4 and 5 (light toned). The major roads are best seen in band 5 (light toned), clearly visible in band 4, undetectable in band 6, and slightly visible in band 7 (dark toned). An airport's concrete runway and taxiway are clearly visible: the concrete pavement is clearly visible in bands 4 and 5 (light toned), very faint in band 6 (light toned), and undetectable in band 7. The asphalt pavement is very faint in bands 4 and 5 (light toned), reasonably clear in band 6 (dark toned), and best seen in band 7 (dark toned). The major lakes and the connecting river are best seen in bands 6 and 7 (dark toned). These lakes have a natural green colour in mid-July resulting from the presence of algae in the water. In the band 4 image, all lakes have a tone similar to the surrounding agricultural land, which consists principally of green-leafed crops such as corn. Where the lakes are mostly surrounded by urban development, their shorelines can be reasonably well detected; where they are principally surrounded by agricultural land, their shorelines are often indistinct. The shorelines are more distinct in band 5, but still somewhat difficult to delineate. The surface water of the major lakes and the connecting river is clearly seen in both bands 6 and 7 (dark toned). The agricultural areas have a rectangular field pattern with different tones representing different crops; this is best seen in bands 5, 6 and 7. For purposes of crop identification and mapping from MSS images, the most effective procedure is to view two or more bands simultaneously in an additive colour viewer or to interpret colour composite images. Small forested areas appear dark toned in bands 4 and 5. In regions receiving a winter snowfall, forested areas can best be mapped using wintertime images where the ground is snow covered. On such images, the forested and shrubland areas will appear dark toned against a background of light-toned snow.

Temporal data

Each Landsat satellite passes over the same area on the earth's surface during daylight hours about 20 times per year. The actual number of times per year a given ground area is imaged depends on the amount of cloud cover, the sun angle, and whether or not the satellite is in operation on any specific pass. This provides the opportunity for many areas to have Landsat images available for several dates per year. Because, in many areas, the appearance of the ground changes dramatically with the seasons, the image interpretation process is often improved by utilizing images from two or more dates.

Consider, for example, band 5 images of the same area acquired in September and December. In the December image the ground is snow covered (about 200 mm deep) and all water bodies are frozen, except for a small stretch of the river. The physiography of the area can be better appreciated by viewing the December image, due in part to the low solar elevation angle in winter, which accentuates subtle relief. The snow-covered upland areas and valley floors have a very light tone, whereas the steep, tree-covered valley sides have a darker tone. The identification of urban, agricultural and water areas can better be accomplished using the September image. The identification of forested areas can be more positively done using the December image.

Synoptic view

The synoptic view afforded by space platforms can be particularly useful for
observing short‐lived phenomena. However, the use of Landsat images to capture such
ephemeral events as floods, forest fires, and volcanic activity is, to some degree, a hit‐
or‐miss proposition. If a satellite passes over such an event on a clear day when the
imaging system is in operation, excellent images of such events can be obtained. On the
other hand, such events can easily be missed if there are no images obtained within the
duration of the event or, as is often true during floods, extensive cloud cover obscures
the earth's surface. However, some of these events do leave lingering traces. For
example, soil is typically wet in a flooded area for at least several days after the flood
waters have receded, and this condition may be imaged even if the flood waters are not
there. Also, the area burned by a forest fire will have a dark image tone for a
considerable period of time after the actual fire has ceased.

In the red band image, the vast quantities of silt flowing from the river into the delta can be clearly seen. However, it is difficult to delineate the boundary between land and water in the delta area. In the near-infrared band image, the silt-laden water cannot be distinguished from the clear water because of the lack of water penetration by near-infrared wavelengths. However, the delineation of the boundary between land and water is much clearer than in the red band.

The black tone of a burned area contrasts sharply with the lighter tones of the surrounding unburned forest area.

Imagery also records tropical deforestation in response to intense population pressures, for example an extensive area of forest land being cleared for transmigration site development. The dark-toned area shows forested land; the light-toned "fingers" cutting into the forested land are areas being actively cleared. The indistinct, lighter-toned plumes rising from the newly cleared areas are smoke plumes from burning debris.

False Colour Composite (FCC)

Bands 4, 5, and 7 are combined (band 4 displayed as blue, band 5 as green and band 7 as red, the conventional assignment) to produce the colour image. Spectral characteristics and colour signatures of Landsat MSS colour composites are comparable to those of IR colour aerial photographs. Typical signatures are as follows:

Healthy vegetation: red
Clear water: dark blue to black
Silty water: light blue
Red beds: yellow
Bare soil, fallow fields: blue
Windblown sand: white to yellow
Cities: blue
Clouds and snow: white
Shadows: black
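
Digitally, such a false colour composite is simply a stack of three co-registered bands into the red, green and blue channels of a display image. A minimal sketch with NumPy, assuming the MSS bands are already loaded as arrays and using the band-to-channel assignment described above; the placeholder data and the simple stretch are illustrative choices.

```python
import numpy as np

def false_colour_composite(band4, band5, band7):
    """Stack MSS bands into an RGB false colour composite:
    band 7 (near-IR) -> red, band 5 (red) -> green, band 4 (green) -> blue."""
    def stretch(b):
        # simple linear stretch of each band to 0..255 for display
        b = b.astype(float)
        return (255 * (b - b.min()) / (b.max() - b.min() + 1e-9)).astype(np.uint8)
    return np.dstack([stretch(band7), stretch(band5), stretch(band4)])

# Placeholder arrays stand in for real co-registered band data
b4 = np.random.randint(0, 128, (200, 200))
b5 = np.random.randint(0, 128, (200, 200))
b7 = np.random.randint(0, 128, (200, 200))
rgb = false_colour_composite(b4, b5, b7)
print(rgb.shape, rgb.dtype)   # (200, 200, 3) uint8, ready for display or saving
```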

Land‐Use and Land‐Cover Interpretation on FCC

Urban areas have a grid pattern of major traffic arteries. Central commercial areas have blue signatures caused by pavement, roofs, and an absence of vegetation. The suburbs are pink to red, depending on the density and condition of lawns, trees, and other landscape vegetation. Small, bright red areas are parks, golf courses, cemeteries, and other concentrations of vegetation.

Agricultural areas show a rectangular pattern of bright red (growing crops) and blue-grey (fallow fields). Red circles are formed by alfalfa fields irrigated by centre-pivot irrigation sprinklers.

Rangeland has a red-brown signature in the fall season image. Forest and brush cover the mountainous terrain of the Transverse Ranges: lower elevations are covered by chaparral and higher elevations by pine trees, both also appearing red-brown.

Water is represented by the ocean and scattered reservoirs. The dark blue colour is typical of the ocean much of the year, but during the winter rainy season, muddy water from various rivers forms light-coloured plumes that are carried.

The desert areas have a light yellow signature that is typical of arid land. In the valley are several light-gray to very dark gray triangles, which are alluvial fans of gravel eroded from the bedrock of the Transverse Ranges. Dry lakes have white signatures caused by silt and clay deposits.

Major geologic features are also recognizable in the Landsat image. The fault, which separates the valley from the Transverse Ranges, is expressed as linear scarps and canyons.

Return‐Beam Vidicon System

Return‐beam vidicons (RBV) are framing systems that are essentially television
cameras. Landsat 1 and 2 carried three RBVs that recorded green, red and photographic
IR images of the same area on the ground. These images can be projected in blue,
green, and red to produce infrared color images comparable to MSS images. There
were problems with the color RBV system, and the images were inferior to MSS images;
for these reasons, only a few colour RBV images were acquired. Landsat 3 deployed an extensively modified version of the RBV.

Typical RBV Images

In typical RBV images, the array of small crosses, called reseau marks, is used for geometric control. The 1:1,000,000 scale is the same as that of the MSS image with which these RBV frames may be compared. This comparison illustrates the advantage of the higher spatial resolution of the RBV. For example, in the urban area the grid of secondary streets is recognizable on the RBV image but not on the MSS image.

Landsat 3 collected RBV images of many areas around the world. Where RBV and
MSS images are available, it is useful to obtain both data sets in order to have the
advantages of higher spatial resolution (from RBV) plus IR color spectral information
(from MSS).

LANDSAT TM Image Interpretation

Landsat TM images are useful for image interpretation for a much wider range of
applications than Landsat MSS images. This is because the TM has both an increase in
the number of spectral bands and an improvement in spatial resolution as compared
with the MSS. The MSS images are most useful for large area analyses, such as geologic
mapping. More specific mapping, such as detailed land cover mapping, is difficult on
MSS images because so many pixels of the original data are "mixed pixels," pixels
containing more than one cover type. With the decreased IFOV of the TM data, the
area containing mixed pixels is smaller and interpretation accuracies are increased. The
TM's improved spectral and radiometric resolution also aid image interpretation. In
particular, the incorporation of the mid‐IR bands (bands 5 and 7) has greatly increased
the vegetation discrimination of TM data.

There is a dramatic improvement in resolution from the MSS's ground resolution cell of 79 x 79 m to the TM's ground resolution cell of 30 x 30 m. Many indistinct light-toned patches on an MSS image can be clearly seen as recent suburban development on the TM image. Also, features such as agricultural field patterns that are indistinct on the MSS image can be clearly seen on the TM image.

The TM has more narrowly defined wavelength ranges for the three TM bands (2, 3 and 4) roughly comparable to MSS bands 4 to 7, and has added bands in four wavelength ranges not covered by the MSS (Table 6.4).

Table 6.4 Thematic Mapper spectral bands

Band 1 (0.45 to 0.52 µm): Blue-green; no MSS equivalent. Maximum penetration of water, which is useful for bathymetric mapping in shallow water. Useful for distinguishing soil from vegetation and deciduous from coniferous plants.

Band 2 (0.52 to 0.60 µm): Green; coincident with MSS band 4. Matches the green reflectance peak of vegetation, which is useful for assessing plant vigor.

Band 3 (0.63 to 0.69 µm): Red; coincident with MSS band 5. Matches a chlorophyll absorption band that is important for discriminating vegetation type.

Band 4 (0.76 to 0.90 µm): Reflected IR; coincident with portions of MSS bands 6 and 7. Useful for determining biomass content and for mapping shorelines.

Band 5 (1.55 to 1.75 µm): Reflected IR. Indicates moisture content of soil and vegetation. Penetrates thin clouds. Good contrast between vegetation types.

Band 6 (10.40 to 12.50 µm): Thermal IR. Nighttime images are useful for thermal mapping and for estimating soil moisture.

Band 7 (2.08 to 2.35 µm): Reflected IR. Coincides with an absorption band caused by hydroxyl ions in minerals. Ratios of bands 5 and 7 are potentially useful for mapping hydrothermally altered rock associated with mineral deposits.
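
The band ratio noted for band 7 in Table 6.4 is straightforward to compute digitally: divide band 5 by band 7 pixel by pixel, so that hydroxyl-bearing (altered) materials, which absorb strongly in band 7, show high ratio values. A minimal sketch; the placeholder arrays and the percentile threshold are illustrative assumptions.

```python
import numpy as np

# Placeholder arrays for co-registered TM band 5 and band 7 digital numbers
band5 = np.random.randint(1, 255, size=(100, 100)).astype(float)
band7 = np.random.randint(1, 255, size=(100, 100)).astype(float)

# Band ratio 5/7: hydroxyl absorption depresses band 7, raising the ratio
ratio_57 = band5 / np.maximum(band7, 1.0)   # guard against division by zero

# Flag pixels whose ratio is unusually high (illustrative threshold)
possible_alteration = ratio_57 > np.percentile(ratio_57, 95)
print("pixels flagged:", int(possible_alteration.sum()))
```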

For example, the blue-green water of the lake, river and ponds has moderate reflection in bands 1 and 2 (blue and green), a small amount of reflection in band 3 (red), and virtually no reflection in bands 4, 5, and 7 (near- and mid-infrared); reflection from roads and urban streets is least in band 4; and overall reflection from agricultural crops is highest in band 4. The golf courses also show high reflectance in band 4. Glacial ice movement is recorded by many drumlins and scoured bedrock hills, and present-day crop and soil moisture patterns reflect the alignment of this grooved terrain. The thermal band (band 6) has a less distinct appearance than the other bands because the ground resolution cell of this band is 120 m. It has an indistinct, rather than blocky, appearance because the data have been resampled into the 30-m format of the other bands. As would be expected on a summertime thermal image recorded during the daytime, the roads and urban areas have the highest radiant temperature, and the water bodies have the lowest radiant temperature.

Table 6.5 TM Band-Colour Combinations Shown

Combination | Band shown as Blue | Band shown as Green | Band shown as Red
a | 1 | 2 | 3
b | 2 | 3 | 4
c | 3 | 4 | 5
d | 3 | 4 | 7
e | 3 | 5 | 7
f | 4 | 5 | 7

Table 6.5 lists the colour combinations used to generate each of these composites. Note that (a) is a "normal colour" composite, (b) is a "colour infrared" composite, and (c) to (f) are some of the many other "false colour" combinations that can be produced. For the mapping of water sediment patterns, a normal colour composite of bands 1, 2, and 3 (displayed as blue, green, and red) was preferred. For most other applications, such as mapping urban features and vegetation types, the combinations of (1) bands 2, 3, and 4 (colour infrared composite), (2) bands 3, 4, and 7, and (3) bands 3, 4, and 5 (all in the order blue, green, and red) were preferred. In general, vegetation discrimination is enhanced through the incorporation of data from one of the mid-IR bands (5 or 7). Combinations of any one visible band (1 to 3), the near-IR band (4), and one mid-IR band (5 or 7) are also very useful. However, a great deal of personal preference is involved in band-colour combinations for interpretive purposes, and for specific applications other combinations could be optimum.

Energy emitted from objects at ambient earth temperatures is sensed within the 8 to 14 µm wavelength range. When objects are extremely hot, such as flowing lava, emitted energy can be sensed at wavelengths shorter than thermal infrared wavelengths (3 to 14 µm). Forest fires are another example of an extremely hot phenomenon that can be sensed at wavelengths shorter than thermal infrared.

On a Landsat TM band 5 image, timber clear-cutting practices can be interpreted. Here the darker toned areas are dense stands of Douglas fir, and the lighter toned areas are recently cleared areas consisting of tree stumps, shrubs, and various grasses, where essentially all trees have been removed during timber harvesting operations. Mottled, intermediate-toned areas have been replanted with Douglas fir trees and are at an intermediate growth stage.

Image Mapping

Thematic Mapper data have been used extensively to prepare image maps over a range of mapping scales. Such maps have proven to be useful tools for resource assessment in that they depict the terrain in actual detail, rather than in the line-and-symbol format of conventional maps. Image maps are often used as map supplements to augment conventional map coverage and to provide coverage of unmapped areas.

Several digital image processing procedures may be applied to the image mapping
process. These include large-area digital mosaicking, image enhancement procedures,
merging of image data with conventional cartographic information, and streamlining of
map production and printing using highly automated cartographic systems. Extensive
research continues in the area of image mapping with Landsat, SPOT, and IRS data, the
latter two of which employ pushbroom scanners. Stereo coverage with the desired
base-to-height (B/H) ratio is also possible. Spatial resolution has also improved, to 20 m
and 10 m with SPOT and to 23.6 m and 5.8 m with IRS-1C.
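As an illustrative sketch only (not a procedure from the original text), large-area digital mosaicking of adjacent georeferenced scenes can be performed with the open-source rasterio library; the scene file names below are hypothetical placeholders.

```python
# A minimal mosaicking sketch, assuming the rasterio library is installed and
# scene1.tif ... scene3.tif are adjacent georeferenced scenes (hypothetical paths).
import rasterio
from rasterio.merge import merge

paths = ["scene1.tif", "scene2.tif", "scene3.tif"]
sources = [rasterio.open(p) for p in paths]

# merge() assembles a single mosaic array and the affine transform of the output grid
mosaic, out_transform = merge(sources)

# Write the mosaic, copying metadata from the first scene and updating the grid size
meta = sources[0].meta.copy()
meta.update(height=mosaic.shape[1], width=mosaic.shape[2], transform=out_transform)
with rasterio.open("mosaic.tif", "w", **meta) as dst:
    dst.write(mosaic)

for src in sources:
    src.close()
```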

SPOT HRV & IRS Image Interpretation

The use of SPOT data for various interpretive purposes is facilitated by the
system's combination of multispectral sensing with excellent spatial resolution,
geometric fidelity, and the provision for multidate and stereo imaging.

Merging Data

An increase in the apparent resolution of SPOT and IRS multispectral images can be
achieved through the merger of multispectral and panchromatic data. For SPOT, a
20-m-resolution multispectral image of an agricultural area can be merged with
10-m-resolution panchromatic data; for IRS-1C, the 23.6-m multispectral (LISS-III) data
can be merged with the 5.8-m PAN data. The merged image maintains the colors of the
multispectral image but has a resolution equivalent to that of the panchromatic image.
Both the spatial and spectral resolution of the merged image approach that seen in
small-scale, high-altitude, color infrared aerial photographs.
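One common way to carry out such a merge is a Brovey (band-ratio) transform, in which each resampled multispectral band is scaled by the ratio of the panchromatic band to the sum of the multispectral bands. The sketch below is an illustrative implementation under the assumption that the multispectral bands have already been resampled to the panchromatic grid; it is not necessarily the specific merging algorithm used for the images described above.

```python
# Illustrative Brovey-transform pan-sharpening sketch. Assumes green, red, and nir
# are multispectral bands already resampled to the panchromatic pixel grid, and pan
# is the co-registered panchromatic band; all are float NumPy arrays.
import numpy as np

def brovey_sharpen(green, red, nir, pan, eps=1e-6):
    """Scale each multispectral band by pan / (sum of bands) to inject panchromatic
    spatial detail while roughly preserving the band ratios (colors)."""
    total = green + red + nir + eps          # eps avoids division by zero
    ratio = pan / total
    return green * ratio, red * ratio, nir * ratio

# Usage (hypothetical arrays):
# g_sharp, r_sharp, n_sharp = brovey_sharpen(green, red, nir, pan)
# The sharpened bands can then be displayed as a color infrared composite.
```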

Using the parallax resulting when SPOT & IRS‐1C data are acquired from two
different orbit tracks, perspective views of a scene can be calculated and displayed.
Perspective views can also be produced by processing data from a single image with
digital elevation data of the same scene.
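As an added illustration (not from the original text), a perspective view from a single image plus digital elevation data can be sketched by draping the image over the DEM surface. The example below uses synthetic arrays as stand-ins for a co-registered image band and elevation grid.

```python
# A minimal sketch: drape a single-band image over a DEM to render a perspective
# view. The 'dem' and 'image' arrays here are synthetic placeholders for real,
# co-registered elevation and image grids.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (enables 3-D projection)

ny, nx = 200, 200
x, y = np.meshgrid(np.linspace(0, 10, nx), np.linspace(0, 10, ny))
dem = 100 * np.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 8)   # stand-in terrain (m)
image = (np.sin(x) * np.cos(y) + 1) / 2                   # stand-in grey-scale band, 0-1

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
# facecolors drapes the image over the elevation surface
ax.plot_surface(x, y, dem, facecolors=plt.cm.gray(image),
                rstride=2, cstride=2, linewidth=0, antialiased=False)
ax.set_zlabel("Elevation (m)")
plt.show()
```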

INDIAN REMOTE SENSING SATELLITES

The Department of Space, Govt. of India, has launched the following remote sensing
satellites for natural resources surveys. Their details are given in the table below.

Table: Indian Remote Sensing Satellites

Satellite    Scanner                 Spatial                  Radiometric  Spectral bands          Application
                                     resolution               levels       (micrometer)
---------------------------------------------------------------------------------------------------------------------
IRS-1A &     LISS-I                  72.5 m                   128          0.45-0.52               Coastal and water mapping,
IRS-1B       LISS-II                 36.25 m                                                       soils, vegetation differences
                                                                           0.52-0.59               Green reflectance of healthy
                                                                                                   vegetation
                                                                           0.62-0.68               Chlorophyll absorption of plants
                                                                           0.77-0.86               Biomass survey and water bodies

IRS-P2       LISS-I &                65.48 m x 74.78 m        128          0.45-0.52               Same as above
             LISS-II (modified)      32.84 m (across track)                0.52-0.59
                                     x 37.39 m (along track)               0.62-0.68

IRS-1C &     LISS-III, 3 bands       23.6 m                   128          0.52-0.59, 0.62-0.68,   Same as above
IRS-1D                                                                     0.77-0.86
             LISS-III, mid-IR band   70.8 m                   128          1.55-1.70
             WIFS, 2 bands           188 m                    128          0.62-0.68, 0.77-0.86    Regional vegetation monitoring
             PAN, one band,          5.8 m                    64           0.5-0.75                Stereo coverage and
             off-nadir +/- 26 deg                                                                  3-D interpretation

IRS-P6       LISS-III, 4 bands       23.6 m                   128          0.52-0.59, 0.62-0.68,   Same as above
                                                                           0.77-0.86, 1.55-1.70
             AWIFS, 4 bands          56 m                     1024         0.52-0.59, 0.62-0.68,   Regional vegetation monitoring
                                                                           0.77-0.86, 1.55-1.70
             LISS-IV (mono and       5.8 m                    128 (out     0.52-0.59, 0.62-0.68,
             multispectral modes)                             of 1024)     0.77-0.86 (one band
                                                                           in mono mode)

Analysis of MSS Images

MSS images are interpreted in much the same manner as small-scale photographs, or
as images and photographs acquired from manned satellites. However,
there are some differences and potential advantages of MSS images. Linear features
caused by topography may be enhanced or suppressed on MSS images depending on
orientation of the features relative to sun azimuth. Linear features trending normal, or
at a high angle, to the sun azimuth are enhanced by shadows and highlights. Those
trending parallel with the azimuth are suppressed and difficult to recognize, as are linear
features parallel with the MSS scan lines.
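To make this geometric relationship concrete, the following sketch (an illustrative addition, not from the original text) computes the acute angle between a lineament trend and the sun azimuth and flags the lineament as likely enhanced or suppressed; the 30-degree threshold and the sample azimuth are assumed values.

```python
# Illustrative heuristic: flag lineaments as enhanced or suppressed on an MSS image
# from the acute angle between their trend and the sun azimuth.
def acute_angle_deg(trend_deg, sun_azimuth_deg):
    """Acute angle (0-90 deg) between a lineament trend and the illumination azimuth.
    Trends are undirected, so 30 deg and 210 deg describe the same lineament."""
    diff = abs(trend_deg - sun_azimuth_deg) % 180
    return min(diff, 180 - diff)

def shading_effect(trend_deg, sun_azimuth_deg, threshold_deg=30.0):
    """Lineaments at a high angle to the sun azimuth are enhanced by shadows and
    highlights; those nearly parallel to it tend to be suppressed."""
    angle = acute_angle_deg(trend_deg, sun_azimuth_deg)
    return "enhanced" if angle > threshold_deg else "suppressed"

# Example: a scene with an assumed sun azimuth of ~150 deg
for trend in (60, 145, 20):
    print(trend, shading_effect(trend, sun_azimuth_deg=150))
```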

Scratches and other film defects may be mistaken for natural features, but these
defects are identified by determining whether the questionable features appear on
more than a single band of imagery. Shadows of aircraft contrails may be mistaken for
tonal linear features but are recognized by checking for the parallel white image of the
contrail. Many questionable features are explained by examining several images
acquired at different dates. With experience, an interpreter learns to recognize linear
features of cultural origin, such as roads and field boundaries.
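The single-band test described above can be expressed as a simple rule: a questionable linear feature present in only one band is likely a film or data defect, while a feature present in several bands is more likely real. The sketch below is an added illustration that assumes boolean masks of the candidate feature have already been extracted from each band; the mask names are hypothetical.

```python
# Illustrative check: a candidate feature appearing in only one band is probably a
# defect; one appearing in several bands is more likely a real ground feature.
import numpy as np

def likely_defect(band_masks, overlap_threshold=2):
    """band_masks: list of boolean arrays marking the candidate feature in each band.
    Returns True if, on average, the feature coincides in fewer than
    `overlap_threshold` bands."""
    stacked = np.stack(band_masks)              # shape (n_bands, rows, cols)
    per_pixel_count = stacked.sum(axis=0)       # bands in which each pixel is flagged
    feature_pixels = per_pixel_count > 0        # pixels flagged in at least one band
    if not feature_pixels.any():
        return True
    mean_bands = per_pixel_count[feature_pixels].mean()
    return mean_bands < overlap_threshold

# Usage (hypothetical masks for three bands):
# print(likely_defect([mask_b4, mask_b5, mask_b7]))
```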

The recommended interpretation procedure in geology is to plot lineaments as dotted
lines on the interpretation map. Field checking and reference to existing maps
will identify some lineaments as faults; for these the dots are connected by solid lines on
the interpretation map. The remaining dotted lines may represent (1) previously
unrecognized faults, (2) zones of fracturing with no displacement, or (3) lineaments
unrelated to geologic structure.
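As a schematic illustration of this plotting convention (not part of the original text), the sketch below draws unverified lineaments as dotted lines and field-confirmed faults as solid lines; the coordinates are hypothetical.

```python
# Illustrative sketch of the lineament plotting convention: dotted lines for
# unverified lineaments, solid lines for field-confirmed faults.
import matplotlib.pyplot as plt

# Each entry: ((x0, x1), (y0, y1), confirmed_as_fault) -- hypothetical coordinates
lineaments = [
    ((0.0, 4.0), (1.0, 3.5), True),    # confirmed by field checking / existing maps
    ((1.0, 3.0), (4.0, 0.5), False),   # still only a lineament
    ((2.5, 5.0), (2.0, 4.5), False),
]

fig, ax = plt.subplots()
for xs, ys, confirmed in lineaments:
    style = "-" if confirmed else ":"   # solid for faults, dotted otherwise
    ax.plot(xs, ys, style, color="black")
ax.set_title("Lineament interpretation map (schematic)")
plt.show()
```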

The repeated coverage of Landsat enables interpreters to select images from the
optimum season for their purpose. Winter images provide minimum sun elevations and
maximum enhancement of suitably oriented topographic features. Such features are
also commonly enhanced on images of snow-covered terrain, because the snow
eliminates or suppresses tonal differences and minor terrain features, such as small
lakes. Areas with wet and dry seasonal climates should be interpreted from images
acquired in both seasons. In general, cloud-free rainy-season images are best for most
applications, but this selection may not apply everywhere.

The significance of colors on Landsat IR color images was described earlier in the section
on MSS images. For special interpretation objectives, black-and-white images of
individual bands are useful. Tables 2 and 4 give some specific applications of the TM and
IRS bands.

Points to remember

1. Cloud‐free MSS images are available for most of the world with no political or
security restrictions.

2. The low to intermediate sun angle enhances many subtle geologic features.

3. Long-term repetitive coverage provides images at different seasons and illumination
conditions.

4. The images are low in cost.

5. IR color composites are available for many of the scenes. With suitable equipment,
color composites may be made for any image.

6. Synoptic coverage of each scene under uniform illumination aids recognition of major
features. Mosaics extend this coverage.

7. There is negligible image distortion.

8. Images are available in a digital format suitable for computer processing.

9. Stereo coverage is limited, except with SPOT and IRS-1C.

10. TM provides images with improved spatial resolution, extended spectral range, and
additional spectral bands.

In addition to the applications shown in this chapter, Landsat images are
valuable for resource exploration, environmental monitoring, land‐use analysis, and
evaluating natural hazards.

Another major contribution of Landsat is the impetus it has given to digital image
processing. The availability of low‐cost multispectral image data in digital form has
encouraged the application and development of computer methods for image
processing, which are increasing the usefulness of the data for interpreters in many
disciplines.

Since the first launch in 1972, Landsat has evolved from an experiment into an
operational system. There have been steady improvements in the quality and utility of
the image data. Many users throughout the world now rely on Landsat, SPOT, and IRS
images as routinely as they do on weather and communication satellites. It is essential
that all remote sensing programs continue to provide images.
