
Gis For Disaster Monitoring Powerpoint - 2019

The document provides an overview of Geographic Information Systems (GIS) for disaster monitoring, detailing its definitions, components, and functions. It explains the representation of geographic data through raster and vector methods, as well as the benefits of using GIS in decision-making processes. Additionally, it discusses data acquisition, input, and the role of GPS in enhancing GIS capabilities.


FIDM 203 :GEOGRAPHIC INFORMATION SYSTEMS FOR DISASTER MONITORING I

AUSTIN ASARE (LECTURER)


A Geographic Information System (GIS)
Contents
• What is GIS?
• Components of GIS
• What are geographic data?
• How are geographic data represented?
• Functions of GIS
DEFINITIONS OF GEOGRAPHIC INFORMATION SYSTEMS (GIS)
Definition #1
• Over the past 20 to 30 years, many authors (Dept of the
Environment, 1987; Rhind, 1988; Parker, 1988; and Bolstad, 2002)
have defined GIS, and most of their definitions are similar to one
another.
• The definitions generally refer to a system of computer hardware,
software, and people that support the capture, management,
analysis, and display of spatial data.
• It is a decent definition, but to understand GIS better, you
should break it down into its four main subsystems:
GIS Subsystems
1. Data input: Get spatial and attribute data into the GIS
by collecting and preprocessing of data from various
sources.
2. Preprocessing: Organize data for retrieval and editing.
3. Analysis : Perform tasks on the data e.g. spatial analysis
to create information.
4. Output : Create thematic maps, models, and statistics.
Definition #2
• An even shorter definition equates GIS to a “spatial database”.
• To see this, think of a computer screen displaying a simple parcel map.
• The computer stores many database characteristics about each feature, such as the location of a cocoa farm and its owner’s name.
•In other words, there are two parts to a GIS: a map
(or spatial / location) component and an attribute
(or database) component.
•By making this LINK BETWEEN the map and
the stored attributes, GIS becomes a powerful tool
for addressing and analyzing geographic data and
environmental issues.
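The link between the map component and the attribute component can be sketched in a few lines of Python. This is a minimal illustration only: the parcel IDs, vertex coordinates, and owner names are invented for the example, not taken from any real dataset.

```python
# Sketch of the GIS "link": each map feature has a geometry (spatial
# component) and a shared ID that keys into an attribute table
# (database component). All values below are illustrative.

parcels_geometry = {   # spatial component: parcel ID -> polygon vertices (x, y)
    101: [(0, 0), (120, 0), (120, 80), (0, 80)],
    102: [(120, 0), (260, 0), (260, 80), (120, 80)],
}

parcels_attributes = {  # attribute component: parcel ID -> database record
    101: {"land_use": "cocoa farm", "owner": "K. Mensah"},
    102: {"land_use": "residential", "owner": "A. Owusu"},
}

def describe(parcel_id):
    """Join the two components through the shared ID, as a GIS does."""
    geom = parcels_geometry[parcel_id]
    attrs = parcels_attributes[parcel_id]
    return (f"Parcel {parcel_id}: {attrs['land_use']} "
            f"owned by {attrs['owner']}, {len(geom)} vertices")

print(describe(101))
```

Selecting a parcel on screen and seeing its owner's record, or querying "all cocoa farms" and seeing them highlighted on the map, are both just traversals of this ID link in one direction or the other.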
Definition #3
• A Geographic Information System (GIS) is any system that
integrates, captures, stores, analyzes, manages, and displays data
that is linked to location.
Definition #4
• An Information System that is used to input, store , retrieve,
manipulate, analyze and output geographically referenced data or
geospatial data, in order to support decision making for planning
and management of land use, natural resources, environment,
transportation, urban facilities, and other administrative records.
• In this context, GIS can also be defined as a tool of exploration that
helps us explore geographic (or spatial) patterns and aids us in
describing these patterns.

• GIS can go beyond simple description to help us investigate and understand why these patterns (sometimes called distributions) exist, the impacts these patterns have on our life and land, and to discover potential future geographic patterns.
FEATURES
• GIS has a spatial or map component and an attribute or database
component.
• Features have these two components as well.
• They are represented spatially on the map and their attributes,
describing the features, are found in a data file.
• These two parts are linked. In other words, each map feature is
linked to a record in a data file that describes the feature.
• Features are arranged in data files often called layers, coverages, or
themes.
How are geographic data represented?
Rasters and vectors

• Two methods of representing geographic data in digital form are raster and
vector.
• Raster: In a raster representation, geographic space is divided into a
rectangular array of cells, which are usually square. All geographic variation
is expressed by assigning properties or attributes to these cells. The cells are
sometimes called pixels (short for picture elements).
• One common form of raster data comes from remote sensing satellites.
Data from the Landsat satellite, for example, which is commonly used in GIS
applications, come in cells that are 30 meters on a side on the ground, or
approximately one-tenth of a hectare in area.
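The hectare figure quoted for Landsat follows directly from the cell size, as this two-line check shows:

```python
# A Landsat cell is 30 m on a side (from the text above).
cell_side_m = 30
area_m2 = cell_side_m ** 2      # 900 square metres per cell
area_ha = area_m2 / 10_000      # one hectare = 10,000 square metres
print(area_ha)                  # 0.09 ha, roughly one-tenth of a hectare
```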
Raster
• A matrix of rows and columns, the raster data
model covers sections of the Earth’s surface and
represents features with cells or pixels.
• Pixels are the building blocks of the raster data
model, and they are usually uniformly square
and of consistent size within each layer.
• Each pixel represents a precise chunk of the
Earth’s surface; the geographic position of any
cell can be determined.
•A specific attribute value,
representing the condition of that
specific portion of the Earth’s surface
(see figure 1.8), is associated with the
pixel.
•Individual cells and groups of cells
represent the features of the real
world (Figure 1.8).
•A point feature usually fills one cell
while lines and polygons are
constructed as a string or contiguous
group of cells.
•Raster layers fill space; they describe
what occurs everywhere in the study
area.
• There are no blank spaces across
the layer.
•“Empty” areas simply get a “0”
value, but every pixel gets a value.
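The raster ideas above (cells holding attribute codes, "empty" cells getting 0, a point filling one cell, a line as a contiguous run of cells, and each cell mapping to a ground position) can be sketched as follows. The land-cover codes, grid origin, and 30 m cell size are all illustrative assumptions.

```python
# A tiny raster layer as a grid of cells (rows x columns), each holding
# a hypothetical land-cover code: 0 = "empty"/no class, 1 = water,
# 2 = forest. Every pixel gets a value; there are no blanks.
rows, cols = 5, 5
layer = [[0] * cols for _ in range(rows)]   # "empty" areas start as 0

for r in range(0, 2):                       # a patch of water cells
    for c in range(0, 3):
        layer[r][c] = 1
layer[3][1] = 2                             # a point feature fills one cell
layer[4] = [2] * cols                       # a line as a contiguous run of cells

# Each cell maps to a precise ground position, given an origin and a
# cell size (both illustrative here).
origin_x, origin_y, cell = 500_000.0, 700_000.0, 30.0

def cell_centre(row, col):
    """Ground coordinates of a cell centre; rows count down from the top."""
    x = origin_x + col * cell + cell / 2
    y = origin_y - row * cell - cell / 2
    return x, y

print(cell_centre(3, 1), layer[3][1])
```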
• Vector data
In a vector representation, all lines are captured as points connected by straight lines.
• An area is captured as a series of points or vertices connected by
straight lines as shown below. The straight edges between
vertices explain why areas in vector representation are often
called polygons, and the terms polygon and area are often used
interchangeably.
• Lines are captured in the same way, and the term "polyline" has
been coined to describe a curved line represented by a series of
straight segments connecting vertices.
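The vector model described above can be made concrete with a short sketch: a polyline is a sequence of vertices joined by straight segments, and a polygon is a closed ring of vertices. The coordinates are illustrative map units, and the shoelace formula used for the area is a standard method, not something specific to this course.

```python
import math

# A "polyline": vertices joined by straight segments.
polyline = [(0.0, 0.0), (3.0, 4.0), (3.0, 10.0)]

def length(verts):
    """Sum the straight-segment lengths between consecutive vertices."""
    return sum(math.dist(a, b) for a, b in zip(verts, verts[1:]))

# A polygon: a closed ring of vertices (closure is implicit here).
polygon = [(0, 0), (10, 0), (10, 5), (0, 5)]

def ring_area(ring):
    """Shoelace formula for the area of a simple polygon."""
    n = len(ring)
    s = sum(ring[i][0] * ring[(i + 1) % n][1]
            - ring[(i + 1) % n][0] * ring[i][1]
            for i in range(n))
    return abs(s) / 2

print(length(polyline))    # 5.0 + 6.0 = 11.0
print(ring_area(polygon))  # a 10 x 5 rectangle: 50.0
```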
Points
• Points are zero dimensional features (meaning that
they possess only one x, y coordinate set) whose
location is depicted by a small symbol.
• What you represent as a point depends on your
study.
• Examples include streetlights, individual trees, wells, crimes, telephone poles, earthquake epicenters, and even, depending on scale, buildings and cities.
Polygons
• What is represented as a polygon differs from study to study, but examples include lakes, forest stands, cocoa farms, buildings, countries, states, and census districts.
BENEFITS OF GIS
• Revision and updating are easier.
• Geospatial data and information are easier to search, analyze, and represent.
• Geospatial data can be shared and exchanged freely.
• Time and money are saved.
• Better decisions can be made.
THE BASIC ELEMENTS OF A GIS
• A GIS is a 5-part system. Its components are:
– People
– Data
– Hardware
– Software
– Procedures
Six Functions of a GIS:
• Capture data
• Store data
• Query data
• Analyze data (including spatial measurement)
• Display data
• Produce output
DATA INPUT
6 key questions to ask yourself in planning a GIS database:
• What are your key features?
• What are the project's spatial extent, scale, and temporal extent?
• For each feature type (layer), what attributes do you need to collect?
• For each feature type, how should the features be coded (points, lines, or polygons)? How should the attributes be coded?
• What base map features will provide reference?
• What projection, coordinate system, and datum will you use?
Data input gets spatial and attribute data into the GIS. Here you collect and preprocess data from various sources.
1. Determine Your Features
• What features are necessary?
• Think back to your project’s goals. If you want to analyze a
particular plant species’ distribution, it may be necessary to have a
feature devoted to the specific plant type.
• Equally important, however, are the other features—nearby plant
species, soil types, climate conditions, land tenure practices, and
landform conditions like slope and aspect.
• These other features, along with many others, play a role in the
distribution of your plant.
• The question of which features to use, however, may seem simple,
but frequently it can be complicated.
• For example, you might want to interview hundreds of people in
Sunyani Fiapire about their family income and quality of life.
• Is it appropriate to construct a layer with a point location at the home of each respondent, or would it be better to aggregate the responses of the individuals into neighborhood or census tract boundaries?
• In this scenario, your feature layer might be called “respondents”,
and each point feature would be located at the home of an individual
respondent.
• Attributes would be stored within each individual feature.
2.Determine the Project’s Spatial Extent, Scale, and Temporal Extent

• You must determine the area and the period in which your
project focuses.
• If you are working in cocoa-growing areas, you must know their areal extent.
• Sometimes it is important to go beyond the area extent to surrounding spheres of influence, because these areas may have a direct impact on the cocoa.
• Along with the project’s spatial extent, you
should think about an appropriate scale.
• There is a relationship between scale and
detail.
• Small- scale maps depict large territories,
but they usually are less precise and may
require that some reference layers be left
out.
• Large- scale maps show smaller areas but
comparatively include more detail.
• Although GIS allows one to zoom in at
increasingly larger scales, data captured at
a “small scale” become inherently
inaccurate when zoomed in on.
• Similarly, you may want to define a temporal extent.
• Is time an important variable in your study?
• Most GIS projects focus on the contemporary scene and ignore the
past.
• If, however, you want to determine how much an area has changed,
you need to define a period for your project.
3. Determine the Attributes for Each Feature Type
• As described in Chapter 1, attributes are the characteristics of features.
• You need to identify the required attributes for each feature type.
• The more you can do this before you collect your data, the less you will retrace your steps
and collect additional attributes later.
• Again, look to the project’s goals for some clues to the necessary attributes.
• Also consider how you will analyze the features. You cannot use some analytical processes
(like many statistical tests) if the attribute values that you collect are in an improper form to
be used in a particular analytical process (i.e. consider the feature’s levels of measurement).
• One other thing to consider at this point is that some attributes (like a polygon's area and a line's length) can be generated automatically by the software.
4. Determine How the Features and Their Attributes should be
Coded
• Once you have decided on the features and their attributes, determine
how they will be coded in the GIS database.
• As described earlier, there is not just one way to code features; roads, for example, are usually coded as lines, but need not be.
• Decide whether to code each feature type as a point, line, or polygon.
• Then define the format and storage requirements for each of the feature's attributes. For instance, is the attribute going to be in characters (string) or numbers?
• If they are going to be numbers, are they byte, integer, or real numbers? You will have to establish these database parameters before entering data.
•Look at the example below (Figure
2.4).
•Listed are some attributes (under
Field Name) relating to the feature
“streets”.
•Notice that street “LENGTH” has a
data type called double (a type of real
numbers), and in this case, the
database will store up to 18 numbers
including 5 decimal places for the
length of each individual street.

Figure 2.4: Each feature’s attribute needs to be coded.
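The kind of per-field decision shown in Figure 2.4 (string vs. numeric, and at what precision) can be sketched with a small attribute table. SQLite's column types stand in here for the byte/integer/double choices a GIS database offers; the table layout and the street record are illustrative, not taken from the figure.

```python
import sqlite3

# Sketch of coding attribute storage before data entry: per field, decide
# whether values are text (string) or numeric, and at what precision.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE streets (
        street_id INTEGER PRIMARY KEY,  -- integer key linking map feature to record
        name      TEXT,                 -- character (string) attribute
        lanes     INTEGER,              -- small whole number
        length_m  REAL                  -- a real ("double") number, e.g. 1234.56789
    )
""")
con.execute("INSERT INTO streets VALUES (1, 'Sunyani Road', 2, 1234.56789)")
row = con.execute("SELECT name, length_m FROM streets").fetchone()
print(row)
```

Fixing these parameters up front matters because changing a field's type after thousands of records have been entered means reloading or converting the data.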


5. Determine the Base Map Reference Features
• What features are helpful to include? Add reference features that
help people orient themselves within your study area even if you are
not going to analyze these features.
• Major roads, rivers, and principal buildings are good examples of
features that help orient map readers.
• These secondary features are often the easiest features to find on the
Internet, and sometimes they are bundled with GIS software.
• In short, having these base- map features may not be important for
analysis, but they are important for clarity.
6. Determine your Project’s Projection, Coordinate System,
and Datum
• Before you collect or look for data, you should decide on which
projection, coordinate system, and datum to use.
• These three settings, collectively termed “projection parameters”, must remain consistent throughout your layers or themes.
• Consistency enables you to properly overlay your feature layers to produce maps and analyze feature relationships.
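A consistency check of the projection parameters across layers is easy to automate. The layer names and parameter values below are illustrative (a UTM zone covering much of Ghana is used as the example), not a prescription for any particular project.

```python
# Sketch: verify all layers share the same projection parameters
# before overlaying them. Layer metadata is illustrative.
layers = {
    "cocoa_farms": {"projection": "UTM", "zone": "30N", "datum": "WGS84"},
    "rivers":      {"projection": "UTM", "zone": "30N", "datum": "WGS84"},
    "roads":       {"projection": "UTM", "zone": "30N", "datum": "WGS84"},
}

def consistent(layers):
    """True when every layer has identical projection parameters."""
    params = list(layers.values())
    return all(p == params[0] for p in params[1:])

print(consistent(layers))   # safe to overlay only if this is True
```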
DATA ACQUISITION

•In the data acquisition phase, you obtain the data for your
GIS.
•Getting all the data together (and in a suitable format) is the
most costly and time- consuming task for any GIS project.
• Most estimates suggest that 75 to 80 percent of your time is spent collecting, entering, cleaning, and converting data.
Sources of Data

• Remote sensing
• Ground Truthing\ Field survey with GPS
• Aerial photographs \ photogrammetry \ Drones
• Scanning of old maps and georeferencing
• Surveying
• Digitization
Google Maps and Google Earth
• Google Maps is a basic web mapping service application and technology provided by Google.
• It offers street maps, a route planner for traveling by foot, car, or public transport, and an urban business locator for numerous countries around the world.
• The big advantages of using Google are: it is free, generally up to date, easy to learn, developer/programmer friendly, platform independent, and open to a large user community.
• Users can freely view data, images or geographical
information on the Google Earth platform.
• Google Earth displays satellite images of varying
resolution of the Earth's surface, allowing users to
see things like cities.
• The degree of resolution available is based
somewhat on the points of interest and popularity.
• For example, it can vary from less than 15 m in the countryside to less than 1–2 feet (30–60 cm) in major US cities.
• The internal coordinate system of Google Earth is geographic
coordinates (latitude/longitude) on the World Geodetic System of
1984 (WGS84) datum.
• GIS data can be exported into Keyhole Markup Language (KML)
file format and viewed on Google Earth.
• Alternatively, Google Earth derived .tiff, .jpeg and other graphic files
can be geo-referenced using some reference points.
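Exporting a GIS point feature to KML is simple enough to sketch by hand. Note the KML convention: coordinates are written as longitude,latitude (on WGS84), the reverse of the usual spoken order. The farm name and coordinates below are illustrative values near Sunyani, Ghana.

```python
# Sketch: write one point feature as a minimal KML document that
# Google Earth can open. Name and coordinates are illustrative.
def point_to_kml(name, lon, lat):
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>
</kml>"""

doc = point_to_kml("Cocoa farm plot", -2.326, 7.340)
print(doc.splitlines()[0])
```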
Metadata
• Metadata is a data quality document, and its frequently repeated
definition is “data about data” (although it is perhaps more accurate
to define it as “information about data”).
• It describes the attributes and the location of the features in the layer.
It gives you an impression about the dataset’s accuracy and
precision.
PHASE 3: DATA CAPTURE
• Creating new GIS datasets from both digital data that are not
currently in a GIS format and from non- digital or hard- copy data
sources.
• Examples of digital and non- digital data sources include maps
(hard- copy and digital), aerial photographs (hard- copy and digital),
questionnaires, field observations, digital satellite imagery, survey
data, and Global Positioning System (GPS) coordinates.
GLOBAL POSITIONING
SYSTEM (GPS)
• During the 1970s, a new and unique approach to
surveying, the global positioning system (GPS),
emerged.
• GPS relies upon signals transmitted from satellites for
its operation.
• It provides precise timing and positioning information
anywhere on the Earth with high reliability and low
cost.
• GPS (Global Positioning System) is a radio-
based navigation system that uses GPS
receivers to compute accurate locations on
the Earth’s surface from a series of orbiting
satellites.

Typical low- end GPS device


DATA COLLECTION
DATA INPUT IN EXCEL
OVERVIEW OF GPS
• As I have already indicated, precise distances from the satellites to the receivers are determined from timing and signal information, enabling receiver positions to be computed.
• In satellite surveying, the satellites become the reference or
control stations, and the ranges (distances) to these satellites are
used to compute the positions of the receiver.
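The idea of fixing a position from ranges to known satellite positions can be sketched numerically. This is a deliberately simplified 2-D analogue: real GPS solves in three dimensions and also estimates the receiver clock bias as a fourth unknown. All positions and ranges below are invented for the sketch.

```python
import math

# Known "satellite" positions and the measured ranges to an unknown
# receiver (2-D analogue; values in kilometres, all illustrative).
sats = [(0.0, 20200.0), (15000.0, 18000.0), (-12000.0, 17000.0)]
true_pos = (1000.0, 2000.0)
ranges = [math.dist(s, true_pos) for s in sats]   # "measured" ranges

# Gauss-Newton iteration: start from a guess, repeatedly correct it so
# the predicted ranges match the measured ones.
x, y = 0.0, 0.0
for _ in range(20):
    preds = [math.dist(s, (x, y)) for s in sats]
    # Build the 2x2 normal equations for the least-squares correction.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (sx, sy), p, r in zip(sats, preds, ranges):
        ux, uy = (x - sx) / p, (y - sy) / p   # unit line-of-sight vector
        res = r - p                           # range residual
        a11 += ux * ux; a12 += ux * uy; a22 += uy * uy
        b1 += ux * res; b2 += uy * res
    det = a11 * a22 - a12 * a12
    dx = (a22 * b1 - a12 * b2) / det
    dy = (a11 * b2 - a12 * b1) / det
    x, y = x + dx, y + dy

print(round(x), round(y))   # recovers the true position
```

The same least-squares machinery, extended to three coordinates plus the clock term, is what a real receiver solves from its pseudoranges.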
• The global positioning system can be arbitrarily broken into three
parts or segments:
The space segment
• The space segment consists nominally of 24 satellites
operating in six orbital planes spaced at 60° intervals
around the equator.
• Four additional satellites are held in reserve as spares.
The orbital planes are inclined to the equator at 55°.

Figure: The GPS constellation


• This configuration provides 24-h satellite coverage
between the latitudes of 80°N and 80°S.
• The satellites travel in near-circular orbits that have a
mean altitude of 20,200 km above the Earth and an
orbital period of 12 sidereal hours.
• The individual satellites are normally identified by their
satellite vehicle number (SVN) or orbital position.
CONTROL SEGMENT
• The control segment consists of monitoring stations which monitor the signals and track the positions of the satellites over time.
• The initial GPS monitoring stations are at Colorado Springs, and on the islands of Hawaii, Ascension, Diego Garcia, and Kwajalein.
USER SEGMENT
• The user segment in GPS consists of two categories of receivers, classified by their access to the two services that the system provides. These services are referred to as the Standard Positioning Service (SPS) and the Precise Positioning Service (PPS).
ERRORS IN OBSERVATIONS
Electromagnetic waves can be affected by several sources of error during their transmission. Some of the larger errors include:
• satellite and receiver clock biases
• selective availability
• ionospheric and tropospheric refraction
• satellite geometry
• multipathing

Ionospheric and tropospheric refraction
• the velocities of electromagnetic waves change as they
pass through media with different refractive indexes. The
atmosphere is generally subdivided into regions.
• The subregions of the atmosphere that have similar
composition and properties are known as spheres.
• The boundary layers between the spheres are called
pauses. The two spheres that have the greatest effect on
satellite signals are the troposphere and ionosphere.
• The troposphere is the lowest part of the atmosphere,
and is generally considered to exist up to 10–12 km in
altitude.
• The tropopause separates the troposphere from the
stratosphere. The stratosphere goes up to about 50 km.
• The combined refraction in the stratosphere,
tropopause, and troposphere is known as tropospheric
refraction.
• As the satellite signals pass through the ionosphere and
troposphere, they are refracted.
• The ionosphere is primarily composed of ions—
positively charged atoms and molecules, and free
negatively charged electrons.
• The free electrons affect the propagation of
electromagnetic waves. The number of ions at any given
time in the ionosphere is dependent on the sun’s
ultraviolet radiation.
• Since ionospheric refraction is the single largest error in satellite positioning, it is important to check space weather conditions when performing surveys.

Satellite geometry
• Errors also arise from the geometric arrangement of the satellites relative to the receiver: tightly clustered satellites yield weaker position fixes than well-spread ones.
• Multipathing
• This occurs when the signal emitted by the satellite arrives at the
receiver after following more than one path.
• It is generally caused by reflective surfaces near the receiver or
when a satellite signal reflects from a surface and is directed
toward the receiver.
• This causes multiple signals from a satellite to arrive at the
receiver at slightly different times.
• Vertical structures such as buildings and chain link fences are
examples of reflecting surfaces that can cause multipathing
errors.
SELECTIVE AVAILABILITY
• Until May of 2000, GPS signals were intentionally degraded to reduce the accuracies achievable using the code-matching method.
• The intent was to exclude the highest accuracy attainable with GPS from nonmilitary users, especially adversaries.
REMOTE SENSING
OUTLINE
• Remote Sensing definition
• Elements involved in Remote Sensing
• Electromagnetic Radiation (Energy) (EMR)
• Resolution types
• Sensors and platforms
• Types of remote sensing / types of sensors
• Remote Sensing applications
• Image preprocessing
• Application of spectral indices and hands-on computation in ArcMap
• NDVI change detection
• Image classification
• Post-classification accuracy assessments / confusion matrix
Remote Sensing
• Definition #1
– The art and science of obtaining information about an object without physical contact between the object and the sensor.
– The process of collecting information about Earth surfaces and phenomena using sensors not in physical contact with the surfaces and phenomena of interest.
– There is a medium of transmission involved, i.e. the Earth's atmosphere.
• Definition #2
– Remote Sensing is a technology for sampling electromagnetic radiation to acquire and interpret non-immediate geospatial data from which to extract information about features, objects, and classes on the Earth's land surface, oceans, and atmosphere.
Elements involved in Remote Sensing
1. Energy Source or Illumination (A)
2. Radiation and the Atmosphere (B)
3. Interaction with the Object (C)
4. Recording of Energy by the Sensor (D)
5. Transmission, Reception and Processing (E)
6. Interpretation and Analysis (F)
7. Application (G)
Remote sensing cycle
• Remote Sensing includes:
– A) The mission plan and choice of sensors
– B) The reception, recording, and processing of the signal data
– C) The analysis of the resultant data
Some Remote Sensors

Electromagnetic Spectrum
(Remote Sensing & GIS Applications Directorate)

Electromagnetic Radiation

Spectral signature
The pattern of electromagnetic radiation that identifies a chemical or compound.
RESOLUTION
Data Resolution
• A major consideration when choosing a sensor type is the definition of
resolution capabilities.
• “Resolution” in remote sensing refers to the ability of a sensor to distinguish or
resolve objects that are physically near or spectrally similar to other adjacent
objects.
• The term high or fine resolution suggests that there is a large degree of
distinction in the resolution.
• High resolution will allow a user to distinguish small, adjacent targets.
• Low or coarse resolution indicates a broader averaging of radiation over a
larger area (on the ground or spectrally).
• Objects and their boundaries will be difficult to pinpoint in images with coarse
resolution.
• The four types of resolution in remote sensing include spatial, spectral,
radiometric, and temporal.
Sensor-platform characteristics
1. Spectral resolution = part of the EM spectrum measured.
2. Radiometric resolution = smallest differences in energy that can be measured.
3. Spatial resolution = smallest unit-area measured.
4. Revisit time (temporal resolution) = time between two successive image acquisitions over the same area.
(1) Spatial Resolution
• (a) An increase in spatial resolution corresponds to an increase in the ability to resolve one feature physically from another.
• The Earth-surface area covered by a pixel of an image is known as the spatial resolution.
• A large area covered by a pixel means low spatial resolution, and vice versa.
• (b) Spatial resolution is best described by the size of an image pixel.
• A pixel is a two-dimensional, square-shaped picture element displayed on a computer.
• For example, if a project requires the discernment of individual trees, the spatial resolution should be a minimum of 15 m.
• If you need to know the percent of timber stands versus clear cuts, a resolution of 30 m will be sufficient.
Source: Jensen (2000)

High vs. Low?
Example imagery:
• AVHRR (Advanced Very High Resolution Radiometer), NASA
• GOES (Geostationary Operational Environmental Satellites), IR 4
• MODIS (250 m)
• Landsat TM (False Color Composite)
• SPOT (2.5 m)
• QUICKBIRD (0.6 m)
• IKONOS (4 m Multispectral)
• IKONOS (1 m Panchromatic)
• RADAR (Radio Detection and Ranging)
• LIDAR (Light Detection and Ranging)

(2) Spectral Resolution
• Spectral resolution is the size and number of wavelengths, intervals, or divisions of the spectrum that a system is able to detect.
• It is the ability to resolve spectral features and bands into their separate components.
• More bands within a specified bandwidth means higher spectral resolution, and vice versa.
• Fine spectral resolution generally means that it is possible to resolve a large number of similarly sized wavelengths, as well as to detect radiation from a variety of regions of the spectrum.
• A coarse resolution refers to large groupings of wavelengths and tends to be limited in frequency range.
(3) Radiometric Resolution
• Radiometric resolution is a detector's (sensor's) ability to distinguish differences in the strength of emitted or reflected electromagnetic radiation.
• A high radiometric resolution allows for the distinction between subtle differences in signal strength.
• The sensitivity of the sensor to the magnitude of the received electromagnetic energy determines the radiometric resolution.
• A sensor has a finer radiometric resolution if it is more sensitive in detecting small differences in reflected or emitted energy.
Radiometric Resolution
2-bit range: 0–3
6-bit range: 0–63
8-bit range: 0–255
10-bit range: 0–1023
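The digital-number ranges above follow directly from the bit depth: an n-bit sensor records values from 0 to 2^n - 1.

```python
# Number of distinguishable levels for an n-bit sensor:
# digital numbers run from 0 to 2**n - 1.
def dn_range(bits):
    return 0, 2 ** bits - 1

for bits in (2, 6, 8, 10):
    print(f"{bits}-bit range:", dn_range(bits))
```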
(4) Temporal Resolution
• (a) Temporal resolution refers to the frequency of data collection.
• Data collected on different dates allows for a comparison of surface
features through time.
• Frequency at which images are recorded/ captured in a specific place on
the earth.
• The more frequently it is captured, the better or finer the temporal
resolution is said to be.
• For example, a sensor that captures an image of an agriculture land twice
a day has better temporal resolution than a sensor that only captures
that same image once a week.
• If a project requires an assessment of change, or change detection,
it is important to know:
1) how many data sets already exist for the site;
2) how far back in time the data set ranges; and
3) how frequently the satellite returns to acquire the same location.
• (b) Most satellite platforms will pass over the same spot at regular
intervals that range from days to weeks, depending on their orbit
and spatial resolution.
• A few examples of projects that require change detection are the
growth of crops, deforestation, sediment accumulation in
estuaries, and urban development.
Temporal Resolution

[Figure: two revisit timelines, one with a 16-day interval (July 2, July 18, August 3) and one with an 11-day interval (July 1, July 12, July 23, August 3)]

• (5) Determine the Appropriate Resolution for the Project.
• Increasing resolution tends to lead to more accurate and useful
information; however, this is not true for every project.
• The downside to increased resolution is the need for increased
storage space and more powerful hardware and software.
• High-resolution satellite imagery may not be the best choice when
all that is needed is good quality aerial photographs.
• It is, therefore, important to determine the minimum resolution
requirements needed to accomplish a given task from the outset.
This may save both time and funds.
SOME KNOWN SATELLITES and their spatial resolution
• NOAA-AVHRR (1100 m)
• GOES (700 m)
• MODIS (250, 500, 1000 m)
• Landsat TM and ETM+ (15 m – 100 m)
• SPOT (10 – 20 m)
• IKONOS (4 m multispectral, 1 m panchromatic)
• QuickBird (0.6 m)
• Sentinel-2A (10 – 60 m)
• ALOS (20 m)
• ASTER (4 m …)
SENSORS
Passive sensors:
• Landsat
• ASTER
• QuickBird
• IKONOS
• Sentinel-2A
Active sensors:
• LIDAR
• RADAR
Platforms
Platforms are:
• Ground based
• Airborne
• Spaceborne
Sensing ranges from 1 meter to 36,000 km in height.
Application of Remote Sensing
Natural resource management:
• Forestry: biodiversity, forest cover, deforestation
• Water resource management
• Habitat analysis
• Environmental impact assessment
• Pest/disease outbreaks
• Impervious surface mapping
• Hydrology
• Mineral provinces
• Geomorphology
SCOPE: Forestry
• Satellite image based forest resource mapping and updating
• Forest change detection
• Forest resource inventory
• GIS database development
• Land cover mapping

Benefits
• Availability of baseline information
• Planning for afforestation strategies
• Futuristic resource planning
• Sustainability of the environment
• Wildlife conservation and development for recreation purposes
(Examples: Sarhad Reserve Forest, Ghotki; Nausharo Firoz)
Application of Remote Sensing
Urbanization & Transportation:
• Urban planning
• Road networks and transportation planning
• City expansion
• City boundaries over time
• Wetland delineation
Image source: www.geospectra.net
Application of Remote Sensing
Agriculture
The applications of remote sensing in agriculture include:
- Soil sensing
- Farm classification
- Farm condition assessment: NDVI
- Agricultural estimation
- Mapping of farms, e.g. cocoa farms, cashew farms, etc.
- Mapping of land management practices
- Compliance monitoring
Creation of the Burn Area Reflectance Classification (BARC): Rapid Assessment of Vegetation Condition after Wildfire
(Image panels: Prefire, Postfire, dNBR, BARC)
Normalized Burn Ratio (NBR) and Differenced Normalized Burn Ratio (dNBR):

NBR = (NIR – SWIR) / (NIR + SWIR)
dNBR = Pre NBR – Post NBR

Normalized Difference Vegetation Index (NDVI) and Differenced NDVI (dNDVI):

NDVI = (NIR – Red) / (NIR + Red)
dNDVI = Pre NDVI – Post NDVI

dNDVI is utilized when an appropriate SWIR band is not available.
BARC severity classes: Unburned to Low, Low, Moderate, High.
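The index formulas above can be applied per pixel; the band reflectance values below are made up purely for the sketch, chosen so that NIR drops and SWIR/Red rise after a fire, as they typically do over burned vegetation.

```python
# Per-pixel computation of NBR, NDVI and their differenced versions.
def normalized_diff(a, b):
    """Generic normalized difference, shared by NBR and NDVI."""
    return (a - b) / (a + b)

# One hypothetical burned pixel (illustrative reflectances).
nir_pre, swir_pre, red_pre = 0.45, 0.20, 0.10
nir_post, swir_post, red_post = 0.25, 0.35, 0.30

nbr_pre = normalized_diff(nir_pre, swir_pre)
nbr_post = normalized_diff(nir_post, swir_post)
dnbr = nbr_pre - nbr_post          # large positive dNBR suggests burning

ndvi_pre = normalized_diff(nir_pre, red_pre)
ndvi_post = normalized_diff(nir_post, red_post)
dndvi = ndvi_pre - ndvi_post       # fallback when no SWIR band exists

print(round(dnbr, 3), round(dndvi, 3))
```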
Burned Area Emergency Response (BAER) Imagery Support
BAER Support Rapid Delivery Products:
• Prefire Image
• Postfire Image
• dNBR or dNDVI Image
• BARC Image
• 3D Visualizations
• Map Products
DIGITAL IMAGE PROCESSING
INTRODUCTION: DIGITAL IMAGE PROCESSING
• Image processing in the context of remote sensing refers to the management of digital images, usually satellite images or digital aerial photographs.
• Image processing includes the display, analysis, and manipulation of digital image computer files.
• An image analyst relies on knowledge of the physical and natural sciences for aerial-view interpretation, combined with knowledge of the nature of the digital data.
IMAGE PROCESSING SOFTWARE
Some image processing software packages:
• ERDAS Imagine
• ENVI
• ILWIS
• ArcGIS
• QGIS
• SAGA
• GRASS
• IDRISI

METADATA
• Metadata is simply ancillary information about the characteristics of the data; in other
words, it is data about the data.
• It describes important elements concerning the acquisition of the data as well as any post-
processing that may have been performed on the data.
• Metadata is typically a digital file that accompanies the image file or it can be a hardcopy of
information about the image.
• Metadata files document the source (i.e., Landsat, SPOT, etc.), date and time, projection,
precision, accuracy, and resolution of the image.
• It is the responsibility of the vendor and the user to document any changes that have been
applied to the data.
• Without this information the data could be rendered useless.
Spectral Bands
• Sensors collect wavelength data in bands.
• A number or a letter is typically assigned to a band.
• For instance, radiation that spans 0.45 to 0.52 μm is designated as band 1 for Landsat 7 data;
in the microwave region radiation spanning 15 to 30 cm is termed the L-band.
• Not all bands are created equally.

• Landsat band 1 (B1) does not represent the same wavelengths as SPOT’s B1.
• Band numbers are not the same as sensor numbers.

• For instance Landsat 4 does not refer to band 4. It instead refers to the fourth satellite sensor
placed into orbit by the Landsat program.

• It is important to know which satellite program and which sensor collected the data.
Individual DNs can be identified in each spectral band of an image. In this
example the seven bands of a subset from a Landsat image are displayed.
Table 1: Spectral characteristics of Landsat 8 and Sentinel-2 bands

Landsat-8 / OLI
Band                      Wavelength (μm)   Resolution (m)
Band 1 - Coastal Aerosol  0.43-0.45         30
Band 2 - Blue             0.45-0.51         30
Band 3 - Green            0.53-0.59         30
Band 4 - Red              0.64-0.67         30
Band 5 - NIR              0.85-0.88         30
Band 6 - SWIR 1           1.57-1.65         30
Band 7 - SWIR 2           2.11-2.29         30
Band 8 - Panchromatic     0.50-0.68         15
Band 9 - Cirrus           1.36-1.38         30
Band 10 - TIRS 1          10.60-11.19       100 (30*)
Band 11 - TIRS 2          11.50-12.51       100 (30*)

Sentinel-2A / MSI
Band                      Central Wavelength (μm)   Resolution (m)
Band 1 - Coastal Aerosol  0.443                     60
Band 2 - Blue             0.490                     10
Band 3 - Green            0.560                     10
Band 4 - Red              0.665                     10
Band 5 - Red Edge 1       0.705                     20
Band 6 - Red Edge 2       0.740                     20
Band 7 - Red Edge 3       0.783                     20
Band 8 - NIR 1            0.842                     10
Band 8A - NIR 2           0.865                     20
Band 9 - Water Vapour     0.945                     60
Band 10 - Cirrus          1.375                     60
Band 11 - SWIR 1          1.610                     20
Band 12 - SWIR 2          2.190                     20
Table 1: Spectral characteristics of Landsat 5 and 7 bands

Landsat-7 / ETM+
Band                   Wavelength (μm)   Resolution (m)
Band 1 - Blue          0.45-0.51         30
Band 2 - Green         0.53-0.59         30
Band 3 - Red           0.64-0.67         30
Band 4 - NIR           0.85-0.88         30
Band 5 - SWIR 1        1.57-1.65         30
Band 6 - Thermal IR    10.40-12.50       30
Band 7 - SWIR 2        2.11-2.29         30
Band 8 - Panchromatic  0.50-0.68         15

Landsat-5 / TM
Band                      Wavelength (μm)   Resolution (m)
Band 1 - Blue             0.45-0.51         30
Band 2 - Green            0.53-0.59         30
Band 3 - Red              0.64-0.67         30
Band 4 - NIR              0.85-0.88         30
Band 5 - SWIR 1 / Mid IR  1.57-1.65         30
Band 6 - Thermal IR       10.40-12.50       30
Band 7 - SWIR 2 / Mid IR  2.11-2.29         30
Table 2: Band numbers with similar spectral response characteristics across the three sensors

Spectral   Landsat-8 band     Landsat-7 band     Sentinel-2 band
response   (wavelength, μm)   (wavelength, μm)   (central wavelength, μm)
Blue       2 (0.45-0.51)      1 (0.45-0.51)      2 (0.490)
Green      3 (0.53-0.59)      2 (0.53-0.59)      3 (0.560)
Red        4 (0.64-0.67)      3 (0.64-0.67)      4 (0.665)
NIR        5 (0.85-0.88)      4 (0.85-0.88)      8 (0.842)
SWIR 1     6 (1.57-1.65)      5 (1.57-1.65)      11 (1.610)
SWIR 2     7 (2.11-2.29)      7 (2.11-2.29)      12 (2.190)
SPECTRAL INDICES
• Vegetation indices are also useful for characterizing forest typologies.
• The seminal indices were designed to exploit the strong reflectance of vegetation in the near infrared (NIR) region and its marked absorption by chlorophyll in the red region of the electromagnetic spectrum, in order to quantify vegetation greenness. Examples are the Normalized Difference Vegetation Index (NDVI) and the Difference Vegetation Index (DVI), the latter calculated as a simple difference between the spectral reflectance in the NIR and red ranges.
• More recently, with the advent of new multispectral sensors, some
refinements were made possible in the conception of these indices, such as
the Optimized Soil Adjusted Vegetation Index (OSAVI), which employs a soil
adjustment coefficient (0.16) to minimize NDVI’s sensitivity to variation in soil
background under a wide range of environmental conditions.
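A minimal NumPy sketch of NDVI, plus one common form of OSAVI using the 0.16 soil-adjustment coefficient mentioned above; the reflectance values here are illustrative, not measured data.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def osavi(nir, red, L=0.16):
    """One common form of OSAVI, with soil-adjustment coefficient L = 0.16."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + L)

veg = ndvi([0.50], [0.10])    # dense green vegetation: high NDVI
soil = ndvi([0.25], [0.20])   # bare soil: NDVI near zero
```

The soil-adjustment term in the denominator damps OSAVI's response where soil background dominates, which is exactly the NDVI sensitivity the text describes.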
Other spectral indices include the Normalized Built-up Area Index (NBAI), the Band Ratio for Built-up Area (BRBA), the Bare Soil Index (BSI), the Normalized Burn Ratio (NBR), the Normalized Difference Water Index (NDWI), and the Modified Soil-Adjusted Vegetation Index (MSAVI2). All of these indices have been used for classifying different land covers within wetland borders (Equations 1-3).
SPECTRAL BAND COMBINATIONS

Sensor              True / Natural   False    Pseudo-random   Fire pixels /
                    colour           colour   colour          burnt areas
Landsat 5 (TM)      3,2,1            4,3,2    5,4,3           7,4,2
Landsat 7 (ETM+)    3,2,1            4,3,2    5,4,3           7,4,2
Landsat 8 (OLI)     4,3,2            5,4,3    6,5,4           7,5,3
Sentinel-2A         4,3,2            8,4,3    11,8,4          12,8,3
True-colour Landsat TM composite 3, 2, 1 (RGB): water, sediment, and land surfaces
appear bright.
False-colour composite 4, 3, 2 (RGB): highlights healthy vegetation (shown in red);
water with little sediment appears black.
a. The true color image appears with these bands in the visible part of the spectrum
b. Using the near infra-red (NIR) band (4) in the red gun, healthy vegetation
appears red in the imagery.
c. Moving the NIR band into the green gun and adding band 5 to the red gun
changes the vegetation to green.
b). Landsat TM bands 4, 3, 2 (RGB) image, a false
color composite, highlights vegetation in red
Landsat scene bands 5, 4, 2 (RGB). This
composite highlights healthy vegetation, which is
indicated in the scene with bright red pixels.
Figure 5-34. Landsat image of Mt. Etna eruption of
July 2001. Bands 7, 5, 2 (RGB) reveal the lava flow (orange)
and eruptive cloud (purple).
IMAGE PREPROCESSING
• Image preprocessing also called image restoration involves the corrections of
distortion, degradation, and noise introduced during the imaging process.
• These processes produce a corrected image that is as close as possible, both
geometrically and radiometrically, to the radiant energy characteristics of the
original scene.
• Geometric correction: conforms an image to its corresponding ground features, similar to georeferencing in ArcGIS. A pre-projected image (the master) is used to rectify another image (the slave). There are two types: (1) map-to-image and (2) image-to-image rectification.
• Resampling, Atmospheric Correction and Subset selection are necessary in pre-
processing satellite images. Resampling ensures that images of each band have the
same resolution and number of pixels.
• CONDITIONAL{(Band1>75) Band1-75, (Band1<=75 AND Band1>0) 1, (Band1==0) 0}
• NB: Do not type "Band1" yourself; double-click the band in the available-input dialog box.
• NB: The third method was applied on the stacked image instead of just band 1 of the image.
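The ERDAS CONDITIONAL expression above can be mirrored in NumPy with np.select; the small test array below is invented for illustration.

```python
import numpy as np

# Hypothetical single-band DN values
band1 = np.array([[0, 40, 75],
                  [76, 120, 200]], dtype=float)

# Same logic as CONDITIONAL{(Band1>75) Band1-75, (Band1<=75 AND Band1>0) 1, (Band1==0) 0}
out = np.select(
    condlist=[band1 > 75, (band1 <= 75) & (band1 > 0), band1 == 0],
    choicelist=[band1 - 75, 1, 0],
)
# out is [[0, 1, 1], [1, 45, 125]]
```

Like the ERDAS expression, np.select evaluates the conditions in order and applies the first matching choice per pixel.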
Image Classification
• Raw digital data can be sorted and categorized into thematic maps. Thematic maps allow
the analyst to simplify the image view by assigning pixels into classes with similar spectral
values.
• The process of categorizing pixels into broader groups is known as image classification.
• Classification allows for cost-effective mapping of the spatial distribution of similar objects
(i.e., tree types in forest scenes); a subsequent statistical analysis can then follow.
• Thematic maps are developed by two types of classifications:
• Supervised and Unsupervised.
• Both types of classification rely on two primary methods, training and classifying.
• Training is the designation of representative pixels that define the spectral signature of the
object class.
• Training site or training class is the term given to a group of training pixels.
• Classifying procedures use the training class to classify the remaining pixels in the image.
Landsat image (left) and its corresponding thematic map (right) with 17 thematic
classes. The black zigzag at the bottom of the image is the result of shortened
flight line overlap (Campbell, 2003).
SUPERVISED CLASSIFICATION
• (1) Supervised classification requires some knowledge about the scene, such as specific
vegetative species. Ground truth (field data), or data from aerial photographs or maps can all
be used to identify objects in the scene.
• (2) Steps Required for Supervised Classification.
• (a) Firstly, acquire satellite data and accompanying metadata. Look for information regarding
platform, projection, resolution, coverage, and, importantly, meteorological conditions
before and during data acquisition.
• (b) Secondly, choose the surface types to be mapped. Collect ground truth data with
positional accuracy (GPS). These data are used to develop the training classes for the
discriminant analysis. Ideally, it is best to time the ground truth data collection to coincide
with the satellite passing overhead.
• (c) Thirdly, begin the classification by performing image preprocessing techniques
(corrections, image mosaics, and enhancements).
• Select pixels in the image that are representative (and homogeneous) of the object. If
GPS field data were collected, geo-register the GPS field plots onto the imagery and
define the image training sites by outlining the GPS polygons.
• A training class contains the sum of points (pixels) or polygons (clusters of pixels). View
the spectral histogram to inspect the homogeneity of the training classes for each
spectral band.
• Assign a color to represent each class and save the training site as a separate file.
• Lastly, extract the remaining image pixels into the designated classes by using a
discriminate analysis routine (discussed below).
Landsat 7 ETM image of central Australia (4, 3, 2 RGB). Linear
features in the upper portion of the scene are sand dunes. Training
data are selected with a selection tool (note the red enclosure).
Classification Algorithms

• (3) Image pixels are extracted into the designated classes by a


computed discriminant analysis.
• The three types of discriminant analysis algorithms are: minimum
mean distance, maximum likelihood, and parallelepiped.
• All use brightness plots to establish the relationship between
individual pixels and the training class (or training site).
• (a) Minimum Mean Distance. Minimum distance to the mean is a
simple computation that classifies pixels based on their distance
from the mean of the training class.
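A sketch of the minimum-distance-to-mean rule: each pixel is assigned to the training class whose mean spectral vector is closest. The two-band class means below are hypothetical.

```python
import numpy as np

def min_distance_classify(pixels, class_means):
    """Assign each pixel to the class with the nearest mean in spectral space."""
    pixels = np.asarray(pixels, dtype=float)       # shape (n_pixels, n_bands)
    means = np.asarray(class_means, dtype=float)   # shape (n_classes, n_bands)
    # Euclidean distance from every pixel to every class mean
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Hypothetical 2-band training means: class 0 = water, class 1 = vegetation
means = [[0.05, 0.02], [0.10, 0.45]]
labels = min_distance_classify([[0.06, 0.05], [0.12, 0.40]], means)
# labels is [0, 1]: first pixel nearest the water mean, second nearest vegetation
```

This is the simplest of the three discriminant routines; maximum likelihood additionally weighs class variance, and parallelepiped uses per-band min/max boxes.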
Unsupervised Classification
• Unsupervised classification does not require prior knowledge. This type of classification relies on a computed algorithm that clusters pixels based on their inherent spectral similarities.
• (b) Advantages of Using Unsupervised Classification.
• Unsupervised classification is useful for evaluating areas where you have little
or no knowledge of the site. It can be used as an initial tool to assess the scene
prior to a supervised classification.
• Unlike supervised classification, which requires the user to hand select the
training sites, the unsupervised classification is unbiased in its geographical
assessment of pixels.
Disadvantages of Using Unsupervised Classification

• The lack of information about a scene can make the necessary algorithm decisions
difficult. For instance, without knowledge of a scene, a user may have to experiment
with the number of spectral clusters to assign.
• The final image may be difficult to interpret
• The algorithm may mistakenly separate pixels with slightly different spectral values and
assign them to a unique cluster when they, in fact, represent a spectral continuum of a
group of similar objects.
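A toy k-means clusterer illustrates the idea behind unsupervised classification: pixels are grouped purely by spectral similarity, with no training data. Production routines (e.g. ISODATA) are more elaborate; this is only a sketch on invented two-band data.

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """Toy k-means: cluster pixels by spectral similarity, no prior knowledge."""
    rng = np.random.default_rng(seed)
    pixels = np.asarray(pixels, dtype=float)
    # Initialize cluster centers from k distinct pixels
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest center, then recompute the means
        dists = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated spectral groups (hypothetical 2-band values)
data = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
labels, centers = kmeans(data, k=2)
```

Note the disadvantage described above: the user still has to pick k, and the resulting clusters must be interpreted and named afterwards.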
POST- CLASSIFICATION ACCURACY ASSESSMENT/ CONFUSION
MATRIX
Accuracy assessment
• Digital image classification output assessment for accuracy is very important.
• This assessment is done to determine the quality of information obtained from the image.
• It is important to conduct this assessment for the individual classification if the classified
images are to be used for change detection analysis .
• This is done either by using a new set of ground truth data or by comparing the classified
image with a previously classified reference map for selected sampling points.
• This was followed by the computation of the overall accuracy, user’s accuracy, producer's
accuracy and the Kappa's coefficient.
• The overall accuracy is the ratio between the total number of samples which are correctly
classified and the total number of samples considered for the accuracy assessment.
• User's accuracy corresponds to the error of commission and measures how many of the samples of a particular class were matched correctly.
• In other words, it measures the probability that a pixel classified on the image actually represents that class on the ground.
• Producer's accuracy, on the other hand, corresponds to the error of omission and measures how much of the land in each LULC class was classified correctly, i.e. how well an area can be classified.
• The Kappa statistic estimates the agreement between a modeled scenario and
reality.
• In other words, it determines whether the results showed in an error matrix are
significantly better than random .
• The Kappa statistic for an error matrix with a given number of rows and columns is calculated as:

  Kappa = (N × A − B) / (N² − B)

• where N = total number of observations included in the error matrix,
  A = the sum of correct classifications contained in the diagonal elements, and
  B = the sum of the products of row total and column total for each LULC category in the error matrix.
So along the diagonal of the error matrix are the correct predictions. For example,
there were 48 ground truth locations that were water on the ground and were predicted
as water in our classified raster. Likewise, there were 40 ground truth points that were
sea ice, and all of them were correctly predicted as sea ice in our classified raster.
• The overall accuracy is the total number of correct predictions divided by the total
number of predictions, i.e. the total number of ground truth points.
• In this case that is 48 correct for water plus 40 correct for sea ice, divided by the
110 predictions made: (48 + 40) / 110 = 88 / 110 = 80%.
• We can also ask, for any single class, what the accuracy was; for instance, the
accuracy for sea ice.
• We can look at that from two different perspectives, so whenever we talk about
class accuracy there are two estimates: out of the ground truth points, how many
were correctly predicted (40/50), and out of the predictions in our classification,
how many were correct (40/52).
Error matrix (rows = predicted class, columns = ground truth):

Predicted \ Truth            1 Water   2 Cropland   3 Rangeland   4 Forest   5 Built up   Row total
Water (1)                      138          0             0            0           0          138
Cropland / Farmland (2)          0         74            19            0           0           93
Rangeland / Fallowland (3)       0          9            30            0           0           39
Forest (4)                       0          0             2           14           0           16
Built up / Bareland (5)          0          0             0            0          15           15
Column total                   138         83            51           14          15          301

Correct classification attributed to chance (row total × column total / grand total):
Water      = (138 × 138) / 301 = 63.27
Cropland   = (93 × 83) / 301  = 25.64
Rangeland  = (39 × 51) / 301  = 6.61
Forest     = (16 × 14) / 301  = 0.74
Built up   = (15 × 15) / 301  = 0.75
Total correct classification attributed to chance = 97.01

Overall accuracy = correct predictions / total predictions
Correct predictions = 138 + 74 + 30 + 14 + 15 = 271
Total predictions = 301
Overall accuracy = 271 / 301 = 90.03 %

Kappa index = (271 – 97) / (301 – 97) = 174 / 204 = 0.853
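The overall-accuracy and Kappa computation on this slide can be reproduced with NumPy from the error matrix:

```python
import numpy as np

# Error matrix from the slide: rows = predicted class, columns = ground truth
cm = np.array([
    [138,  0,  0,  0,  0],   # Water
    [  0, 74, 19,  0,  0],   # Cropland / Farmland
    [  0,  9, 30,  0,  0],   # Rangeland / Fallowland
    [  0,  0,  2, 14,  0],   # Forest
    [  0,  0,  0,  0, 15],   # Built up / Bareland
])

N = cm.sum()                                  # total observations: 301
A = np.trace(cm)                              # correct (diagonal) classifications: 271
B = (cm.sum(axis=1) * cm.sum(axis=0)).sum()   # sum of row total x column total products

overall_accuracy = A / N                      # ~0.9003
kappa = (N * A - B) / (N**2 - B)              # ~0.853
```

Dividing B by N gives the 97.01 "chance correct" figure, which is why (N·A − B)/(N² − B) reduces to the slide's (271 − 97)/(301 − 97).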
APPLICATION OF GIS AND RM IN HYDROLOGY
Deriving Runoff Characteristics
ArcGIS Flow Diagram
Load DEM → Fill sinks → Compute flow direction → Compute flow accumulation →
Define a pour point → Generate watershed
Filling Sinks
• DEM creation results in artificial sinks in the landscape.
• A sink is a set of one or more cells which has no downstream cells around it.
• Unless these sinks are filled they will isolate portions of the watershed.
• Filling sinks is the first step in processing a DEM for surface water systems.
Hydrologic Slope – Direction of Steepest Descent

Elevation grid (30 m cells):
  67  56  49
  52  48  37
  58  55  22

From the 67 m cell there are two candidate drops:
  diagonal, 67 → 48: slope = (67 – 48) / (30 × √2) ≈ 0.45
  cardinal, 67 → 52: slope = (67 – 52) / 30 = 0.50
The steepest descent (slope 0.50) is therefore toward the 52 m cell.
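The steepest-descent choice for the 3×3 elevation grid above (30 m cells, diagonal neighbours at a distance of 30·√2) can be sketched as:

```python
import math

def steepest_descent(grid, r, c, cellsize=30.0):
    """Return (slope, (row, col)) of the steepest drop from cell (r, c) - the D8 rule."""
    best = None
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]):
                dist = cellsize * math.hypot(dr, dc)   # 30 or 30*sqrt(2)
                slope = (grid[r][c] - grid[rr][cc]) / dist
                if best is None or slope > best[0]:
                    best = (slope, (rr, cc))
    return best

elev = [[67, 56, 49],
        [52, 48, 37],
        [58, 55, 22]]
slope, target = steepest_descent(elev, 0, 0)
# slope 0.50 toward the 52 m cell (vs ~0.45 along the diagonal to 48)
```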
Flow Direction Arrows
Based on the direction of steepest descent, each elevation cell is assigned a flow
direction toward its lowest neighbour (Elevation → Flow Direction).

Eight Direction (D8) Pour Point Model – ArcGIS flow direction raster encoding:
  32  64  128
  16   ·    1
   8   4    2
(· marks the cell being encoded; e.g. flow due east is coded 1, due south 4.)
Flow Accumulation
Number of cells contributing flow (Flow Direction → Flow Accumulation).
Value = number of cells flowing into each cell.
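Flow accumulation — the number of upstream cells draining into each cell — can be sketched by walking each cell's downstream path; the 4-cell network below is invented for illustration.

```python
def flow_accumulation(flow_to, n_cells):
    """Count upstream cells for each cell.
    flow_to maps a cell index to its downstream cell index (None = outlet)."""
    acc = {i: 0 for i in range(n_cells)}
    for i in range(n_cells):
        # Every cell downstream of i receives one more contributing cell
        j = flow_to[i]
        while j is not None:
            acc[j] += 1
            j = flow_to[j]
    return acc

# Hypothetical drainage network: 0 -> 1 -> 3, 2 -> 3, 3 is the outlet
flow_to = {0: 1, 1: 3, 2: 3, 3: None}
acc = flow_accumulation(flow_to, 4)
# acc is {0: 0, 1: 1, 2: 0, 3: 3}: the outlet collects all three upstream cells
```

Cells with high accumulation values trace the stream network, which is what the ArcGIS Flow Accumulation raster visualizes.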
Delineating Surface Water Drainage
PRACTICAL EXERCISE

Hands-on Exercise
Importing and plotting GPS coordinates in
ArcMap
