
LSG 2203:

PHOTOGRAMMETRY 1

D R . J O H N R I C H A R D OT U K E I
J R O T U K E I @ YA H O O . C O M
GENERAL REMARKS
Introduction to photogrammetry provides students with an overview of the
theory and concepts of photogrammetry.
Photogrammetry is an engineering discipline shaped by developments in
computer science and electronics. Like many disciplines, it is in constant
change, mainly driven by those developments; this can be seen in the shift
from analogue to digital photogrammetry.
Photogrammetry, remote sensing and related fields overlap. Traditionally,
the main difference between photogrammetry and remote sensing was that
photogrammetry focused on generating accurate 3D products. This difference
is, however, becoming increasingly blurred due to developments in remote
sensing such as LiDAR and InSAR.
HOW DID IT START

To know the enemy position and nature of the terrain


THERE COMES THE
AIRCRAFT
PHOTOGRAMMETRY IN WORLD
WAR 1 AND 2
TECHNIQUES WERE THEN
DEVELOPED
• Aerial films to detect camouflage
• Photo interpretation in the army and civilian life
• Mapping applications developed in both army and civilian
• Aerial survey was opposed by surveyors engaged in ground surveys, mainly
for fear of losing jobs (given the speed with which aerial surveys could be
carried out) and out of inertia
PHOTOGRAMMETRY….

Photogrammetry is a mapping technique that can be used for many applications:
 topographic mapping
 site planning
 earthworks
 orthophotos
 DTMs
SO WHAT IS
PHOTOGRAMMETRY?
• The term comes from three Greek words
 Photo >>> Light
 Graphein >>>>> write
 Metron >>>> measure

Hence photogrammetry is simply the science of measurement from photos.
DEFINITION
Photogrammetry and Remote Sensing are the art, science and
technology of obtaining reliable information about physical
objects and the environment through the process of recording,
measuring and interpreting imagery and digital representations
thereof, derived from non-contact sensors

Zhizhou, 1996

Photogrammetry - Dr. John Richard Otukei 9


DEFINITION

• The American Society of Photogrammetry and Remote Sensing (ASPRS) defines
photogrammetry as ‘the science, art and techniques of obtaining reliable
information about physical objects and the environment. This is done through
a process of recording, measuring, and interpreting aerial and terrestrial
photographs’.
WORKFLOW OF
PHOTOGRAMMETRY
• A standard workflow of Photogrammetry contains three main phases
which are: 1) Data Acquisition, 2) Photogrammetric Procedures and 3)
Photogrammetric Products
INFORMATION IN
PHOTOS/IMAGES
There are two types of information in images:

1. Radiometry: a description of the way in which the terrain reflects
electromagnetic energy. Radiometry is characterised by the amount of energy
reflected (radiance and brightness) and the spectrum of the energy reflected
(the colour/hue).

2. Geometry: a description of the spatial relationships between features in
the terrain. Geometry enables the measurement of the position, size and
shape of terrain features. It is arguably the most important information
from photos.



INFORMATION IN
PHOTOS/IMAGES
Additionally:

1. Semantic information: relates to the meaning of an image and is obtained
by interpreting the data in the photos.

2. Temporal information: relates to the change of an object over time. It is
usually obtained by acquiring time-series images and observing the state of
an object over time.



ASPECTS OF PHOTOGRAMMETRY
• Acquisition

• Measurement

• Interpretation

• Presentation

Each of the above has a quantitative and a qualitative aspect.



DATA ACQUISITION AND
PLATFORMS
Object                          Platform                   Specialisation
Space                           Space vehicles             Space photogrammetry
Earth surface                   Aerial vehicle/airplane    Aerial photogrammetry
Buildings/industrial assembly   Tripod                     Architectural or industrial applications
ASPECTS OF PHOTOGRAMMETRY

Quantitative aspects
• Position X,Y and/or Z (height)
• Precision/accuracy
• Reliability

Purpose:
Modelling the image formation process
Camera and sensor calibration
Modelling measurement errors
Improving precision and reliability



ASPECTS OF PHOTOGRAMMETRY

Qualitative aspects
Pattern recognition and interpretation
Radiometric classification (e.g., in IR imagery bright
areas represent dense vegetation)



CLASSIFICATION
Various classifications of photogrammetry

Metric vs. Non-Metric

Camera angle (FOV)

Range to object

Process/Instrumentation



CLASSIFICATIONS

Metric vs. non-metric:
 Metric
 Semi-metric
 Non-metric

Camera angle:
 Narrow
 Normal
 Wide
 Super wide

Range to object:
 Aerial/airborne: vertical; oblique (low, high); panoramic
 Terrestrial
 Close range

Process/instrument:
 Analogue
 Analytical
 Digital


CLASSIFICATION
Metric vs. Non-Metric

• Metric
– stable geometry, fixed principal distance, small lens distortion

• Semi-metric
– unstable geometry, variable principal distance, larger lens distortion

• Non-metric
– unstable geometry, variable principal distance, strong lens distortion



CLASSIFICATION
Camera angle

Angle                     FOV (degrees)   Focal length f (mm)   Application
Narrow angle              10 – 20         610 – 915
Normal angle (NA)         50 – 75         210 – 300             Orthophotography
Wide angle (WA)           85 – 95         152                   Mapping
Super wide angle (SWA)    110 – 130       88                    Small scale mapping

Why is normal angle photography used for orthophotography?
CLASSIFICATION
Range to object/Platform
• Airborne
– Photography from an airborne platform

• Terrestrial
– Photography from the ground. Camera to object distance more
than 300m.

• Close range
– Photography from the ground. Camera to object distance less
than 300m.



CLASSIFICATIONS
Range to object/Platform - Airborne

[Figures: vertical and low oblique photographs]



CLASSIFICATIONS
Range to object/Platform – Airborne -
Oblique
[Figures: high oblique, low oblique and panoramic photographs]



ANALOGUE, ANALYTICAL AND
DIGITAL
ANALOGUE
PHOTOGRAMMETRY
• Analog Photogrammetry is the branch of Photogrammetry
that includes all methods and techniques to extract
information from analog photos based on mechanical and
optical methods or their combination.
• The principle of Analog Photogrammetry is to reproduce in the laboratory,
at a smaller scale, the configuration of the camera at the two positions
from which the pictures were taken. This configuration is reconstructed
using optical and mechanical instruments.
EXAMPLE OF ANALOGUE
INSTRUMENTS
ANALYTICAL PHOTOGRAMMETRY
Analytical Photogrammetry is also based on the reconstruction of the
cameras’ positions during the flight mission. However, the reconstruction
is not performed mechanically. Although the photos used are analog, the
principle of Analytical Photogrammetry is to reconstruct mathematically the
configuration of the cameras during the flight using computers.
ANALYTICAL
PHOTOGRAMMETRY
DIGITAL PHOTOGRAMMETRY

• Digital Photogrammetry uses the same mathematical principles as


Analytical Photogrammetry.
• However, Digital Photogrammetry (in contrast to Analytical
Photogrammetry) uses Digital Photos.
• In Digital Photogrammetry, digital photos may come either from scanning
existing analog photos or directly from a digital camera (see picture below)
DIGITAL PHOTOGRAMMETRY
CHARACTERISTICS OF PHOTOGRAPHS

i) A photograph is a central projection. All points in the image are formed
by rays passing through the same point – the projection centre.
ii) In a photograph, scale is non-uniform (except in vertical photographs of
flat ground!).
iii) In photographs, objects are foreshortened and sometimes occluded.
iv) Photographs depict all objects visible in the field of view.
v) The image in a photograph contains distortions because of the central
projection, deficiencies of the camera lens, instability of both the
photographic film and the camera, and the relief of the objects
photographed.



MARGINAL DATA ON AERIAL PHOTOS
• Fiducial marks: small registration marks
exposed on the edges of a photograph. The
distances between fiducial marks are
precisely measured when a camera is
calibrated.
• Roll and Photo Numbers: each aerial photo
is assigned a unique index number
according to the photo's roll and frame.
• Geographic location, time and date, etc.
CHARACTERISTICS -
SCALES
i) Large scale: ≥ 1:10000. Small format aerial photography, terrestrial and
close range photography.
ii) Small scale: < 1:10000. Aerial photography for mapping.

Scale = f / H

where f is the focal length and H is the flying height above the terrain.
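The scale relation above can be sketched in code. The 152 mm focal length and 1,520 m flying height below are illustrative values (152 mm is the wide-angle focal length from the camera-angle table), not figures from this slide.

```python
# Photo scale as a ratio 1:N, from the relation scale = f / H.
# The numbers below are illustrative only.

def photo_scale(f_mm: float, H_m: float) -> float:
    """Return the scale number N such that the photo scale is 1:N."""
    f_m = f_mm / 1000.0          # focal length, converted to metres
    return H_m / f_m             # N = H / f

# A 152 mm (wide-angle) camera flown 1,520 m above the terrain:
N = photo_scale(152, 1520)
print(f"Scale = 1:{N:.0f}")      # exactly the 1:10000 large/small boundary
```

Note how flying the same camera higher makes N larger, i.e. the scale smaller, matching the classification above.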



OBJECTIVES OF PHOTOGRAPHY

i) To accurately reproduce both the radiometric and geometric properties of
the terrain on the final photographic product.

ii) To obtain complete stereoscopic coverage of the object so that a
three-dimensional (3D) record is obtained.

iii) To reproduce the photographic images as efficiently and as economically
as possible.



APPLICATIONS OF
PHOTOGRAMMETRY

• Mapping/GIS
• Medicine
• Arts/entertainment
• Navigation
• Architecture
• Object reconstruction
• Archeology
• Engineering
• Astronomy
• Manufacturing
• As-built surveys
• Geology
• Geomorphology
• etc.

MAPPING

DTM AND DEM

OBJECT
RECONSTRUCTION

MEDICAL

CLASSIFICATION – LASER
SCANNING

PHOTOGRAMMETRIC PRODUCTS

1. Compilations, 2D Digital Models

– Maps (paper)
– Digital Maps

PHOTOGRAMMETRIC PRODUCTS

2. Control points-Triangulation

• Control densification
• Adjustment of multiple stereo-models
• GPS reduces need for ground control

PHOTOGRAMMETRIC PRODUCTS

3. DTM (Digital Terrain Model)

• Surface Models

PHOTOGRAMMETRIC
PRODUCTS
4. 3D Models

3D Maps
City Maps
Industrial metrology
Accident reconstruction
etc.,

PHOTOGRAMMETRIC
PRODUCTS
5. Image products

• Orthophotos

• Orthomosaics

• Orthophoto maps

• Stereomates

PHOTOGRAMMETRIC
PRODUCTS
A DTM is required in the production of these
products:

• Orthophotos - differentially rectified image.

• Orthomosaics

• Orthophoto maps

• Stereomates

INSTRUMENTS

RMK TOP ZEISS

LSG 2203: Photogrammetry 1

OPTICS

DR. JOHN RICHARD OTUKEI


JROTUKEI@YAHOO.COM
Optics for photogrammetry
Optics is the study of light
All photogrammetry instruments use optics for their function
The number of optical elements depends on the equipment:
 Pocket stereoscopes >>> use only thin lenses
 Aerial cameras >>> contain highly corrected and expensive compound lenses
 Stereoscopic plotters >>> use many lenses, mirrors and prisms.
Basic optics
A simple, converging lens will produce an image at a distance, i, from
the lens (along the optic axis) of an object at a distance, o, on the
opposite side of the lens . The object distance and image distance are
related by:
Relating object and image
distances
The object and image distances are related by the thin-lens equation:

1/o + 1/i = 1/f

where f is the focal length of the lens.
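A minimal sketch of the thin-lens relation, solved for the image distance. The 50 mm lens and 0.2 m object distance are illustrative assumptions, not values from the slides.

```python
# Thin-lens equation 1/o + 1/i = 1/f, rearranged to find the image distance i.
# All numbers are illustrative.

def image_distance(o: float, f: float) -> float:
    """Image distance i for object distance o and focal length f (same units)."""
    return 1.0 / (1.0 / f - 1.0 / o)

# An object 0.2 m in front of a 50 mm (0.05 m) converging lens:
i = image_distance(o=0.2, f=0.05)
print(f"i = {i*1000:.1f} mm")   # image forms about 66.7 mm behind the lens
```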


Introduction to light
Light is basic to almost all life on Earth.
Light is a form of electromagnetic radiation.
Light represents energy transfer from the source to the observer.
Many phenomena depend on the properties of light.
◦ Seeing a TV or computer monitor
◦ Blue sky, colors at sunset and sunrise
◦ Images in mirrors
◦ Eyeglasses and contacts
◦ Rainbows
◦ Many others
Characterisation of optics
 Physical optics
Geometric optics
Quantum optics
In physical optics, light is considered to travel
through a medium such as air in a series of
electromagnetic waves emanating from a source.
This can be visualised as a group of concentric circles
radiating from a point source.
Demonstration:
Drop a stone into a pool of water to create waves
radiating from a point where the stone dropped.
Physical optics
Each wave has its own frequency, wavelength and amplitude.
 Frequency is the number of wavelengths that pass a given point per second.
 Amplitude is the measure of the height of a crest or trough.
 Wavelength is the distance between successive crests/troughs.
Geometric optics
Light is considered to travel
from point source through a
medium in straight lines
Quantum optics
Light is considered as particles
Reflection of light
A ray of light, the incident ray, travels in a medium.
When it encounters a boundary with a second medium, part of the
incident ray is reflected back into the first medium.
◦ This means it is directed backward into the first medium.

For light waves traveling in three-dimensional space, the reflected light
can be in directions different from the direction of the incident rays.
There are two types of reflection
◦ Regular reflection ( specular), off a smooth surface
◦ Diffuse reflection, off a rough surface
Specular reflection
Specular reflection is reflection
from a smooth surface.
The reflected rays are parallel
to each other.
Diffuse reflection
Diffuse reflection is reflection
from a rough surface.
The reflected rays travel in a
variety of directions.
A surface behaves as a smooth
surface as long as the surface
variations are much smaller than
the wavelength of the light.
Law of reflection of light
The normal is a line
perpendicular to the surface.
◦ It is at the point where the
incident ray strikes the surface.

The incident ray makes an angle of θ1 with the normal.
The reflected ray makes an angle of θ1’ with the normal.
Law of reflection of light
The angle of reflection is equal to the angle of incidence.
θ1’= θ1
◦ This relationship is called the Law of Reflection.

The incident ray, the reflected ray and the normal are all in the same
plane.
Multiple reflections
The incident ray strikes the
first mirror.
The reflected ray is directed
toward the second mirror.
There is a second reflection
from the second mirror.
Apply the Law of Reflection
and some geometry to
determine information about
the rays.
Retroreflection
Assume the angle between two mirrors is 90°.
The reflected beam returns to the source parallel to its
original path.
This phenomenon is called retroreflection.
Applications include:
◦ Measuring the distance to the Moon
◦ Traffic signs
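The retroreflection claim can be checked numerically: reflecting a direction vector off two perpendicular mirrors reverses it. The 2D setup and mirror normals below are assumptions chosen for illustration.

```python
# Retroreflection check in 2D: two perpendicular mirrors (normals along x
# and along y) send any ray back antiparallel to its original direction.

def reflect(d, n):
    """Reflect direction d about a unit normal n: d' = d - 2(d.n)n."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

d0 = (0.6, -0.8)                 # incoming unit direction
d1 = reflect(d0, (1.0, 0.0))     # first mirror, normal along x
d2 = reflect(d1, (0.0, 1.0))     # second mirror, normal along y
print(d2)                        # exactly opposite to d0
```

The outgoing direction is the negation of the incoming one, which is why the beam returns parallel to its original path regardless of the incidence angle.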
Refraction
When a ray of light traveling through a transparent medium encounters
a boundary leading into another transparent medium, part of the energy
is reflected and part enters the second medium.
The ray that enters the second medium changes its direction of
propagation at the boundary.
◦ This bending of the ray is called refraction.
Refraction
The incident ray, the reflected ray, the refracted ray, and the normal all
lie on the same plane.
The angle of refraction depends upon the material and the angle of
incidence.
sin θ2 / sin θ1 = v2 / v1

◦ v1 is the speed of the light in the first medium and v2 is its speed in
the second.
Refraction of light
The path of the light
through the refracting
surface is reversible.
◦ For example, a ray
travels from A to B.
◦ If the ray originated
at B, it would follow
the line AB to reach
point A.
Refractive index
The speed of light in any material is less than its speed in vacuum.
The index of refraction, n, of a medium can be defined as

n = c / v

where c is the speed of light in a vacuum and v is the speed of light in the
medium.
Indices of refraction
Snell’s law
n1 sin θ1 = n2 sin θ2
◦ θ1 is the angle of incidence
◦ θ2 is the angle of refraction

The experimental discovery of this relationship is usually credited to
Willebrord Snell and it is therefore known as Snell’s law of refraction.
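Snell’s law can be sketched as follows. The air-to-glass indices (1.00, 1.50) and the 30° incidence angle are typical textbook values, not figures from the slides.

```python
import math

# Snell's law n1 sin(theta1) = n2 sin(theta2), solved for theta2.

def refraction_angle(n1: float, theta1_deg: float, n2: float) -> float:
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        # sin(theta2) > 1 has no solution: total internal reflection
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# Air (n = 1.00) into glass (n = 1.50) at 30 degrees incidence:
theta2 = refraction_angle(n1=1.00, theta1_deg=30.0, n2=1.50)
print(f"theta2 = {theta2:.2f} degrees")   # ~19.47: bent toward the normal
```

Entering a denser medium (n2 > n1) always gives θ2 < θ1, i.e. the ray bends toward the normal, consistent with the refraction slides above.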
Prism
A ray of single-wavelength light incident on the prism will emerge deviated
by an angle δ from its original direction of travel.
◦ δ is called the angle of deviation.
◦ Φ is the apex angle.
Assignment
Use the refraction concept to explain how a rainbow is formed.
 Differentiate between virtual and real images.
 Explain the meaning of the Scheimpflug condition.
 What is meant by the term lateral magnification?
LSG 2203: PHOTOGRAMMETRY 1
STEREOSCOPY

Dr. John R. Otukei


Content
• Vision and depth of perception
• Stereoscopy and stereoscopes
• Parallax measurement
• Height measurement from parallax
• Photographic interpretation using stereoscopes
Stereoscopy - Introduction
Stereoscopy is the term given when a person looks at two overlapping photos
simultaneously to achieve 3D vision. This is achieved by using each eye to
view one image.
Stereoscopy is achieved through binocular or stereoscopic vision.
Normal two-eye vision is required to achieve stereoscopic vision, albeit
with the help of stereoscopic instruments.
4 Photogrammetry - Dr. George Sithole

Vision and depth perception


Monoscopic Vision
Vision with one eye. The depth of objects in the field of view is perceived using
depth cues.

Binocular Vision
Vision with two eyes. The depth of objects in the field of view is perceived using
stereoscopy.

Vision and depth perception


Monoscopic Depth Perception (Depth cues)
• Relative size of objects

• Hidden objects

• Shadows

• Placement of objects against foreshortened objects

• Differences in focusing of the eye for objects at different distances

• Amount of detail visible on objects (visual acuity)

• etc.,

Vision and depth perception


Monoscopic Vision – Depth cues

[Figures: hidden objects, relative size, foreshortening]

Stereoscopy
Stereoscopic Depth Perception - Formation

[Figure: the eyes L and R, separated by the eye base, view points A and B at
distances DA and DB; each point subtends a parallactic angle. The nearer
point subtends the larger parallactic angle, therefore DA < DB.]

Vision and depth perception

Stereoscopic Depth Perception - Formation

[Figure: the same configuration; the difference in the parallactic angles of
A and B is the parallax, therefore DA < DB.]
Stereoscopy definition
The use of binocular vision to achieve 3-dimensional effects.

• Enables you to view an object from 2 different camera positions to obtain
a 3-dimensional view.
Conditions for stereo viewing
• Two adjacent and overlapping photos in the same flight line

• The optical axes of the cameras must be near parallel

• Cameras must be at approximately the same height

• The optical axes of the eyes must be near parallel

• The left eye must see the left image and the right eye the right image

• | A - B |  1.17 (1.3 gon)

• The images must only differ in horizontal parallax (x-parallax), i.e. py = 0

• The scale difference between objects in the two images may not exceed 14%


Advantages of stereo vision
• Facilitates measurement of depth
• Has higher visual acuity than monoscopic vision
Stereoscopes
• A binocular optical instrument that
helps us view two properly oriented
photos to obtain a 3-dimensional
model. There are two types of
stereoscopes
1. Lens (pocket) stereoscopes
2. Mirror stereoscopes
1. Lens (pocket) stereoscopes
 Simplest with two magnifying
glasses mounted with separation of
about the interpupillary distance of
human eyes
 Least expensive
 Small
 2-4 x magnification
 Used in the field

If the distance between the lenses and the table is equal to the focal length
of the lenses then the images appear to come from infinity.
Mirror stereoscope
Mirror stereoscope

 Photos can be placed separately for viewing.
 Used in the field?
 Has a pair of reflecting and viewing mirrors oriented at 45 degrees to the
plane of the photographs.
 A pair of lenses is also provided to facilitate comfortable viewing.
 Some stereoscopes have removable binoculars to help in stereoscopic vision.
Scanning mirror stereoscope

Scanning mirror stereoscope

A series of lenses and prisms

Relatively expensive

Not used in the field


Zoom stereoscopes

Zoom stereoscope

Variable magnification:
2.5 - 20 x

Very Expensive

Not used in the field


Zoom transfer stereoscopes

Variable magnification:
2.5 - 20 x

 Used to transfer features from a stereo-pair of photos onto a map or other
photo.

Very Expensive

Not used in the field


Stereo photography Geometry
Principal Point
Geometric center of the photograph, and the intersection of the X and Y axes.
The intersection of the North-South and East-West fiducial marks.

Conjugate Principal Points
The geometric center of a photograph's stereo pair (not the one at hand),
located on the photograph at hand.
Orienting photos for viewing.
• Ensure that the photos are consecutively numbered. This helps
to identify overlapping photos
• Lay one photograph down on the other and ensure that overlap
areas coincide
• When using lens stereoscope, separate the photos such that
the distance between them is about the distance between
lenses.
• For mirror stereoscope, separate the photos such that the
conjugate images are separated by a distance equal to the
distance of large wing mirrors.
• Place the stereoscope on top of the photos such that the line joining the
lenses is parallel to the flight line.
• While looking through the lenses, make any adjustment that will permit
stereoscopic vision.
Rick Lathrop, Rutgers University
Flight characteristics
• Overlapping photos are necessary to produce stereo effect. Photos taken along the flight
line need to have at least 50% overlap. Normally 60% overlap is specified for a flight
mission. Overlap along flight line is also known as endlap.
• The area that is common between successive photos is called overlap. A stereo model is
created when successive photos are viewed with a stereoscope.
• Normally when an area is flown for stereo coverage, sidelap between flight lines is
necessary for complete coverage. 30% sidelap is normal for most flight plans.

[Figure: successive overlapping photos along the direction of flight]
Factors affecting stereo vision
• Eye strength needs to be balanced between your two eyes. Wear vision aids when
viewing stereo pairs.
• Eye fatigue from mental and physical condition, poor illumination, uncomfortable
seating and viewing positions, misaligned photos, and low-quality photos.
• Align shadows properly and sequence photos correctly or else you will create a
pseudoscopic view.
• Moving objects between photos will not view in stereo. They'll show up as blurs.
• Rapid changes in topography between photos can bias stereoscopic interpretation.
• Clouds, shadows, and Sun glint can degrade stereoscopic viewing and cause loss of
information.
Parallax
Definition:
The apparent displacement of an object with respect to a frame of reference, caused by a
shift in the position of observation.

Stereoscopy and Parallax

[Figure: at the time of photography, a ground point is imaged at al on the
left photo and ar on the right photo along the flight line.]

Stereoscopy - Parallax

[Figure: at the time of photography, the two camera stations are separated
by the photo/camera base B, with 60% overlap; point A is imaged at al and
ar.]

Stereoscopy

[Figure: in the lab, the left and right eyes observe al and ar, forming the
stereo model.]

Stereoscopy

[Figure: in the lab, the eyes, separated by the eye base Be, observe al and
ar, forming the stereo model.]
Parallax measurement

Method 1: Using distances from the principal points (PP)

Method 2: Using distances between objects



Stereoscopy - Parallax

[Figure: in the lab, distances pxl and pxr are measured from the principal
points ppl and ppr to the images al and ar of point A.]

Parallax: pxa = pxl - pxr


Differential parallax
Difference between the stereoscopic parallax at the top and bottom of the
object (if necessary take absolute values).

If C2 = 2.06 in and C1 is 1.46 in then dP = 2.06 – 1.46 = 0.6 in.
Average photo distance
The average photo distance can be computed as the mean of the two photo-base
measurements:

D = (P1 + P2) / 2

See the diagram in the worked example.
Absolute parallax
• Read about it!!!! How can it be computed?
Height measurement using parallax

General formula for calculating height using parallax:

h = (H × dP) / (D + dP)

Where:
h = object height (required)
H = flying height (can be obtained from the photograph)
dP = differential parallax (see slide 32)
D = avg. photo base length (see slide 33)

** The above equation is for level terrain only.

Height measurement using parallax - Example
Measurements for parallax height calculations:
1. Determine average photo-base (P)

Average distance between PP and CPP for the stereopair.

[Figure: photo-base distances P1 and P2 measured between PP and CPP on the
two photos.]

Example: if P1 = 4.5 in. and P2 = 4.3 in., then P = 4.4 in.
Height measurement using parallax - Example
Measurements for parallax height calculations:
2. Determine differential parallax (dP)

Difference of the distances between feature bases and tops while the
stereopair is in stereo viewing position.

[Figure: distances dt (feature tops) and db (feature bases) measured across
the stereopair.]

Example: if db = 2.06 in. and dt = 1.46 in., then dP = 0.6 in.
Height measurement by parallax - example

Required: compute the height of the tree. Take H as 2,200 ft.

h = (H × dP) / (P + dP)

h = (2,200 ft. × 0.6 in.) / (4.4 in. + 0.6 in.)
  = 1,320 ft.·in. / 5 in.
  = 264 ft.
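The worked example above can be reproduced in a short script, using the same numbers as the slide:

```python
# Height from parallax on level terrain: h = (H * dP) / (P + dP).

def height_from_parallax(H: float, dP: float, P: float) -> float:
    """Object height from flying height H, differential parallax dP,
    and average photo base P (dP and P in the same units)."""
    return H * dP / (P + dP)

# Flying height 2,200 ft, dP = 0.6 in, average photo base P = 4.4 in:
h = height_from_parallax(H=2200.0, dP=0.6, P=4.4)
print(f"tree height = {h:.0f} ft")   # 264 ft, matching the slide
```

Note that the units of dP cancel against those of P, so h comes out in the units of H.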
Recap
• Name the two types of vision used for depth perception
• How can you perceive depth using the two approaches
above?
• What are the conditions for stereoscopic vision?
• Give the advantages of stereo vision.
• What are stereoscopes? Give any 3 examples of
stereoscopes
• What is the difference between the principal point and the conjugate
principal point?
• How do you orient a pair of photographs for stereo vision?
• What is parallax?
Co-ordinates by parallax measurement
• We have already seen how parallax measurement can be used for height
measurement
• Let us explore how parallax can also be used for measuring X and Y co-ordinates
(No theodolites!!!)
Parallax equations

Conditions:

• Camera axes are parallel


• The flying height is the same at the two exposure stations
Geometry

[Figure: geometry of a stereopair imaging ground point A.]
Useful equations

X = B·x / p      Y = B·y / p      h = H − (B·f) / p

 X, Y are the horizontal co-ordinates of point A
 x and y are the image co-ordinates of point a measured from the left photo
 B is the airbase
 f is the focal length
 p is the parallax of the image point (a)
 H is the flying height
 h is the height of the point A above sea level

These equations are often called the parallax equations and are the most
useful to a photogrammetrist.
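A sketch of the parallax equations in code. All numeric values below (airbase, focal length, flying height, image measurements) are made up for illustration and are not from the slides.

```python
# Ground co-ordinates from the parallax equations:
#   X = B*x/p,   Y = B*y/p,   h = H - B*f/p
# Consistent units: metres throughout (image measurements in metres too).

def ground_point(x, y, p, B, f, H):
    X = B * x / p
    Y = B * y / p
    h = H - B * f / p          # height of the point above datum
    return X, Y, h

# Airbase 600 m, f = 152 mm, flying height 2,000 m above datum,
# image point at x = 40 mm, y = 30 mm with parallax p = 60 mm:
X, Y, h = ground_point(x=0.040, y=0.030, p=0.060, B=600.0, f=0.152, H=2000.0)
print(X, Y, h)                 # approximately 400.0 300.0 480.0
```

A larger parallax p means the point is closer to the aircraft, so B·f/p shrinks and the computed height h rises, which matches the stereoscopic intuition from the earlier slides.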
Parallax difference equations for height
measurement
• These work when the assumptions made in the parallax equations do not hold:
 variable flying height
 tilted photographs
 image distortions (scale variation, relief distortion)
 these cause scale errors in parallax, resulting in errors in the (H − h)
distance
We need another method to take these variations into account.
Parallax difference equations for height
measurement

The above is an example of a parallax difference equation, where:
 The formula should be applied for points close by.
 The differencing technique cancels out systematic errors affecting
parallax at each point.
 In the above example, C is a control point whose elevation is known and
can be used to compute the elevation of A.
Vertical exaggeration
Vertical exaggeration is apparent scale disparity in the stereomodel whereby the vertical
scale appears greater than horizontal scale.
Causes:
Lack of correspondence between B/H and be /h
Where:
B = airbase
H = height above average ground
be = eye base (approx. 65mm if not given)
h = distance between the eyes and stereomodel (approx. 400 mm for mirror stereoscope)
Magnitude of vertical exaggeration
The magnitude of vertical exaggeration can be approximated by the following
equation:

e ≈ (B / H) / (be / h)

From the equation, increasing B/H relative to be/h increases the vertical
exaggeration.
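A sketch of the exaggeration estimate. The eye base (65 mm) and eye-to-model distance (400 mm) defaults follow the approximations given above; the airbase and flying height are illustrative values.

```python
# Vertical exaggeration e ~ (B/H) / (be/h).
# Defaults: eye base be = 65 mm, eye-to-model distance h = 400 mm
# (the approximations quoted in the slides). B and H are made up.

def vertical_exaggeration(B, H, be=0.065, h=0.400):
    return (B / H) / (be / h)

e = vertical_exaggeration(B=600.0, H=1000.0)
print(f"e = {e:.2f}")    # ~3.69: relief appears about 3.7x too tall
```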

Stereoscopy – Exaggeration

[Figure: in the lab, increasing the viewing base Be results in magnification
of the stereomodel.]
Assignment:
1. Develop an equation relating the airbase B of the photography, the
percentage overlap (PE) between photos and the ground coverage G of the
photo on the ground.
2. Develop also an equation relating the ground coverage G, the flying
height above datum, the focal length l and the photographic size d.
3. Combine the equations in 1 and 2 to establish the formula for the
base-height ratio (B/H).
Principle of floating mark
Stereoscopic measurements are possible if the floating mark is introduced in the viewing
system.
Concept:
• Identical half marks (e.g. crosses or small circles) are placed in the
field of view of each eye.
• As the stereomodel is viewed, the two half marks are seen against the
photographed scene by each eye.
• If the half marks are properly adjusted, the brain will fuse their images
into a single floating mark that appears in 3D relative to the model
surface.
• If the half marks are moved closer together, their parallax increases and
the fused mark will appear to rise.
Principle of floating mark
• If the half marks are moved far apart, the parallax decreases and the fused mark will
appear to fall.
• The fused mark can therefore be moved up and down until it rests on the model surface
(terrain).
• The position and elevation of the mark can be determined and plotted on the map using
a transfer device.

Stereoscopy – Floating mark

[Figures (three slides): in the lab, half marks placed near the principal
points ppl and ppr are fused by the eyes into a single floating mark that
can be raised or lowered onto point A of the stereomodel.]
Errors theory
Accuracy:
Degree of conformity to the true value. A value which is close to the true
value has high accuracy. Unfortunately, it is not easy to know what the true
value is, and as a result the accuracy can never be known exactly. Accuracy
can only be estimated, for example by checking against an independent,
higher-accuracy standard.

Precision:
The degree of refinement of a quantity or measurement. This can be assessed
by taking several measurements and checking the consistency of the values.
If the values are close to each other then the precision is high; the
reverse implies a low precision.
Error theory
• An error: Difference between measured and true value

Types of errors:
 Mistakes or blunders: gross errors caused by carelessness or negligence.
They include misidentification of points, misreading a scale and transposing
numbers. These errors can generally be avoided by exercising care during
measurements.
 Systematic errors: errors that follow some mathematical or physical law.
This means that if the conditions causing the error are known, measured and
properly modelled, a correction can be calculated and applied to the
measurement, eliminating the systematic error. These errors remain constant
in magnitude and algebraic sign as long as the conditions causing them
remain the same. Since the sign remains the same, systematic errors
accumulate, and they are often referred to as cumulative errors. Examples in
photogrammetry include shrinkage and expansion of photographs, camera lens
distortions and atmospheric refraction distortions.
Error theory – Types of errors
Random errors: errors that remain after blunders and systematic errors have been accounted for. They are generally small and do not follow physical laws the way systematic errors do; instead they are assessed using the laws of probability. Random errors are equally likely to be positive or negative and hence tend to compensate each other, which is why they are also referred to as compensating errors. In photogrammetry, sources of random errors include estimating between the least graduations of a scale and indexing the scale.
Error theory
• Errors are inevitable in any measurement, and propagate into quantities computed from measured values.
• Sources of error:
Locating and marking flight lines on photos
Orienting stereopairs for parallax measurement
Parallax and photo coordinate measurement
Shrinkage and expansion of photographs
Unequal flying heights
Tilted photographs
Errors in ground control
Camera lens distortion and atmospheric errors.
Error propagation
Error propagation deals with approaches for estimating errors in computed quantities based on the errors of the measurements.
Assumption:
Errors in the measured quantities are independent, i.e. the error in one variable does not depend on the errors in the other variables.

Assume, for example, that we have a quantity F which we want to compute from n independent observations x1, x2, ..., xn.

Then mathematically:

F = f(x1, x2, ..., xn)

If σ1, σ2, ..., σn are the corresponding errors in the measured quantities, then it can be shown that the standard error in the computed quantity is

σF = sqrt[ (∂F/∂x1 · σ1)² + (∂F/∂x2 · σ2)² + ... + (∂F/∂xn · σn)² ]
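The propagation rule above can be sketched in Python using numerically estimated partial derivatives. The rectangle-area measurements and the helper name `propagate_error` are hypothetical, chosen purely for illustration:

```python
import math

def propagate_error(f, values, sigmas, eps=1e-6):
    """Standard error of F = f(x1..xn) for independent measurements,
    using forward-difference estimates of the partial derivatives."""
    total = 0.0
    for i, (v, s) in enumerate(zip(values, sigmas)):
        bumped = list(values)
        bumped[i] = v + eps
        dfdx = (f(*bumped) - f(*values)) / eps  # ~ dF/dxi
        total += (dfdx * s) ** 2
    return math.sqrt(total)

# Hypothetical example: area A = a*b from sides a = 40 m, b = 30 m,
# each measured with a standard error of 0.05 m.
area = lambda a, b: a * b
sigma_A = propagate_error(area, [40.0, 30.0], [0.05, 0.05])
# sigma_A = sqrt((30*0.05)^2 + (40*0.05)^2) = 2.5 m^2
```

For simple functions like this the partials can of course be written analytically; the numeric version is just a convenient general-purpose check.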
Fundamental problems in photogrammetry
There are two fundamental problems in photogrammetry
Resection
Intersection

• In field surveying, intersection involves locating a point (P) without occupying it
• Resection: determining the co-ordinates of a point by taking measurements from it to known points. At least three control points are required.
Photogrammetry space resection
• Resection is the process of recovering the
exterior orientation of a single photograph from
image measurements of ground control points.
• The solution requires at least three total control
points that do not lie in a straight line
• It assumes that camera information is available
i.e. focal length, and principal point location. In
aerial photogrammetric mapping, the exact
camera position and orientation are generally
unknown.
• The exterior orientation must be determined
from known ground control points by the
resection principle.

The determination of the position and orientation of an image in space from known
ground positions of control points in the images.
Intersection problem in photogrammetry?

• Intersection (space forward intersection) is a technique used to determine the ground coordinates X, Y, and Z of points that appear in the overlapping areas of two or more images, based on known interior and exterior orientation parameters.
• The collinearity condition is enforced, stating that the corresponding
light rays from the two exposure stations pass through the
corresponding image points on the two images and intersect at the
same ground point.
• If the interior (focal length, principal point) and exterior orientation parameters (camera position, 3 orientation angles) of the photographs are known, then conjugate image rays can be projected from the photograph through the lens nodal point (exposure station) to the ground space.
• Two or more image rays intersecting at a common point will determine
the horizontal position and elevation of the point.

The calculation of the object space coordinates of a point from its coordinates
in two or more images.
Photogrammetric solutions
A photogrammetric solution requires knowledge of:
Interior orientation parameters (focal length, principal point location)
Exterior orientation parameters (camera position and 3 rotation angles)
Ground coordinates of points to be mapped.
Interior orientation parameters are normally known through camera calibration
Exterior orientation parameters are established through resection (or using inertial positioning systems and GPS)
Photogrammetric solutions
The overall solution to photogrammetric problems involves carrying out:
Inner/interior orientation
Relative orientation
Absolute orientation
 Relative and absolute orientation are generally called exterior orientation
These can be accomplished using
Analogue and
Analytical/digital approaches
Why bother with orientations???
Maps Vs. images
Why bother with orientations?
• We want to make maps from images BUT
Images
• Have perspective projection
• Relief displacement
• Scale variation

AND Maps
• Orthogonal projection (2D representation of 3D)
• No scale variation
• No relief displacement
Why bother with orientations

Orientation therefore helps to transform centrally projected images into a three-dimensional model, which can be used to plot an orthogonal map.
Perspective Vs orthogonal
Interior orientation
Defn: Reconstruction of the geometry of the bundle of imaging rays as they existed at the time
of photography.
Purpose:
• Reconstruct the bundle of light rays (as defined by the perspective center and the image points)
in such a way that it is similar to the incident bundle on the camera at the moment of
exposure. i.e. determine the internal geometry of a camera.
• Interior orientation is defined by the position of the perspective center w.r.t. the image plane
(xp, yp, c).
• Another component of the interior orientation is the set of distortion parameters (radial lens distortion, decentering lens distortion (axis misalignment), atmospheric refraction, affine deformations, out-of-plane deformations)
Relative orientation
Objective:
• Orient the two bundles of a stereo-pair relative to each
other in such a way that all conjugate light rays intersect.

Result:
• A stereo Model, which is a 3-D representation of the
object space w.r.t. an arbitrary local coordinate system
• If we make at least five conjugate light rays intersect, all
the remaining light rays will intersect at the surface of the
stereo-model.
• Data registered in arbitrary co-ordinate system-no ground
co-ordinates
Absolute orientation
Purpose: rotate, scale, and shift the stereo model resulting from relative orientation until it
fits at the location of the control points.
• Absolute Orientation is defined by: Three Rotations, One Scale factor, and Three Shifts
• All data is assigned ground co-ordinates
Exterior orientation
Exterior orientation has two components:
• The position of the perspective centre w.r.t. the ground coordinate system (Xo, Yo, Zo).
• The rotational relationship between the image and the ground coordinate systems (ω, φ, κ).
These are the rotation angles we need to apply to the ground coordinate system to make it parallel to the image coordinate system.
Position of perspective centre and rotation
angles
LSG223:PHOTOGRAMMETRY 1

Geometry of vertical photograph

Dr. John Richard Otukei


jrotukei@yahoo.com
CENTRAL PERSPECTIVE PROJECTION
Most of the cameras used in photogrammetry can be
sufficiently approximated by the central perspective projection.
The assumptions made in modelling the image formation
process by such a projection are that light rays travel in a
straight line to the lens and that they pass through the lens
undeviated. The image making process can thus be
simplistically described in the following way:
 A bundle of light rays are reflected from the 3D object or
surface being photographed
 They travel in a straight line to the camera lens
 They pass through the lens undeviated (i.e. unrefracted or
in a straight line) and are brought to a focus at a point on
the image plane
 The photographic film or electronic sensors (in the case of
digital photography) are located in this image plane. In this
way, 3D real world objects are projected (or transformed)
into 2D image objects.
CENTRAL PERSPECTIVE-COLLINEARITY
CONDITION
 The object point, the perspective centre (or optical
centre of the lens) and the image point will all lie on a
straight line. In other words they will be collinear

Vertical photography views: 3D view (a), 3D view seen sideways (b), and object points in the diapositive (c)
CENTRAL PERSPECTIVE PROJECTION
It is not always possible to achieve the central
perspective projection in a manner that has been
explained due to a number of reasons. These will be
covered in the later part of the course
FUNDAMENTAL IMAGE PROPERTIES
 Principal distance: The distance between the perspective centre and the image plane is
called the principal distance (or camera constant). In metric cameras this distance is fixed
 Principal point: Point where a perpendicular dropped from the perspective centre (O)
intersects with the image plane (PP)
 Fiducial marks: (Also referred to as collimation marks). Metric cameras, at the time of
exposure produce fiducial marks on the edge of the image. These marks are images of
artificially illuminated objects embedded in the body of the camera. They are used to
recover the principal point of the image. The intersection of the lines joining opposite
fiducial marks should, theoretically, define the position of the principal point, and the lines
should be orthogonal at their point of intersection. In reality, the fiducial marks are not
perfectly aligned, so there is a slight offset between the principal point and the intersection
of the fiducial lines and the lines are not perfectly orthogonal.
FUNDAMENTAL IMAGE PROPERTIES
 Nadir Point (n): the point at which the plumbline passing through the perspective centre intersects the negative plane.
 Isocentre (i): the point at which the bisector of the angle of tilt intersects the negative. It is denoted by (i) on the negative.
 Principal Axis: the line which connects the principal point and the perspective centre.
 Principal Line: the line which connects the principal point and the nadir point. This line exists on the negative.
 Principal Plane: the plane which contains the principal point, the nadir point and the isocentre.
 Flying Height: the vertical distance between the perspective centre of the camera and the terrain.
 Horizon: the apparent visible junction of the earth and the sky as seen by the naked eye from a fixed position.
 Homologous Points: corresponding points in the image and on the ground, joined by corresponding rays:
n is homologous to N and vice versa
i is homologous to I
p is homologous to P
Datum: a level surface that can be used as a reference.
Perspective Centre: the point through which all rays emanating from the object space (ground) to the image plane pass. It is the borderline between the object space and the image space.
FUNDAMENTAL IMAGE PROPERTIES
FIDUCIAL MARKS
FUNDAMENTALS OF VERTICAL
PHOTOGRAPH

Principal distance
Principal point
Fiducial marks
Principal point vs. intersection of fiducials
FIDUCIAL CO-ORDINATE SYSTEM
 The intersection of the fiducial marks is the
centre of the axes
 Co-ordinates of any point can be obtained with
reference to fiducial axes.
RELATIONSHIP BETWEEN OBJECT AND IMAGE CO-
ORDINATES

 Any image co-ordinate must be related to its equivalent on the ground. This relationship has to be obtained mathematically
RELATING IMAGE AND OBJECT SPACE CO-
ORDINATES
3D CARTESIAN CO-ORDINATE SYSTEM FOR
THE IMAGE CO-ORDINATES

 Origin: O – the perspective centre
 x-axis: parallel to the xf axis
 y-axis: parallel to the yf axis
 z-axis: along the optical axis of the camera, positive towards the image plane of the negative

 In order to determine a mathematical relationship between image and object co-ordinates, we have to make the collinearity condition assumption
INTERIOR ORIENTATION

 Starting at coordinates measured in the fiducial coordinate system,


these need to be transformed to coordinates in the image coordinate
system. This means that the interior orientation parameters need to
be known. The interior orientation of an image refers to the
perspective geometry of the camera and is defined by:
 The calibrated principal distance of the camera
 The position of the principal point in the image plane (fiducial coordinate
system)
 The geometric distortion characteristics of the lens system
 We first assume that there are no lens distortions. The interior
orientation parameters are therefore defined by: xo, yo and c.
 To relate a point on the image (P), for which the fiducial coordinates are known, to its position in the image coordinate system, the following relationship can be used:
x = xf − xo, y = yf − yo, z = −c
EXTERIOR ORIENTATION
Determines the position and orientation of a
camera during photography.
 requires six parameters:
 co-ordinates of the camera position (X, Y, Z)
 3 rotations about the x, y and z directions; the rotations are defined using the omega-phi-kappa (ω, φ, κ) or azimuth-tilt-swing (a, t, s) conventions

Sequential rotations about the x, y and z axes
SEQUENCE OF ROTATIONS
 The exact sequence in which the rotations are done is a matter of choice, but the most widely used convention is to adopt:
 the x-axis and the ω rotation as the primary rotation axis and primary rotation angle
 the y-axis and the φ rotation as the secondary rotation axis and secondary rotation angle
 the z-axis and the κ rotation as the tertiary rotation axis and tertiary rotation angle.
ROTATION MATRICES

These are orthogonal rotation matrices, i.e. R^-1 = R^T
OVERALL ROTATION MATRIX

 The overall rotation matrix is a product of the above three rotation matrices. The order of multiplication affects the resulting rotation matrix
 a general rotation matrix is of the form

r11 = cos φ cos κ
r12 = cos ω sin κ + sin ω sin φ cos κ
r13 = sin ω sin κ − cos ω sin φ cos κ
r21 = −cos φ sin κ
r22 = cos ω cos κ − sin ω sin φ sin κ
r23 = sin ω cos κ + cos ω sin φ sin κ
r31 = sin φ
r32 = −sin ω cos φ
r33 = cos ω cos φ
RELATING IMAGE AND OBJECT POINTS
 Using image co-ordinates, Rotation matrix, and
object points.
 We can achieve this through collinearity principle
COLLINEARITY EQUATIONS
Objective:
Mathematically represent the relationship between
object and image co-ordinates
Assumption:
The collinearity principle states that Image point,
perspective centre and object point are collinear

Equations expressing the


collinearity condition are
called collinearity equations
COLLINEARITY
 For the lines O'P' and O'P, we first assume that the origins of the 2 coordinate systems coincide and that the axes of the 2 systems are parallel. Since O'P' and O'P are collinear, they differ only by the scale factor k:

(x, y, z)^T_P = k (X, Y, Z)^T_P
COLLINEARITY
 We then compensate (x, y, z) for the interior orientation, and (X, Y, Z) for the non-coincidence of the coordinate system origins and the non-parallel axes:

(x − xo, y − yo, −c)^T_P = k R (X − Xo, Y − Yo, Z − Zo)^T_P
COLLINEARITY
 Since we already know the form of the rotation matrix, we substitute it into the equation and expand. The following equations result:

x − xo = k [r11(X − Xo) + r12(Y − Yo) + r13(Z − Zo)]
y − yo = k [r21(X − Xo) + r22(Y − Yo) + r23(Z − Zo)]
−c = k [r31(X − Xo) + r32(Y − Yo) + r33(Z − Zo)]

• Eliminating k, these reduce to (converting object co-ordinates to image co-ordinates):

x − xo = −c · [r11(X − Xo) + r12(Y − Yo) + r13(Z − Zo)] / [r31(X − Xo) + r32(Y − Yo) + r33(Z − Zo)]
y − yo = −c · [r21(X − Xo) + r22(Y − Yo) + r23(Z − Zo)] / [r31(X − Xo) + r32(Y − Yo) + r33(Z − Zo)]

• Inverted (converting image co-ordinates to object co-ordinates):

X − Xo = (Z − Zo) · [r11(x − xo) + r21(y − yo) + r31(−c)] / [r13(x − xo) + r23(y − yo) + r33(−c)]
Y − Yo = (Z − Zo) · [r12(x − xo) + r22(y − yo) + r32(−c)] / [r13(x − xo) + r23(y − yo) + r33(−c)]
COLLINEARITY EQUATIONS
 These equations are different forms of the collinearity equations. They
express the fundamental relationship that the perspective centre, the
image point and the object point lie on a straight line and are
fundamental to many procedures in photogrammetry.
 Except for lens distortion and other deviations from perspective
projection, each frame photograph bundle is defined by three IO
parameters and six EO parameters. These nine parameters are
constant for all rays in the bundle. Any individual ray can be
expressed by the collinearity equations.
 Since the photograph is a 2D representation of a 3D object, the scale differs between image points, i.e. k takes on a different value for each ray in the bundle. k is usually unknown and is hence eliminated. The scale factor implies that for any image point in a photograph there are an infinite number of possible related object points, i.e. one photograph is, in the general case, insufficient to reconstruct a spatial object. One needs either at least a second photograph or additional information about the Z co-ordinate (e.g. all object points lie in a horizontal plane with known elevation).
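As a sketch (not part of the original slides), the object-to-image form of the collinearity equations can be coded directly, reusing the ω-φ-κ rotation matrix given earlier. The vertical-photo check values at the end are hypothetical:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """omega-phi-kappa rotation matrix; angles in radians."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [ cp * ck, co * sk + so * sp * ck, so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk, so * ck + co * sp * sk],
        [ sp,      -so * cp,               co * cp],
    ]

def object_to_image(X, Y, Z, Xo, Yo, Zo, omega, phi, kappa, c, xo=0.0, yo=0.0):
    """Collinearity equations: object point (X, Y, Z) -> image point (x, y).
    c is the principal distance, (xo, yo) the principal point offset."""
    r = rotation_matrix(omega, phi, kappa)
    dX, dY, dZ = X - Xo, Y - Yo, Z - Zo
    q = r[2][0] * dX + r[2][1] * dY + r[2][2] * dZ    # common denominator
    x = xo - c * (r[0][0] * dX + r[0][1] * dY + r[0][2] * dZ) / q
    y = yo - c * (r[1][0] * dX + r[1][1] * dY + r[1][2] * dZ) / q
    return x, y

# Truly vertical photo (all angles 0): camera 1000 m above datum, ground
# point 100 m from the nadir, c = 0.1524 m -> image distance 15.24 mm.
x, y = object_to_image(100.0, 0.0, 0.0, 0.0, 0.0, 1000.0,
                       0.0, 0.0, 0.0, 0.1524)
```

For the vertical case the result matches the scale relation S = c/(H − h): at scale 1:6562, a point 100 m from the nadir images 15.24 mm from the principal point.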
SCALE OF VERTICAL PHOTOGRAPH
 Scale dictates the amount and level of detail we can see in
a map or photograph.
 The common concept of scale is the ratio of a distance
measured on a map to its true distance on the ground, e.g.
1: 25000 implies that 1mm on the map represents 25
000mm, or 25m, on the ground. A map is an orthographic
projection of the ground surface, hence all points in the
map are in their true relative horizontal positions. The
scale of a map is uniform throughout.
 A photograph may appear to be like a map, especially if the
ground is flat and level and the photograph is taken
vertically. However, since a photograph is a perspective
projection, areas on the terrain lying closer to the camera
at the instant of exposure will appear larger than areas
lying further from the camera. Terrain features are also
displaced relative to their heights, so-called terrain
displacement.
SCALE OF VERTICAL PHOTOGRAPH

The figure (a) shows a section through a vertical photograph with the
lens positioned at O. The elevation of the lens is known as the flying
height (FH) (i.e. it is the distance of the perspective centre from the
target) while the ground (flat) lies at elevation h above the datum. Point
O' is the principal point of the photograph. Distance c is the principal
distance.
SCALE OF VERTICAL PHOTOGRAPHY

Scale of image = (length of any line on image) / (length of same line on ground)
i.e. S = ab/AB
But by similar triangles: ab/AB = c/(H − h).

Thus, the scale at elevation h is given by:

S_h = c / (H − h)

c is usually given in mm, H and h in metres, hence a conversion factor may be necessary.
Generally, scale is a function of c (principal distance), flying height and terrain elevation.
VARIABLE PHOTO SCALE/MEAN SCALE
 For a vertical photograph taken over variable terrain, there will be an infinite number of different scales. This is one of the principal differences between a map and a photograph.
 It is therefore convenient to determine an overall scale to use. This is what is called the mean/average scale.
 The average scale is the scale at the average elevation of the terrain
EXAMPLES
 Given the highest, average and lowest elevations to be 600, 450 and 300 m respectively, calculate the maximum, minimum and average scales for photography carried out at a flying height of 2000 m above mean sea level using a camera of focal length 152.4 mm
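A quick check of this example in Python (the helper name is illustrative; units follow S = c/(H − h), with c converted from mm to m):

```python
def scale_denominator(c_mm, H_m, h_m):
    """Scale number N of the photo scale 1:N, from S = c / (H - h)."""
    return (H_m - h_m) / (c_mm / 1000.0)   # convert c from mm to m

c, H = 152.4, 2000.0
max_scale = scale_denominator(c, H, 600.0)   # highest terrain -> largest scale
avg_scale = scale_denominator(c, H, 450.0)   # average terrain
min_scale = scale_denominator(c, H, 300.0)   # lowest terrain -> smallest scale
# approximately 1:9186 (maximum), 1:10171 (average), 1:11155 (minimum)
```

Note that the *maximum* scale occurs at the *highest* elevation: the terrain there is closest to the camera, so H − h is smallest and the scale denominator smallest.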
OTHER METHODS OF COMPUTING SCALE
Scale = (photo distance)*(map scale)/map distance

Example: The distance on the map between two road junctions in flat terrain is 40 mm. The corresponding distance on the vertical photograph is 80 mm. If the scale of the map is 1:25000, what is the scale of the photograph?
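The same relation, sketched in Python for this example (the helper name is illustrative):

```python
def photo_scale_denominator(photo_dist_mm, map_dist_mm, map_scale_den):
    """Photo scale = (photo distance x map scale) / map distance.
    Returns the denominator N of the photo scale 1:N."""
    photo_scale = (photo_dist_mm / map_dist_mm) / map_scale_den
    return 1.0 / photo_scale

# 80 mm on photo, 40 mm on a 1:25000 map
N = photo_scale_denominator(80.0, 40.0, 25000.0)   # ~12500, i.e. 1:12500
```

The photograph shows the same ground line at twice the map length, so its scale is twice as large (1:12500 versus 1:25000).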
GEOMETRY OF VERTICAL PHOTOGRAPH-
ADDITIONAL NOTES
PROJECTION SYSTEMS
 Orthogonal Projection
 Central Projection and

 Parallel projection
CENTRAL PROJECTION
 All aerial photographs are based on central projection
while maps are based on orthogonal projection.
 Central projection is the projection in which the rays
emanating from the object pass through a central point,
call the perspective centre (o)
ORTHOGONAL PROJECTION
 Orthogonal projection is a projection in which the projected
rays intercept the other medium at a right angle.
PARALLEL PROJECTION

 Parallel projection is a projection in which the projecting rays are parallel and may intercept the projection plane at any angle (orthogonal projection is the special case of a right angle).
DIFFERENCE BETWEEN A MAP AND A
PHOTOGRAPH
 Relief displacement
 Tilt displacement
 Slope displacement
 RELIEF DISPLACEMENT: the linear difference in the position of an image on the photograph caused by elevation, compared with its true position on the datum.
 TILT DISPLACEMENT: the linear difference in position of an image point on a tilted photograph compared with its position on a vertical photograph.
 A terrain is said to be flat if the average change in elevation is less than 10% of the flying height. It is mountainous when it is more than 10% of the flying height.
 SLOPE DISPLACEMENT: displacement caused by any inclination of the earth's surface.
EFFECT OF TILT
EFFECTS OF SLOPE ON IMAGE
EFFECTS OF RELIEF
SL213:PHOTOGRAMMETRY 1

Aerial camera

Dr. John Richard Otukei


jrotukei@yahoo.com
Aerial Camera
• Fundamental device in the field of photogrammetry used to acquire
images for photographic measurements
• It may be defined as a light proof chamber in which the image of
the exterior object is projected upon a light sensitive film
(Analogue cameras) or semi conductor electronics (digital
cameras).
Parts of a camera
• Camera magazine
• Camera body
• Lens assembly
Parts of a camera
Aerial Camera parts

Note the representation of the front and rear nodal points. The rays of light from the object converge at the front nodal point, pass through the lens along its optical axis, and emerge from the rear nodal point.
Camera magazine
• The camera magazine houses the reels (a take-up reel for exposed film and a reel of unexposed film)
• Contains the film advancing and film flattening devices. The film advancing device transports the film over a length corresponding to one exposure format size
• The flattening device keeps the film perfectly flat on the focal plane at each instant of exposure. Film flattening is essential in order to reduce distortions in the resulting images.
Camera body
• The camera body essentially houses the camera drive mechanism. This drive mechanism provides the necessary force to operate the camera through its repeated cycle comprising:
Flattening the film
Tripping the shutter at the exposure station
Locking the shutter and
Advancing the film
• The energy required for the drive mechanism may be applied manually or may come from an automated electric motor. Usually, handles for carrying the camera are fitted to the camera body; it is also the camera body to which electrical connections are made.
Lens assembly
• The camera lens cone assembly comprises several parts, each with its own function. These parts include the:
 Lens
 Shutter
 Diaphragm
 Filter
• Lens: collects light rays from the object space (terrain) and brings them to focus on the focal plane.
• THE FILTER: - The filter serves the following purposes:
 It reduces the effect of atmospheric haze.
 It helps to provide uniform light distribution over the entire format.
 It protects the lens from damage and dust.
Lens assembly
• The shutter and diaphragm complement each other on their functions.
They both regulate the amount of light that is allowed to pass through the
lens. While the shutter controls the length of time that light is permitted
to pass through the lens, the diaphragm controls the size of the opening
and hence the size of the bundle of light rays that is allowed to pass
through the lens.
• The focal plane of the aerial camera is the plane in which all incoming light rays are brought to focus. Compared to the image distance, the object distance is by far greater, which implies that the bundle of rays reaching the camera lens during photography effectively comes from infinity. Aerial cameras therefore have their focus fixed for infinite object distances. This condition satisfies the lens equation:

1/o + 1/i = 1/f, and as the object distance o → ∞, 1/o → 0

• Hence the image distance i must be exactly equal to the lens focal length (f) behind the rear nodal point of the camera, i.e. the focal plane is defined by the upper surface of the focal plane frame. This is the surface upon which the film emulsion rests when an exposure is made.
Types of cameras
• Single lens frame camera
• Multi- lens frame camera
• Continuous strip camera and
• Panoramic camera
• Digital Camera
Single frame cameras
• The single lens frame camera is used almost exclusively for obtaining photographs for mapping and photo interpretation purposes because it provides the highest geometric picture quality.
• The lens of this type of camera is held fixed relative to the focal plane.
• The entire format of one exposure is exposed to light rays simultaneously with a snap of the shutter.
• The general format size is 230 x 230 mm, with a film capacity of about 120 m length.
• Single lens cameras come with different focal lengths depending on the manufacturer; nominal focal lengths include 300 mm, 210 mm, 152 mm and 88 mm.
• Note that the focal length of the camera lens determines the area of coverage.
Single lens frame cameras
Single lens frame cameras are generally classified according to the angular field of view (f.o.v.). These are:
• Narrow or small angle single lens frame camera, with average f.o.v. of 30° (f = 300 mm)
• Normal angle single lens frame camera, with 60° f.o.v. (f = 210 mm)
• Wide-angle single lens frame camera, with 90° f.o.v. (f = 152 mm)
• Super or ultra-wide-angle single lens frame camera, with 120° f.o.v. (f = 88 mm)
Field of view of a camera

The angular field of view is given as:

α = 2 arctan(d / 2f)

The angular field of view is the angle subtended at the rear nodal point by the diagonal (d) joining two opposite fiducial marks at the corners of the format.
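Assuming the standard 230 mm × 230 mm format, the camera classes above can be checked against α = 2·arctan(d/2f). The computed values land near, not exactly on, the nominal 90° and 120° class figures (Python sketch):

```python
import math

def angular_fov_deg(format_mm, focal_mm):
    """Angular field of view across the diagonal of a square format:
    alpha = 2 * arctan(d / (2 f)), with d the format diagonal."""
    d = math.hypot(format_mm, format_mm)          # diagonal of square format
    return math.degrees(2.0 * math.atan(d / (2.0 * focal_mm)))

fov_wide       = angular_fov_deg(230.0, 152.0)   # wide-angle, ~90 deg class
fov_super_wide = angular_fov_deg(230.0, 88.0)    # super-wide, ~120 deg class
```

The longer the focal length for a given format, the narrower the field of view, which is why the 300 mm lens gives the narrow-angle class and the 88 mm lens the super-wide class.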
Examples of single lens frame cameras
• Zeiss RMR 15/23
• Fair Wild RC- 6A
• Wild RC 7
• Wild RC 8
• Wild RC 9 and
• Wild RC 10
Multi-lens frame cameras
• Similar to the single lens camera except that they have two or more lenses and expose two or more pictures simultaneously, producing two or more photographs of the same survey area from above.
• These types of camera provide data suitable for environmental monitoring, mapping of natural and cultural resources, etc.
• All lenses simultaneously expose the same area, but the different lens cones contain films with emulsions that are sensitive to different regions of the electromagnetic spectrum; hence they are also commonly called multi-spectral cameras
• Differences in the resulting photographs provide clues that are useful in identifying and interpreting photographed objects
Continuous strip cameras
• In the continuous strip camera, as the film advances it steadily passes over a narrow slot in the focal plane of the camera at a rate equal to the speed of passage of ground images across the focal plane
• Thus, as the aircraft flies along the flight line, the ground directly beneath the camera is photographed in one long continuous strip of fairly uniform width.
• The speed with which the film advances is a function of the height of the camera above the ground level, the lens focal length and the ground velocity (v) of the aircraft.
• Strip cameras may use a single lens, or they may have two lenses - one pointing, say, 23° forward in the flight direction and the other pointing, say, 23° backward, although both are supposed to lie in the vertical plane
• This arrangement gives continuous stereoscopic coverage of a strip of terrain.
Panoramic cameras
• The camera photographs a strip of terrain from horizon to horizon, scanned from side to side, transverse to the direction of flight
• a panoramic camera views only a comparatively narrow angular field at any
given instant through a narrow slit.
• Ground areas are covered by either rotating the camera lens or rotating a
prism in front of the lens
• Compared to frame cameras, panoramic cameras cover a much larger ground
area
• With their narrower lens field of view, panoramic cameras produce images
with greater detail than from frame camera images.
• These factors make panoramic cameras ideal sensors in large area
photographic analyses; however, panoramic photographs have the
disadvantage that they lack the geometric fidelity of frame camera images.
• The principal advantages of the panoramic camera, in the areas of forest
studies and environmental monitoring, are its high image resolution and large
area of coverage. The principal disadvantages are its unusual image format of
115mm x 1500mm and the continuously changing photo scale.
Camera calibration
• After manufacture, cameras are calibrated to determine precise and accurate values of a number of constants called the elements of interior orientation. These values are needed so that accurate information can be determined from images. The methods for camera calibration are:
Laboratory
Field
Stellar
Laboratory methods are the most common and are often carried out by the camera manufacturers or governmental agencies.
Elements of interior orientation
• Equivalent focal length
• Calibrated focal length
• Symmetric radial lens distortion
• Decentering lens distortion
• Principal point location
• Fiducial marks co-ordinates
Elements of interior orientation
Equivalent focal length
• The focal length effective near the centre of the lens
Calibrated focal length
• The focal length that produces an overall mean distribution of lens distortions. It is the distance between the rear nodal point and the principal point of the photograph
Symmetric radial lens distortions
Lens distortions that occur along radial lines from the centre of the photograph (PP). This distortion is small but exists regardless of the care taken during lens manufacture
Decentering lens distortions
Distortions that remain after compensating for symmetric radial lens distortions
Elements of interior orientation
Principal point (PP) location
This is the location of the PP with respect to the intersection of the
fiducial axes.

Fiducial co-ordinates
These are the locations of the fiducial marks; they provide a 2D reference for the PP location as well as for measuring the positions of images on the photograph
Other parameters
• Resolution of camera ( often highest resolution is achieved at the
centre of the photograph as compared with the edges)
• Film flatness ( should not deviate by more than 0.01mm)
• Shutter efficiency
Laboratory methods for camera calibration
These methods are the most common and comprise:
 Multicollimator
 Goniometer
The above methods are mainly used for analogue cameras
Multi-collimator method
• Involves a number of collimators (each representing a different target)
• Each collimator consists of a lens and a cross
• Individual collimators are mounted such that the optical axes of neighbouring collimators intersect at a known angle θ.
• The camera is placed such that its image plane is perpendicular to the optical axis of the central collimator/target
• Secondly, the lens (front nodal point) of the camera should be where the axes of the collimators intersect. This means the image of the central collimator (g), called the principal point of autocollimation, is near the principal point (PP) and also near the intersection of the fiducial marks
• Normally, the collimators are arranged in more than one plane; the planes should be perpendicular
Camera calibration
• The camera is further oriented such that the images of the
collimators, when photographed, lie along the diagonals of the
camera (film) format
Determination of the calibrated focal length
• Since the angle θ between the collimator axes is known, we
consider the images of the 4 collimators adjacent to the central one.
• Measure the distance r of each of the 4 images from the principal
point (image of the central collimator)
• Compute the corresponding focal length for each image using

f = r / tan θ

• Take the average of the resulting f values; this gives the calibrated
focal length.
• After determining the calibrated focal length, use its value to compute
the theoretical distances of the collimator images, i.e.

r′ = fcalibrated · tan θ

• We can then compute the radial distortion as the difference between
the measured and theoretical distances, i.e.

Δr = rmeasured − r′

• If tangential distortion exists, the images will not lie in a straight line
and the offset can be determined.
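The calibration computation above can be sketched in Python. The angle θ and the four measured distances below are hypothetical illustration values, not figures from a real calibration report:

```python
import math

def calibrated_focal_length(measured_mm, theta_deg):
    """Average f = r / tan(theta) over the collimator images nearest the PP."""
    t = math.tan(math.radians(theta_deg))
    return sum(r / t for r in measured_mm) / len(measured_mm)

def radial_distortions(measured_mm, theta_deg, f_cal):
    """dr = r_measured - r_theoretical, with r_theoretical = f_cal * tan(theta)."""
    r_theory = f_cal * math.tan(math.radians(theta_deg))
    return [r - r_theory for r in measured_mm]

# Hypothetical distances (mm) of the 4 images from the PP, at theta = 7.5 deg
measured = [20.235, 20.239, 20.227, 20.231]
f_cal = calibrated_focal_length(measured, 7.5)
dr = radial_distortions(measured, 7.5, f_cal)
```

Because the calibrated focal length is the mean of the four individual values, the four residual distortions computed against it sum to zero by construction; real reports tabulate Δr at many radial distances, not just four.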
Principal point location
• This requires establishing the relationship between the fiducial axes
with PP location. The offset can then be determined.
Sources of errors in measured photo co-ordinates
• Film distortion due to shrinkage, expansion, and lack of flatness
• Failure of fiducial axes to intersect at principal point
• Lens distortions
• Atmospheric refraction correction
• Earth curvature
Expansion-shrinkage , non flatness problems
• Photographic materials do shrink or expand resulting into errors
• Lack of film flatness causes errors
• Some materials upon which photographic prints are made have
strong stability such as glass or polyester. However, materials such
as paper have low stability
• The function of shrinkage and expansion is temperature, humidity,
paper type and thickness as well as the method used to dry to the
prints. Hot drum dryers or hanging the prints to dry results in high
distortions. Prints air-dried lying flat at room temperature have low
distortions
Correction due to expansion and shrinkage
• Use the co-ordinates of the fiducial marks (calibrated values and
values measured from the photo) and apply the following equations:

x′ = (xc/xm)·x    y′ = (yc/ym)·y

where x′ and y′ are the corrected co-ordinates of a point, x and y are
the measured photo co-ordinates, xc and yc are the calibrated fiducial
distances, and xm and ym are the measured fiducial distances.
Example
The measured x and y fiducial distances are 233.8mm and 233.5mm
respectively. The corresponding x and y calibrated distances are
232.604mm and 232.621mm respectively. Compute the corrected
values of measured photo co-ordinates of 65.7mm and 61.8mm for
the x and y respectively
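The example above can be worked through as a minimal Python sketch (variable names are illustrative): each measured co-ordinate is scaled by the ratio of calibrated to measured fiducial distance along its axis.

```python
# Fiducial distances from the calibration report and from the photo (mm)
xc, yc = 232.604, 232.621   # calibrated fiducial distances
xm, ym = 233.800, 233.500   # measured fiducial distances

# Measured photo co-ordinates of the point (mm)
x, y = 65.7, 61.8

# Scale each axis independently to correct for shrinkage/expansion
x_corr = (xc / xm) * x      # ~ 65.364 mm
y_corr = (yc / ym) * y      # ~ 61.567 mm
```

The corrected co-ordinates come out slightly smaller than the measured ones because the film expanded (measured fiducial distances exceed the calibrated ones).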
Reduction of the co-ordinates to the origin
• The offset of the principal point from the intersection of the
fiducial axes needs to be known from camera calibration.
Lens distortions
Lens distortion displaces images from their ideal locations. It has two
components: symmetric radial distortion and decentering distortion.
Symmetric radial lens distortion is a function of lens manufacture and
cannot be avoided. Decentering distortion results from failure to align
the optical axes of the lens elements during assembly, not from the
actual design.
Radial lens distortions
• The ray changes its direction after passing the rear nodal point
• Occurs along radial lines from the PP
• Increases as we move away from the PP
• Normally accounted for after reducing measurements to the principal
point and correcting for shrinkage and expansion
Radial lens and decentering distortions
Decentering lens distortion has two components: radial and
tangential.
Radial lens distortion
Radial lens distortion is modelled with an odd-powered polynomial:

Δr = k1·r + k2·r³ + k3·r⁵ + k4·r⁷

Δr = radial lens distortion
k1, k2, k3, k4 = polynomial coefficients
r = radial distance from the principal point

Procedure
 Reduce the co-ordinates to the principal point
 Compute the radial distance r
Radial lens distortion
k1 … kn are coefficients that define the shape of the distortion curve
and are determined through camera calibration
Example
• A camera calibration report shows a calibrated focal length of
153.206 mm and co-ordinates of the calibrated pp as 0.008mm and
-0.001mm respectively for x and y axis. Field of view (FOV) = 30
degrees. Compute the corrected image co-ordinates of a point
whose co-ordinates are x = 62.579 mm, y = -80.916 mm. Assume k1,
k2, k3 and k4 to be 0.2296, -35.89, 1018 and 12100 respectively
Approach
 Reduce the co-ordinates to the PP
 Compute the radial distance r
 Compute Δr using the polynomial
 Compute the errors in x and y
 Compute the corrected co-ordinates of a point.
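The approach above can be sketched in Python. The function name is hypothetical, and because the slide does not state the units or scaling of the k coefficients, the call at the end uses the example's co-ordinates and PP offset with placeholder (zero) coefficients rather than the quoted k values:

```python
import math

def correct_radial(x, y, xp, yp, k):
    """Correct a photo co-ordinate for symmetric radial lens distortion.

    x, y  : measured photo co-ordinates (mm)
    xp, yp: principal point offset from the fiducial origin (mm)
    k     : coefficients [k1, k2, k3, k4] of dr = k1*r + k2*r^3 + ...
    """
    xb, yb = x - xp, y - yp                  # 1. reduce to the principal point
    r = math.hypot(xb, yb)                   # 2. radial distance
    dr = sum(ki * r ** (2 * i + 1) for i, ki in enumerate(k))  # 3. polynomial
    dx, dy = dr * xb / r, dr * yb / r        # 4. resolve dr along x and y
    return xb - dx, yb - dy                  # 5. corrected co-ordinates

# Example geometry with placeholder zero coefficients (units of k not stated)
xr, yr = correct_radial(62.579, -80.916, 0.008, -0.001, [0.0, 0.0, 0.0, 0.0])
```

With zero coefficients the result is simply the co-ordinates reduced to the principal point; substituting calibrated k values (in consistent units) applies the full correction.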
Decentering lens distortion
A commonly used model (Conrady–Brown) expresses the decentering
corrections as

Δx = [P1·(r² + 2x²) + 2·P2·x·y]·(1 + P3·r²)
Δy = [P2·(r² + 2y²) + 2·P1·x·y]·(1 + P3·r²)

where P1, P2 and P3 are the decentering lens distortion parameters
determined during camera calibration.
Atmospheric correction
• Atmospheric refraction is due to the variation of
pressure, temperature and humidity of the air
along the path of the ray. The refractive index
of air decreases with height, which leads to the
image ray being curved and not straight. For
vertical imagery, the refractive effect is radial.
Atmospheric refraction distortion
• The light ray from the object point to the perspective center passes
through layers with different temperature, pressure, and humidity.
• Each layer has its own refractive index.
• Consequently, the light ray will follow a curved and not a straight
path.
• The distortion occurs along the radial direction from the nadir
point.
• It increases as the radial distance increases.
Atmospheric refraction correction
For a vertical photograph, the following equation can be used:

Δr = r·K·(1 + r²/c²)

K is the atmospheric refraction coefficient and c is the camera
constant (focal length). Image points are always displaced
outwardly along the radial direction.
The coefficient K varies with the meteorological conditions at the time of
exposure and with the wavelengths to which the photographic emulsion is
sensitive.
 K can be estimated using the following equation (H and h are measured
in km and represent the flying height above mean sea level and the
terrain elevation respectively):

K = [ 2410·H/(H² − 6·H + 250) − 2410·h/(h² − 6·h + 250) · (h/H) ] × 10⁻⁶
Atmospheric refraction correction
• The refraction corrections in the x and y directions follow from
the radial correction:

Δx = Δr·(x/r)    Δy = Δr·(y/r)
Example
• Given an aerial photograph (c = 85 mm) taken at 9100 m above sea
level, determine the atmospheric correction at radii 2, 4 and 8 cm. The
earth surface is flat and lies at 700 m.
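The example can be worked in Python using the K estimation formula and the radial correction given above (the function names are illustrative; the camera constant is taken as 85 mm):

```python
def refraction_k(H, h):
    """Atmospheric refraction coefficient K; H and h in km above MSL."""
    return (2410 * H / (H**2 - 6 * H + 250)
            - 2410 * h / (h**2 - 6 * h + 250) * (h / H)) * 1e-6

def refraction_dr(r_mm, c_mm, K):
    """Radial displacement dr = r*K*(1 + r^2/c^2); image units in mm."""
    return r_mm * K * (1 + r_mm**2 / c_mm**2)

K = refraction_k(H=9.1, h=0.7)      # flying height 9100 m, terrain at 700 m
for r_cm in (2, 4, 8):
    dr = refraction_dr(r_cm * 10.0, 85.0, K)   # c = 85 mm
    print(f"r = {r_cm} cm: dr = {dr * 1000:.1f} micrometres")
```

For this geometry K comes out near 78 × 10⁻⁶, and the displacement at r = 2 cm is on the order of a couple of micrometres, growing rapidly with r because of the cubic term.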
Correction due to earth curvature
The correction of the co-ordinates due to the earth curvature is
computed using the following equation:

Δr = H′·r³ / (2·R·f²)

 H′ is the flying height above ground
 r is the radial distance to the point
 R is the earth’s radius
 f is the focal length
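The correction can be sketched as follows; the function name, the example values (r = 80 mm, f = 152 mm, H′ = 2000 m), and the Earth radius are illustrative assumptions:

```python
def earth_curvature_dr(r_mm, f_mm, H_m, R_m=6_370_000.0):
    """Earth curvature displacement dr = H' * r^3 / (2 * R * f^2).

    r and f are in mm, H' and R in metres; H'/R is dimensionless and
    r^3/f^2 carries mm, so the result is in mm.
    """
    return H_m * r_mm**3 / (2.0 * R_m * f_mm**2)

dr_80 = earth_curvature_dr(80.0, 152.0, 2000.0)   # a few micrometres
```

Like the refraction correction, the displacement grows with the cube of the radial distance, so it matters most near the photo edges and for high flying heights.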
LSG2203: PHOTOGRAMMETRY I
SURVEY MISSION AND FLIGHT PLANNING

Dr. John R. Otukei
Department of Geomatics and Land Management - Makerere University
Introduction
Survey missions are very expensive, requiring:
 Specialised crewmen
 A specially adapted aircraft
 An air camera
 Other equipment (computers, GPS, etc.)

A survey mission begins when the aircraft takes off and ends when it
has landed, by which time the study area should have been covered by
a series of overlapping photos.
Project planning
Project planning for photogrammetry requires the following
to be undertaken:

• Feasibility study

• Needs and constraints analysis

• Development of a flight plan

• Planning of ground control

• Estimation of project cost

• Planning of processing steps


These processes are interdependent and interrelated. They cannot be
performed in isolation.
Feasibility study
The aim of the feasibility analysis is to determine the
suitability of photogrammetric mensuration (measurement),
documentation and/or photogrammetric interpretation to the
proposed project. Remember that to be able to measure
objects in a photographic image, one has to be able to
recognise and identify the object by interpreting the image
content. There are a number of instances in which the
measurement of objects from photographic images is of
secondary importance to their identification and
interpretation. In such instances it may be acceptable to
procure photography of lower accuracy, so long as the
interpretation aspect is not compromised in any way.
Feasibility study
• The accuracy required (both from an interpretation and
measurement point of view) and an assessment of
whether or not this will be fulfilled given the circumstances
of a particular project.
• The complexity of the object or terrain to be photographed

• The equipment available

• The personnel and skills available to conduct the


photographic mission and to process the data
• The time frame in which it has to be completed

• The finances available.


Points to note during feasibility study
• Photogrammetric process is expensive
• There is need to assess whether data collection by
photogrammetry is the most economical approach.
When to use photogrammetry
• Objects to be measured are clearly recognizable in the
photographs.
• Photogrammetry becomes more favourable as the difficulty of
terrestrial measurement increases. For example, in a situation
where topographic mapping of a very large, rugged and remote
area is to be performed it will probably prove more economical
to use photogrammetry than ground-based survey
measurements.
• The larger and more regular the area, the more economical is
photogrammetry. Small and irregular areas are less suitable.
The irregularity of the area or object has particular pertinence
to close-range applications where it is often necessary to
photograph irregular objects. The more irregular the object is,
the more photographs will be needed to ensure sufficient
photographic and stereo coverage.
When to use photogrammetry
• As the number of points to determine increases,
photogrammetry becomes more favourable. It is much easier to
gather large amounts of information and measurements quickly
from photographs, than it is from more traditional surveying
techniques.

• Photogrammetry is a multi-use method. Numerous and diverse


datasets can be produced from the photography including a
variety of 2D and 3D point, line and polygon measurements
(DTMs, contour lines, interpreted data such as rivers, roads,
houses, building outlines, architectural details (for close range),
orthophotos, photomosaics, etc.). This makes photogrammetry an
attractive option when a lot of different products are required,
especially when there are interpretation aspects in addition to
the measurement aspects. One is then able to derive a number
of different datasets from the original raw data and gain
maximum value from the photos.
Needs analysis
A needs analysis is linked to feasibility study and aims at
determining factors such as:
• The amount of detail that needs to be discernible from
the image.
• The degree of contrast that needs to be distinguishable
between different objects in the image.
• The type of film to be used (pan-chromatic, colour, IR)

• The accuracy of measurement.

• The products that are to be produced from the


interpretation and measurement process
Flight planning
Flight planning is a term that has its origins in aerial
photogrammetry and deals with technical specifications
such as required photo scale, flying height, camera specs,
base length, etc.
Flight plan documentation
• project description

• image scale number (to achieve minimum resolution and accuracy)
• camera constant

• flying height (minimum and maximum possible)

• base distance

• forward and side-lap

• photograph exposure time


Flight plan documentation
• time between exposures

• flight plan

• ground area covered by the block

• estimated number of photographs, including the number of
strips and the number of photos per strip
• time of day and season of flight

• choice of aircraft and

• crew constellation
Flight plan
• Usually drawn on a map or in CAD

• Flight lines aligned with ground co-ordinate axis


Flight plan
Flights are usually designed with:
• flying direction along one of the ground co-ordinate system
axes (X or Y)
• A = distance between flight lines
• B = base
• c = principal distance
• s = image side
• h = flying height above ground
• Z = ground height (above datum)
• Zo = absolute flying height
• v = flying speed over the ground
• L = length of the strip of block
• Q = side length of block
Flight planning parameters

Photo scale number: mb = h/c
Image side on the ground: S = s·mb
Base in the photograph: b = B/mb
Flying height above ground: h = c·mb
Absolute flying height: Z0 = h + Z
Flight planning parameters
Overlap (%): l = ((S − B)/S)·100 = (1 − B/S)·100
Side lap (%): q = ((S − A)/S)·100 = (1 − A/S)·100
Ground area of one photograph: Fb = S² = s²·mb²
Base length for l% overlap: B = S·(1 − l/100)
Distance between strips for q% side lap: A = S·(1 − q/100)
Number of models in a strip: nm = L/B + 1
Flight planning parameters

Number of photographs in a strip: nb = nm + 1
Number of strips in a block: ns = Q/A + 1
Area of a stereoscopic model: Fm = (S − B)·S
New area for each model in a block: Fn = A·B
Time between photographs: t [s] = B [m] / v [m/s] ≥ 2.0
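The parameters above can be combined into a small planning sketch. The function and the mission values are hypothetical, and the model/strip counts are rounded up with `ceil` before adding 1 (a conservative reading of the nm and ns formulas, which the slides leave as fractions):

```python
import math

def flight_plan(s_mm, c_mm, h_m, l_pct, q_pct, L_m, Q_m, v_ms):
    """Block layout from format side s, camera constant c, flying height h,
    overlaps l/q (%), block length L and width Q (m), ground speed v (m/s)."""
    m_b = h_m / (c_mm / 1000.0)          # photo scale number mb = h/c
    S = (s_mm / 1000.0) * m_b            # image side on the ground (m)
    B = S * (1 - l_pct / 100.0)          # base for l% forward overlap (m)
    A = S * (1 - q_pct / 100.0)          # strip spacing for q% side lap (m)
    n_m = math.ceil(L_m / B) + 1         # models per strip (rounded up)
    n_b = n_m + 1                        # photographs per strip
    n_s = math.ceil(Q_m / A) + 1         # strips in the block (rounded up)
    dt = max(B / v_ms, 2.0)              # time between exposures (>= 2 s)
    return dict(m_b=m_b, S=S, B=B, A=A, n_m=n_m, n_b=n_b, n_s=n_s, dt=dt)

# Hypothetical mission: 230 mm format, c = 152 mm, h = 1520 m (1:10 000),
# 60% forward overlap, 30% side lap, block 10 km x 5 km, v = 70 m/s
plan = flight_plan(230, 152, 1520, 60, 30, 10_000, 5_000, 70)
```

For this mission the sketch gives S = 2300 m, B = 920 m, A = 1610 m, 12 models (13 photographs) per strip, 5 strips, and about 13 s between exposures.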
Selection of flying height
Flying height is one of the major parameters in flight design
and depends on:
 The desired scale
 The relief and the tilt
 Photogrammetric equipment used for acquisition and
processing of the data.
• C-factor = Flying height/contour interval (usually, the
contour interval is selected, followed by the corresponding
C-factor. These two values help to compute the flying
height).
Factors affecting flight planning
Project purpose
Camera
 Image/photo scale
Ground coverage
 image motion
 strip interval
 season and time of the day.
Factors affecting Flight planning
Purpose
• Only with the purpose known can the optimum equipment
and procedures be selected.
• Metric Vs pictorial qualities
• Metric photos are required for quantitative
photogrammetric measurement
• Photos with high pictorial qualities are good for qualitative
analysis e.g mosaic formation and interpretation.
• Metric photos are obtained with calibrated cameras with a
good B/H ratio that allows larger parallactic angles.
22 Photogrammetry - Dr. George Sithole

Project Planning
Project purpose: Topographic map compilation
• Most common photogrammetric application

• Employs stereoscopic plotting instrument

• WA (c = 152mm) lenses favoured to obtain a B/H ratio that enhances
elevation measurement accuracy
• If flat, SWA (c = 88mm) can be used

• If forest area, NA (c = 210mm) used to allow operator to measure


between the trees
• Standard are 60% forward overlap and 15-30% sidelap, providing for
good stereo coverage and sufficient leeway to prevent gaps
• Flight line orientation dictated by economy rather than geometric
consideration
Project Planning
Project purpose: Photomosaics
• Use the longest length lens available and fly as high as
feasible to give desired photo scale
• Reasons: (1) limit relief displacement; (2) limit
photographic tilt effects; (3) limit variations in scale
between photographs
• As relief and tilt displacement are proportional to the
distance from the centre of the photo, the problem of
mismatching photos can be reduced by increasing the
overlap and sidelap. If the ground is flat 60% and 15-30%
are standard
Project Planning
Project purpose: Orthoimagery
• Generally same photographic parameters as for map
compilation

• Flight line orientation is often normal to the general trend


of the topography (for analogue orthophoto production)

• If the orthoimages will form a mosaic, they should be


taken with constant sun angle and at the same time of the
year (to minimise radiometric (tone, texture) differences)
Project Planning
Project purpose: Triangulation
• Flight plan governed by topographic mapping
consideration (this is often the final objective)

• To enhance accuracy, 60% in both directions often used.


Then internal tie points will appear on 9 photographs
giving 9 collinearity equations for the point.
Project Planning
Project purpose: Cadastral surveys
• As for triangulation. Higher accuracy requirements
demand 60% overlap in both directions for establishing
fill-in ground control points
Project Planning

Choice of scale
• Selection of a reliable photograph scale is of
major importance, because the quality of the final
digital mapping product hinges primarily upon it.
Scale selection can be done on the basis of:
Required planimetric details
 Required Vertical and horizontal accuracy
 Flying height
 Project economy optimization
Factors affecting scale choice
Generally depends on the purpose and the restrictions on the
flying height.
 superwide angle (8.5/23)
Wide angle (15/23)
Normal angle ( 30/23)
Factors affecting flight planning
Ground coverage:
Ground coverage can be estimated from the endlap and side
lap
Endlap
Minimum 60%
For aerotriangulation or cadastral purposes, 80% - 90%
 Side lap
minimum 20%
commonly (25-30)%
In order to save height control points and to increase the
accuracy and reliability of a block, sidelaps of up to 60% may
be employed.
Factors affecting flight planning
Strip interval:
 Depends on the scale
 Sidelap

Generally, aim for minimum number of strips which should


be aligned in north-south or east-west direction.
Factors affecting flight planning
Image motion:
 Image motion negatively influences image resolution by
blurring an object’s image formation in the film.
If image movement is not corrected for, images of
ground objects will appear as lines on the
photograph, lying in the direction of flight
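The magnitude of the blur follows from a standard relation not spelled out on the slide: the ground distance flown during the exposure, divided by the photo scale number. The function name and values below are hypothetical:

```python
def image_motion_mm(v_ms, t_exp_s, m_b):
    """Image motion on the photo in mm: ground distance moved during the
    exposure (v * t, in metres) divided by the scale number m_b."""
    return (v_ms * t_exp_s) / m_b * 1000.0   # metres on ground -> mm on photo

blur = image_motion_mm(70.0, 1 / 500, 10_000)   # 70 m/s, 1/500 s, 1:10 000
```

At 70 m/s with a 1/500 s exposure at 1:10 000, the blur is 0.014 mm (14 µm), which is why short exposures or forward-motion compensation matter at large photo scales.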
Factors affecting flight planning
Weather conditions:
 An ideal day should have no clouds; less than 10% cloud
cover is good enough
 Over 10% cloud cover is still workable if the clouds are
above the planned flight height, BUT the shadows they cast
might affect the quality of the photos.
 Industrialised areas with dust, smoke, etc. should be
photographed after rain, which helps to clear the
atmosphere.
 Windy days should be avoided since they cause image
motion as well as difficulty in keeping the camera vertical.
Factors affecting flight planning
Season of the year:
 not during snow period as it covers the objects of interest
 Spring for winter countries is the best since the trees will
have full leaves.
 The sun's angle should be considered. A low sun angle
creates longer shadows which obscure details.
Ideal conditions for photography
• For topographical applications, the best flying time is
before deciduous trees sprout.
• Aim for seasons when haze is at a minimum
• Best flying time is towards midday to avoid long
shadows
• ± 5° in , ± 3° in and ± 15° in .
• A variation of ±2% in the flying height is acceptable
• The track of the aircraft can, with visual navigation and
good navigation information, be held within ± 1cm in
the photograph
Ground control points
• The objective of ground control is to determine the ground
position of points that can be located in aerial photographs.
The ground position of a point can be defined by its horizontal
position with respect to a horizontal datum or by its vertical
position with respect to a vertical datum, or both. In
photogrammetry – especially aerial photogrammetry, it is not
uncommon to use different points to provide horizontal and
vertical control for a project.
• Ground control is necessary in order to establish the position
and orientation of each photograph in space relative to the
object space coordinate system. Ground control also enables
the photogrammetrist to establish the elements of exterior
orientation and provide a basis for extending control
photogrammetrically.

Classification of ground control
Basic control
 Photo control.

 The basic control is the basic network of monuments (i.e. trig stations,
town survey marks, height benchmarks, etc.) that form part of the
geodetic control network. In a close range application it is usually
necessary to establish a local control network. These control points
will not necessarily be used as control points used to establish the
absolute orientation of the photographs, but they are necessary to
establish the positions of the points used in photo control.
• The photo control points are points whose images can be identified in
the photos, and whose positions are determined from the basic
control. Depending on the scale of the photography, it may be feasible
to incorporate some of the existing basic control as photo control.
Photo-control
• Pre-marked
• Post-marked (natural features). Not suitable for high accuracy
work
• Often the surveying of photo control is only done after the
photography has been acquired and developed.
• Size of marks determined by photo scale.
• Accuracy of photo-control determines accuracy of exterior
orientation
• The control phase of photogrammetry, in general, may
account for as much as 50% of the total cost of the project
Establishment of photo control
• Hence, each control point must definitely contribute to the operation:
• It must lie in the correct position on the photograph in order to
accomplish its purpose
• It must be positively identifiable in the photo and on the ground
• The image of the point must be sharp and well-defined to permit
accurate measurement and must contrast well with the background
• Be symmetrical if possible
• Not be in shadow
• It should, if possible, be easily accessible on the ground
• The point must be properly marked and documented in the field.
• Each stereo model should contain 3 horizontal and 4 vertical control
points.
Establishment of photo control
• Redundant control points allow for the detection and isolation of
erroneous control points
• The more ground control, and the higher the accuracy of the
survey (the more expensive the project becomes).
• Control must be located so that mapping does not occur beyond
the limits of the control.
survey mission and Flight
Planning
Introduction
Survey missions are very expensive:
 specialised crewmen
Specially adopted aircraft
Air camera
Other equipments (computers, GPS e.t.c)
Survey missions begin when aircraft takes of and
ends when it has landed.
This assumes that the study area has been covered
by a series of overlapping photos
Project planning
Project planning for photogrammetry requires the
following to be undertaken: FND-PEP

• Feasibility study
• Needs and constraints analysis
• Development of a flight plan
• Planning of ground control
• Estimation of project cost
• Planning of processing steps
These processes are interdependent and interrelated . They can not
be performed in isolation.
Feasibilty study
The aim of the feasibility analysis is to determine the
suitability of photogrammetric mensuration
(measurement), documentation and/or photogrammetric
interpretation to the proposed project. Remember that to
be able to measure objects in a photographic image, one
has to be able to recognise and identify the object by
interpreting the image content. There are a number of
instances in which the measurement of objects from
photographic images is of secondary importance to their
identification and interpretation. In such instances it
may be acceptable to procure photography of lower
accuracy, so long as the interpretation aspect is not
compromised in any way.
Feasibilty study
• The accuracy required (both from an interpretation and
measurement point of view) and an assessment of whether or not
this will be fulfilled given the circumstances of a particular project.

• The complexity of the object or terrain to be photographed

• The equipment available ATEP-TF

• The personnel and skills available to conduct the photographic


mission and to process the data

• The time frame in which it has to be completed

• The finances available.

Points to note during feasibilty study

 Photogrammetric process is expensive


 There is need to assess whether data collection by photogrammetry is the
most economical approach.
When to use photogrammetry
• Objects to be measured are clearly recognizable in the
photographs.
• Photogrammetry becomes more favourable as the difficulty of
terrestrial measurement increases. For example, in a situation where
topographic mapping of a very large, rugged and remote area is to
be performed it will probably prove more economical to use
photogrammetry than ground-based survey measurements.
• The larger and more regular the area, the more economical is
photogrammetry. Small and irregular areas are less suitable. The
irregularity of the area or object has particular pertinence to close-
range applications where it is often necessary to photograph
irregular objects. The more irregular the object is, the more
photographs will be needed to ensure sufficient photographic and
stereo coverage.
When to use photogrammetry
• As the number of points to determine increases, photogrammetry
becomes more favourable. It is much easier to gather large amounts of
information and measurements quickly from photographs, than it is
from more traditional surveying techniques.

• Photogrammetry is a multi-use method. Numerous and diverse


datasets can be produced from the photography including a variety of
2D and 3D point, line and polygon measurements (DTMs, contour
lines, interpreted data such as rivers, roads, houses, building outlines,
architectural details (for close range), orthophotos, photomosaics, etc.
This make photogrammetry an attractive option when a lot of different
products are required, especially when there are interpretation aspects
in addition to the measurement aspects. One is then able to derive a
number of different datasets from the original raw data and gain
maximum value from the photos.
Needs analysis
A needs analysis is linked to feasibility study and aims at
determining factors such as: DRAF-P

• The amount of detail that needs to be discernable from


the image.
• The degree of contrast that needs to be
distinguishable between different objects in the image.
• The type of film to be used (pan-chromatic, color, IR)
• The accuracy of measurement.
• The products that are to be produced from the
interpretation and measurement process.
Flight planning
Flight planning is a term that has it’s origins in
aerial photogrammetry and deals with technical
specifications such as required photo scale,
flying height, camera specs, base length, etc.
Flight plan docummentation
• project description • flight plan
• image scale number (to • ground area covered by the
achieve minimum resolution,
accuracy) block
• camera constant • estimated number of
• flying height (minimum and photographs include,
maximum possible) depending number of strips and
on the capability of the aircraft. number of photos per strip
• base distance • time of day and season of
• forward and side-lap flight
• photograph exposure time(time • choice of aircraft and
taken to capture a photo)
• crew constellation
• time between exposures
Flight plan
Usually drawn on a map or in CAD
Flight lines aligned with ground co- Flights are usually designed
ordinate axis with:
• flying direction along one of the
ground co-ordinate system axes
(X or Y)
• A = distance between flight lines
• B = base
• c = principal distance
• s = image side
• h = flying height above ground
• Z = ground height (above datum)
• Zo = absolute flying height
• v = flying speed over the ground
• L = length of the strip of block
• Q = side length of block
Flight planning parameters
Photo – scale number: •B = base
mb=h/c • c = principal distance
Image side in the ground • s = image side
S=s* mb • h = flying height above
Base in the photograph ground
b=B/ mb • Z = ground height (above
Flying height above ground datum)
h=c* mb • Zo = absolute flying height
Absolute flying height
Z0=h+Z
Flight planning parameters
Overlap (%)

S B  B
l *100 1  *100
S  S
Side lap (%)
SA  A
q *100 1  *100
S  S
Ground area of one photograph Fb=S2=s2*mb2

 l 
Base length for 1% overlap B  S *1 
 100

 q 
Distance between strips for A  S *1 
 100
q% side lap
L 
Number of models in a strip nm   1
B 
Flight planning parameters
Number of photographs in a strip nbnm1

Q 
Number of strips in a block nS  1
A 
Area of stereoscopic model Fm=(S-B)*S
New area for each model in a block Fn=A*B
Bm
Time between photographs t s   2.0
m / s
Selection of flying height
Flying height is one of the major parameters in
flight design and depends on:
 The desired scale
 The relief and the tilt
 Photogrammetric equipment used for acquisition
and processing of the data(some aircrafts have a
max height of flight).
• C-factor = Flying height/contour interval
(usually, the contour interval is selected, followed
by the corresponding C-factor. These two values
help to compute the flying height).
Factors affecting flight planning

1. Project purpose PIG-CISS


2. Camera
3. Image/photo scale
4. Ground coverage
5. image motion
6. strip interval
7. season and time of the day.
Factors affecting Flight planning
Purpose
• Only with the purpose known can the optimum
equipment and procedures be selected.
• Metric Vs pictorial qualities
• Metric photos are required for quantitative
photogrammetric measurement
• Photos with high pictorial qualities are good for
qualitative analysis e.g mosaic formation and
interpretation.
• Metric photos are obatined with calibrated cameras
with a good B/H ratio that allows larger parallactic
angles.
Project Planning
Project purpose: Topographic map compilation
• Most common photogrammetric application employs stereoscopic
plotting instrument
• WA wide (c = 152mm) lenses favored to obtain BH ratio that
enhances elevation measurement accuracy
• If flat, S supperWA (c = 88mm) can be used
• If forest area, N narrowA (c = 210mm) used to allow operator to
measure between the trees
• Standard are 60% forward overlap and 15-30% side-lap, providing
for good stereo coverage and sufficient leeway or prevent gaps
• Flight line orientation dictated by economy rather than geometric
consideration

Photogrammetry - Dr. George


17
Sithole
Project Planning
Project purpose: Photomosaics
• Use the longest length lens available and fly as high as feasible to give
desired photo scale

• Reasons: (1) limit relief displacement; (2) limit photographic tilt effects; (3)
limit variations in scale between photographs

• As relief and tilt displacement are proportional to the distance from the
centre of the photo, the problem of mismatching photos can be reduced by
increasing the overlap and sidelap. If the ground is flat 60% and 15-30% are
standard

Project purpose: Orthoimagery


• Generally same photographic parameters as for map compilation

• Flight line orientation is often normal to the general trend of the topography
(for analogue orthophoto production)

• If the orthoimages will form a mosaic, they should be taken with constant
sun angle and at the same time of the year (to minimise radiometric (tone,
texture) differences)
Project Planning
Project purpose: Triangulation
• Flight plan governed by topographic mapping consideration
(this is often the final objective)
• To enhance accuracy, 60% in both directions often used.
Then internal tie points will appear on 9 photographs giving 9
collinearity equations for the point.

Project purpose: Cadastral surveys


• As for triangulation. Higher accuracy requirements demand
60% overlap in both direction for establishing fill-in ground
control points
Project Planning
Choice of scale
• Selection of a reliable photograph scale is of major importance,
because the quality of the final digital mapping product hinges
primarily upon it. Scale selection can be done on the basis of:
 Required planimetric details
 Required Vertical and horizontal accuracy
 Flying height
 Project economy optimization
Factors affecting scale choice
Generally depends of purpose and the restrictions of the flying height.
 superwide angle (8.5/23)
 Wide angle (15/23)
 Normal angle ( 30/23)
Factors affecting flight planning
Ground coverage:
 Ground coverage can be estimated from the endlap and side lap
 Endlap
Minimum 60%
For aerotriangulation or cadastral purposes, 80% - 90%
 Side lap
minimum 20%
commonly (25-30)%
 In order to save height control points and to increase the accuracy and
reliability of a block, side-laps of up to 60% may be employed.

Strip interval:
 Depends on the scale
 Sidelap
Generally, aim for the minimum number of strips, aligned in a north-south
or east-west direction.
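The strip spacing and air base in turn give the minimum number of strips and photos for a rectangular block. The sketch below uses hypothetical numbers and the common practice of adding spare exposures beyond each end of a strip:

```python
import math

def plan_block(width_m, length_m, strip_spacing_m, air_base_m,
               extra_photos_per_strip=2):
    """Minimum strips and total photos to cover a width x length block.

    Strips run along the block length; extra_photos_per_strip adds the
    customary spare exposures beyond the ends of each strip.
    """
    n_strips = math.ceil(width_m / strip_spacing_m) + 1
    photos_per_strip = (math.ceil(length_m / air_base_m) + 1
                        + extra_photos_per_strip)
    return n_strips, n_strips * photos_per_strip

# 10 km x 20 km block, 1725 m strip spacing, 920 m air base:
print(plan_block(10_000, 20_000, 1725, 920))  # (7, 175)
```

Note how a modest increase in sidelap (smaller strip spacing) adds whole strips, which is why flight plans aim for the minimum number of strips.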
Factors affecting flight planning
Image motion:
 Image motion degrades image resolution by blurring an
object’s image on the film.
 If image motion is not corrected for, images of ground objects will
appear as streaks on the photograph, elongated in the direction of flight
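The blur on the film equals the ground distance flown during the exposure, scaled down by the photo scale: d = v·t / s, with v the ground speed, t the shutter time and s the scale denominator. A hedged sketch with hypothetical values:

```python
def image_motion_mm(ground_speed_ms, exposure_s, scale_denominator):
    """Image blur on the film (mm) caused by aircraft motion during exposure."""
    ground_distance_m = ground_speed_ms * exposure_s
    return ground_distance_m / scale_denominator * 1000  # m -> mm

# 70 m/s ground speed, 1/500 s shutter, photo scale 1:10 000:
print(image_motion_mm(70, 1 / 500, 10_000))  # ≈ 0.014 mm of blur
```

Blur of this magnitude is below typical film resolution; forward-motion compensation becomes necessary at larger scales or slower shutter speeds.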

Weather conditions:
 An ideal day has no clouds; less than 10% cloud cover is good
enough
 Over 10% cloud cover is still workable if the clouds are above the planned
flight height BUT... the shadows they cast might affect the quality of the photos.
 Industrialised areas with dust, smoke etc. should be photographed after
rain, which helps to clear the atmosphere.
 Windy days should be avoided since they cause image motion as well as
difficulty in keeping the camera vertical.
Factors affecting flight planning
Season of the year:
 Not during the snow period, as snow covers the
objects of interest
 In countries with snowy winters, early spring is
best, before the trees are in full leaf.
 The sun’s angle should be considered. A low sun
angle creates longer shadows which obscure
details.
Ideal conditions for photography
• For topographical applications, best flying times is
before deciduous trees sprout.
• Aim for seasons when haze is at a minimum
• Best flying time is towards midday to avoid long
shadows
• ± 5° in , ± 3° in and ± 15° in .
• A variation of ±2% in the flying height is acceptable
• The track of the aircraft can, with visual navigation
and good navigation information, be held to within
± 1 cm at photo scale
Ground control points
• The objective of ground control is to determine the ground
position of points that can be located in aerial photographs.
The ground position of a point can be defined by its
horizontal position with respect to a horizontal datum or by
its vertical position with respect to a vertical datum, or both.
In photogrammetry, especially aerial photogrammetry, it is
common to use different points to provide horizontal and
vertical control for a project.
• Ground control is necessary in order to establish the
position and orientation of each photograph in space
relative to the object space coordinate system.
• Ground control also enables the photogrammetrist to
establish the elements of exterior orientation and provide a
basis for extending control photogrammetrically.
Classification of ground control
Basic control
 The basic control is the basic network of monuments (i.e. trig stations,
town survey marks, height benchmarks, etc.) that form part of the geodetic
control network. In a close range application, it is usually necessary to
establish a local control network.
 These control points will not necessarily be used as control points used to
establish the absolute orientation of the photographs, but they are necessary
to establish the positions of the points used in photo control.
Photo control.
 The photo control points are points whose images can be identified in the
photos, and whose positions are determined from the basic control.
Depending on the scale of the photography, it may be feasible to
incorporate some of the existing basic control as photo control.
Photo-control
 Pre-marked
 Post-marked (natural features). Not suitable for high
accuracy work
 Often the surveying of photo control is only done after the
photography has been acquired and developed.
 Size of marks determined by photo scale.
 Accuracy of photo-control determines accuracy of
exterior orientation

 The control phase of photogrammetry may, in general,
account for as much as 50% of the total cost of the project
Establishment of photo control
• Each control point must definitely contribute to the operation:
• It must lie in the correct position on the photograph in order to accomplish
its purpose
• It must be positively identifiable in the photo and on the ground
• The image of the point must be sharp and well-defined to permit accurate
measurement and must contrast well with the background
• Be symmetrical if possible
• Not be in shadow
• It should, if possible, be easily accessible on the ground
• The point must be properly marked and documented in the field.
 Each stereo model should contain 3 horizontal and 4 vertical control points.
• Redundant control points allow for the detection and isolation of erroneous
control points
• The more ground control and the higher the accuracy of the survey, the
more expensive the project becomes.
• Control must be located so that mapping does not occur beyond the limits of the
control.
