Combined Photogrammetry
PHOTOGRAMMETRY 1
Dr. John Richard Otukei
jrotukei@yahoo.com
GENERAL REMARKS
Introduction to photogrammetry provides students with an overview of the theory and concepts of photogrammetry.
Photogrammetry is an engineering discipline affected by developments in computer science and electronics.
Like many disciplines, photogrammetry is in constant change, mainly attributable to developments in computer science and electronics. This can be seen in the shift from analogue to digital photogrammetry, and in the convergence of photogrammetry, remote sensing and related fields. Traditionally, the main difference between photogrammetry and remote sensing was photogrammetry's emphasis on generating accurate 3D products. This difference is, however, becoming increasingly blurred due to developments in remote sensing such as LiDAR and InSAR.
HOW DID IT START
Zhizhou, 1996
• Measurement
• Interpretation
• Presentation
Quantitative aspects
• Position X,Y and/or Z (height)
• Precision/accuracy
• Reliability
Purpose:
Modelling the image formation process
Camera and sensor calibration
Modelling measurement errors
Improving precision and reliability
Qualitative aspects
Pattern recognition and interpretation
Radiometric classification (e.g., in IR imagery bright
areas represent dense vegetation)
Range to object
Classification
• By camera/lens angle: narrow, normal, wide angle, super-wide angle
• By range to object: aerial/airborne (vertical, low oblique, high oblique, panoramic), terrestrial, close range
• By process/instrument: analogue, analytical, digital
• Metric
• Semi-metric
• Non-metric
• Terrestrial
– Photography from the ground. Camera to object distance more
than 300m.
• Close range
– Photography from the ground. Camera to object distance less
than 300m.
Scale = f / H (principal distance f over flying height H above the terrain)
1. Maps
– Maps (paper)
– Digital Maps
2. Control points-Triangulation
• Control densification
• Adjustment of multiple stereo-models
• GPS reduces need for ground control
• Surface Models
3D Maps
City Maps
Industrial metrology
Accident reconstruction
etc.,
• Orthophotos
• Orthomosaics
• Orthophoto maps
• Stereomates
OPTICS
The incident ray, the reflected ray and the normal are all in the same
plane.
Multiple reflections
The incident ray strikes the
first mirror.
The reflected ray is directed
toward the second mirror.
There is a second reflection
from the second mirror.
Apply the Law of Reflection
and some geometry to
determine information about
the rays.
Retroreflection
Assume the angle between the two mirrors is 90°.
The reflected beam returns to the source parallel to its
original path.
This phenomenon is called retroreflection.
Applications include:
◦ Measuring the distance to the Moon
◦ Traffic signs
Refraction
When a ray of light traveling through a transparent medium encounters
a boundary leading into another transparent medium, part of the energy
is reflected and part enters the second medium.
The ray that enters the second medium changes its direction of
propagation at the boundary.
◦ This bending of the ray is called refraction.
Refraction
The incident ray, the reflected ray, the refracted ray, and the normal all
lie on the same plane.
The angle of refraction depends upon the material and the angle of
incidence.
sin θ₂ / sin θ₁ = v₂ / v₁
◦ v₁ is the speed of the light in the first medium and v₂ is its speed in the second.
Refraction of light
The path of the light
through the refracting
surface is reversible.
◦ For example, a ray
travels from A to B.
◦ If the ray originated
at B, it would follow
the line AB to reach
point A.
Refractive index
The speed of light in any material is less than its speed in vacuum.
The index of refraction, n, of a medium can be defined as n = c / v, where c is the speed of light in vacuum and v is its speed in the medium (so n ≥ 1).
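Snell's law and the refractive index can be sketched numerically. A minimal Python sketch (the function name and the sample indices for air and water are illustrative values, not from the text):

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Angle of refraction from Snell's law: n1*sin(theta1) = n2*sin(theta2).
    Returns None when there is no refracted ray (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# Light entering water (n ~ 1.33) from air (n ~ 1.00) at 45 degrees
# bends toward the normal:
print(refraction_angle(45.0, 1.00, 1.33))
```

Going the other way (water to air) at a steep enough angle, the same function reports total internal reflection.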
Binocular Vision
Vision with two eyes. The depth of objects in the field of view is perceived using
stereoscopy.
5 Photogrammetry - Dr. George Sithole
• Hidden objects
• Shadows
• etc.,
Stereoscopy
Stereoscopic Depth Perception - Formation
[Figure: the left and right eyes (L, R), separated by the eye base, view points A and B at distances DA and DB; the parallactic angle to the nearer point A is larger, therefore DA < DB.]
Stereoscopy: definition
The use of binocular vision to achieve 3-dimensional effects.
• The left eye must see the left image and the right eye the right image
If the distance between the lenses and the table is equal to the focal length
of the lenses then the images appear to come from infinity.
Mirror stereoscope
Relatively expensive.
Zoom stereoscope
Variable magnification: 2.5 - 20×. Very expensive.
Factors affecting stereo vision
• Eye strength needs to be balanced between your two eyes. Wear vision aids when
viewing stereo pairs.
• Eye fatigue from mental and physical condition, poor illumination, uncomfortable
seating and viewing positions, misaligned photos, and low-quality photos.
• Align shadows properly and sequence photos correctly or else you will create a
pseudoscopic view.
• Moving objects between photos will not view in stereo. They'll show up as blurs.
• Rapid changes in topography between photos can bias stereoscopic interpretation.
• Clouds, shadows, and Sun glint can degrade stereoscopic viewing and cause loss of
information.
Parallax
Definition:
The apparent displacement of an object with respect to a frame of reference, caused by a
shift in the position of observation.
Rick Lathrop, Rutgers University
Flight line
Stereoscopy - Parallax
Formation (at the time of photography): the left and right camera stations, separated by the air base B, both image point A with about 60 % overlap; A appears at al on the left photo and at ar on the right photo.
Observation (in the lab): the left eye views al and the right eye views ar; the two eyes, separated by the eye base Be, fuse the images into the stereo model.
Parallax measurement
In the lab, the left and right eyes view the principal points (ppl, ppr) and the image points (al, ar); the parallax of a point is obtained from the measured distances pxl and pxr.
Height from differential parallax:
h = (H × dP) / (D + dP)
Where:
h = object height (required)
H = flying height (can be obtained from the photograph)
dP = differential parallax (see slide 32)
D = avg. photo base length (see slide 33)
1. Determine the average photo base length (P): the distance from the principal point (PP) to the conjugate principal point (CPP), measured on each photo (P1, P2).
Example: if P1 = 4.5 in. and P2 = 4.3 in., then P = (P1 + P2)/2 = 4.4 in.
Height measurement using parallax - example
Measurements for parallax height calculations:
2. Determine the differential parallax (dP): the difference between the distance separating the feature bases (db) and the distance separating the feature tops (dt) while the stereopair is in stereo viewing position.
Example: if db = 2.06 in. and dt = 1.46 in., then dP = db − dt = 0.6 in.
Height measurement by parallax - example
h = (H × dP) / (P + dP)
h = (2,200 ft × 0.6 in.) / (4.4 in. + 0.6 in.)
= (1,320 ft·in.) / (5 in.)
= 264 ft
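The worked example above can be checked with a short sketch (the function name is my own; dP and P simply need to share the same units):

```python
def parallax_height(H, dP, P):
    """Object height from differential parallax:
    h = (H * dP) / (P + dP)
    H  - flying height above the local datum
    dP - differential parallax
    P  - average photo base length (same units as dP)"""
    return (H * dP) / (P + dP)

# Values from the example: H = 2,200 ft, dP = 0.6 in., P = 4.4 in.
h = parallax_height(2200, 0.6, 4.4)
print(h)  # 264.0 ft
```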
Recap
• Name the two types of vision used for depth perception
• How can you perceive depth using the two approaches
above?
• What are the conditions for stereoscopic vision?
• Give the advantages of stereo vision.
• What are stereoscopes? Give any 3 examples of
stereoscopes
• What is the difference between principal point and
conjugate points
• How do you orient a pair of photographs for stereo vision?
• What is parallax?
Co-ordinates by parallax measurement
• We have already seen how parallax measurement can be used for height
measurement
• Let us explore how parallax can also be used for measuring X and Y co-ordinates
(No theodolites!!!)
Parallax equations
Conditions: [stereopair geometry figure omitted]
Useful ratios: the base-height ratio B/H of the photography and the eye base-viewing distance ratio be/h of observation together determine the vertical exaggeration e.
Increasing the air base B (or be) increases the vertical exaggeration.
Stereoscopy – Exaggeration
In the lab, increasing the eye base Be results in magnification (exaggeration) of the perceived stereo model.
Assignment:
1. Develop an equation relating the air base B of the photography, the percentage overlap (PE) between photos and the ground coverage G of the photo on the ground.
2. Develop an equation relating the ground coverage G, the flying height above datum, the focal length l and the photographic size d.
3. Combine the equations in 1 and 2 to establish the formula for the base-height ratio (B/H).
Principle of the floating mark
Stereoscopic measurements are possible if a floating mark is introduced into the viewing system.
Concept:
• Identical half marks (e.g. crosses, small circles) are placed in the field of view of each eye.
• As the stereomodel is viewed, the two half marks are seen against the photographed scene, one by each eye.
• If the half marks are properly adjusted, the brain fuses their images into a single floating mark that appears in 3D relative to the model surface.
• If the half marks are moved closer together, their parallax increases and the fused mark appears to rise.
• If the half marks are moved farther apart, the parallax decreases and the fused mark appears to fall.
• The fused mark can therefore be moved up and down until it rests on the model surface (terrain).
• The position and elevation of the mark can then be determined and plotted on the map using a transfer device.
Error theory
Accuracy:
The degree of conformity to the true value. A value close to the true value has high accuracy. Unfortunately, it is not easy to know the true value, and as a result the accuracy can never be known exactly. Accuracy can only be estimated, for example by checking against an independent, higher-accuracy standard.
Precision:
The degree of refinement of a quantity or measurement. It can be assessed by taking several measurements and checking the consistency of the values. If the values are close to each other the precision is high; the reverse implies low precision.
Error theory
• An error: the difference between a measured value and the true value.
Types of errors:
Mistakes or blunders: gross errors caused by carelessness or negligence, including misidentification of points, misreading a scale and transposing numbers. These errors can generally be avoided by exercising care during measurement.
Systematic errors: errors that follow some mathematical or physical law. If the conditions causing the error are known, measured and properly modelled, a correction can be calculated and applied to the measurement, eliminating the systematic error. These errors remain constant in magnitude and algebraic sign as long as the conditions causing them remain the same. Since the sign remains the same, systematic errors accumulate, and they are often referred to as cumulative errors. Examples in photogrammetry include shrinkage and expansion of photographs, camera lens distortion and atmospheric refraction.
Error theory - types of errors
Random errors: the errors that remain after blunders and systematic errors have been accounted for. They are generally small and do not follow physical laws the way systematic errors do. They can be assessed using the laws of probability. Random errors are equally likely to be positive or negative and hence tend to compensate each other; for this reason they are also referred to as compensating errors. In photogrammetry, sources of random error include estimating between the least graduations of a scale and indexing the scale.
Error theory
• Errors are inevitable in any measurement, and also in quantities computed from measured values.
• Sources of error:
Locating and marking flight lines on photos
Orienting stereopairs for parallax measurement
Parallax and photo coordinate measurement
Shrinkage and expansion of photographs
Unequal flying heights
Tilted photographs
Errors in ground control
Camera lens distortion and atmospheric errors.
Error propagation
Error propagation deals with approaches for estimating the errors in computed quantities based on the errors in the measurements.
Assumption:
Errors in the measured quantities are independent, i.e. the error in one variable does not depend on the errors in the other variables.
Error propagation
Assume for example that we have a quantity F which we want to compute from n independent observations x1, x2, …, xn.
Then mathematically:
F = f(x1, x2, …, xn)
and, by propagation of the independent observational errors, the standard error of F follows from
σF² = (∂F/∂x1)² σx1² + (∂F/∂x2)² σx2² + … + (∂F/∂xn)² σxn²
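A minimal numerical sketch of propagating independent observation errors into a computed quantity, estimating the partial derivatives by central differences (the scale-number example and its standard errors are hypothetical values of my own):

```python
import math

def propagate(f, x, sigmas, h=1e-6):
    """Standard error of F = f(x1..xn) for independent observations:
    sigma_F^2 = sum_i (dF/dx_i)^2 * sigma_i^2,
    with the partial derivatives estimated by central differences."""
    var = 0.0
    for i, s in enumerate(sigmas):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)  # numerical partial dF/dx_i
        var += (dfdx * s) ** 2
    return math.sqrt(var)

# Hypothetical example: photo scale number N = (H - h) / c with
# H = 2000 +/- 5 m, h = 450 +/- 2 m, c = 0.1524 m (treated as error-free).
scale_number = lambda v: (v[0] - v[1]) / 0.1524
print(propagate(scale_number, [2000.0, 450.0], [5.0, 2.0]))
```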
Resection problem in photogrammetry?
The determination of the position and orientation of an image in space from the known ground positions of control points appearing in the image.
Intersection problem in photogrammetry?
The calculation of the object-space coordinates of a point from its coordinates in two or more images.
Photogrammetric solutions
A photogrammetric solution requires knowledge of:
Interior orientation parameters (focal length, principal point location)
Exterior orientation parameters (camera position and 3 rotation angles)
Ground coordinates of points to be mapped.
Interior orientation parameters are known through camera calibration.
Exterior orientation parameters are established through resection (or using inertial positioning systems and GPS).
Photogrammetric solutions
The overall solution to photogrammetric problems involves carrying out:
Inner/interior orientation
Relative orientation
Absolute orientation
Relative and absolute orientation are generally called exterior orientation
These can be accomplished using
Analogue and
Analytical/digital approaches
Why bother with orientations???
Maps Vs. images
Why bother with orientations?
• We want to make maps from images BUT
Images
• Have perspective projection
• Relief displacement
• Scale variation
AND Maps
• Orthogonal projection (2D representation of 3D)
• No scale variation
• No relief displacement
Why bother with orientations
Result:
• A stereo model, which is a 3D representation of the object space w.r.t. an arbitrary local coordinate system.
• If we make at least five conjugate light rays intersect, all the remaining light rays will intersect at the surface of the stereo model.
• The data are registered in an arbitrary coordinate system - no ground coordinates.
Absolute orientation
Purpose: rotate, scale, and shift the stereo model resulting from relative orientation until it
fits at the location of the control points.
• Absolute Orientation is defined by: Three Rotations, One Scale factor, and Three Shifts
• All data is assigned ground co-ordinates
Exterior orientation
Exterior orientation has two components:
• The position of the perspective centre w.r.t. the ground coordinate system (Xo, Yo, Zo).
• The rotational relationship between the image and the ground coordinate systems (ω, φ, κ). These are the rotation angles we need to apply to the ground coordinate system to make it parallel to the image coordinate system.
Position of perspective centre and rotation
angles
LSG223:PHOTOGRAMMETRY 1
Principal distance
Principal point
Fiducial marks
For a point P, the image coordinates (x, y, z) and ground coordinates (X, Y, Z) are related through a scale factor k:
(x, y, z)ᵀ_P = k (X, Y, Z)ᵀ_P
COLLINEARITY
We then compensate (x, y, z) for the interior orientation and (X, Y, Z) for the non-coincidence of the coordinate systems and their non-parallel axes:
(x − x₀, y − y₀, −c)ᵀ = k R (X − X₀, Y − Y₀, Z − Z₀)ᵀ
COLLINEARITY
Since we already know the nature of the rotation matrix, we substitute its value into the equation and expand. The following equations result:
x − x₀ = k [r₁₁(X − X₀) + r₁₂(Y − Y₀) + r₁₃(Z − Z₀)]
y − y₀ = k [r₂₁(X − X₀) + r₂₂(Y − Y₀) + r₂₃(Z − Z₀)]
−c = k [r₃₁(X − X₀) + r₃₂(Y − Y₀) + r₃₃(Z − Z₀)]
• Dividing the first two equations by the third eliminates k and gives the collinearity equations:
x = x₀ − c [r₁₁(X − X₀) + r₁₂(Y − Y₀) + r₁₃(Z − Z₀)] / [r₃₁(X − X₀) + r₃₂(Y − Y₀) + r₃₃(Z − Z₀)]
y = y₀ − c [r₂₁(X − X₀) + r₂₂(Y − Y₀) + r₂₃(Z − Z₀)] / [r₃₁(X − X₀) + r₃₂(Y − Y₀) + r₃₃(Z − Z₀)]
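The collinearity relationship can be sketched numerically. In the minimal sketch below, a vertical photograph (identity rotation) with hypothetical coordinates reproduces the familiar x = c·X/(H − h) result:

```python
def collinearity(Xg, X0, R, c, x0=0.0, y0=0.0):
    """Project a ground point into image space via the collinearity
    equations: the image point, perspective centre and ground point lie
    on one straight line.
    Xg - ground point (X, Y, Z); X0 - perspective centre (X0, Y0, Z0);
    R  - 3x3 rotation matrix; c - principal distance;
    (x0, y0) - principal point offsets."""
    dX = [Xg[i] - X0[i] for i in range(3)]
    # rotate the ground vector into the image system
    u, v, w = (sum(R[i][j] * dX[j] for j in range(3)) for i in range(3))
    return x0 - c * u / w, y0 - c * v / w

# Hypothetical vertical photo at H = 1000 m over a point at 100 m
# elevation, c = 0.152 m, no rotation:
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
x, y = collinearity([500.0, 300.0, 100.0], [0.0, 0.0, 1000.0], I3, 0.152)
print(x, y)
```

With R = I the result reduces to x = c·X/(H − h), y = c·Y/(H − h), i.e. the simple vertical-photo scale relation.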
The figure (a) shows a section through a vertical photograph with the
lens positioned at O. The elevation of the lens is known as the flying
height (FH) (i.e. it is the distance of the perspective centre from the
target) while the ground (flat) lies at elevation h above the datum. Point
O' is the principal point of the photograph. Distance c is the principal
distance.
SCALE OF VERTICAL PHOTOGRAPHY
c is usually given in mm, H and h in metres, hence a conversion factor may be necessary. Generally, scale is a function of c (the principal distance), the flying height and the terrain elevation:
s_h = c / (H − h)
VARIABLE PHOTO SCALE/MEAN SCALE
For a vertical photograph taken over variable terrain there will be an infinite number of different scales; this is one of the principal differences between a map and a photograph. It is therefore convenient to determine an overall scale to use, called the mean/average scale. The average scale is the scale at the average elevation of the terrain.
EXAMPLES
Given the highest, average and lowest elevations as 600 m, 450 m and 300 m respectively, calculate the maximum, minimum and average scales for photography carried out at a flying height of 2000 m above mean sea level using a camera of focal length 152.4 mm.
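The example can be sketched in a few lines using the scale number N (scale = 1:N). The maximum scale occurs over the highest terrain, where H − h is smallest and N is therefore smallest:

```python
def scale_number(f_m, H_m, h_m):
    """Photo scale number N (scale = 1:N) for principal distance f,
    flying height H above datum and terrain elevation h: N = (H - h)/f."""
    return (H_m - h_m) / f_m

f = 0.1524   # 152.4 mm expressed in metres
H = 2000.0   # flying height above mean sea level (m)
print(round(scale_number(f, H, 600.0)))  # highest terrain -> maximum scale
print(round(scale_number(f, H, 450.0)))  # average terrain -> average scale
print(round(scale_number(f, H, 300.0)))  # lowest terrain  -> minimum scale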
OTHER METHODS OF COMPUTING SCALE
Scale = (photo distance)*(map scale)/map distance
CENTRAL PROJECTION
All aerial photographs are based on central projection, while maps are based on orthogonal projection. Central projection is the projection in which the rays emanating from the object pass through a central point, called the perspective centre (o).
ORTHOGONAL PROJECTION
Orthogonal projection is a projection in which the projecting rays intersect the projection plane at a right angle.
PARALLEL PROJECTION
Aerial camera
Note the representation of the front and rear nodal points. The rays of light from the object converge at the front nodal point and pass through the optical axis of the lens before emerging from the rear nodal point.
Camera magazine
• The camera magazine houses the reels (a take-up reel for exposed film and a reel for unexposed film).
• It contains the film-advancing and film-flattening devices. The film-advancing device transports the film over a length corresponding to one exposure format size.
• The flattening device keeps the film perfectly flat on the focal plane at the instant of each exposure. Film flattening is essential in order to reduce distortions in the resulting images.
Camera body
• The camera body essentially houses the camera drive mechanism. This drive mechanism provides the force needed to operate the camera through its repeated cycle, comprising:
Flattening the film
Tripping the shutter at the exposure station
Cocking the shutter, and
Advancing the film
• The energy required for the drive mechanism may be applied manually or may come from an automated electric motor. Usually, handles for carrying the camera are fitted to the camera body, and it is also the camera body to which the electrical connections are made.
Lens assembly
• The camera lens cone assembly comprises several parts, each with its own function:
Lens
Shutter
Diaphragm
Filter
• Lens: collects light rays from the object space (terrain) and brings them to focus on the focal plane.
• Filter: serves the following purposes:
It reduces the effect of atmospheric haze.
It helps to provide uniform light distribution over the entire format.
It protects the lens from damage and dust.
Lens assembly
• The shutter and diaphragm complement each other in their functions. Both regulate the amount of light allowed to pass through the lens: the shutter controls the length of time that light is permitted to pass, while the diaphragm controls the size of the opening, and hence the size of the bundle of light rays admitted.
• The focal plane of the aerial camera is the plane in which all incoming light rays are brought to focus. Compared to the image distance, the object distance is by far the greater, which implies that the bundle of rays reaching the camera lens from the object space effectively comes from infinity. Aerial cameras therefore have their focus fixed for infinite object distances. This condition satisfies Newton's lens equation, indicating that the image distance v must be exactly equal to the lens focal length (f) behind the rear nodal point of the camera; i.e. the focal plane is defined by the upper surface of the focal-plane frame, the surface upon which the film emulsion rests when an exposure is made.
Types of cameras
• Single lens frame camera
• Multi- lens frame camera
• Continuous strip camera and
• Panoramic camera
• Digital Camera
Single frame cameras
• The single lens frame camera is used almost exclusively for obtaining photographs for mapping and photo-interpretation purposes because it provides the highest geometric picture quality.
• The lens of this type of camera is held fixed relative to the focal plane.
• The entire format of one exposure is exposed to light simultaneously with a snap of the shutter.
• The usual format size is 230 × 230 mm, with a film capacity of about 120 m.
• Single lens cameras are made with different focal lengths depending on the manufacturer; nominal focal lengths include 300 mm, 210 mm, 152 mm and 88 mm.
• Note that the focal length of the camera lens determines the area of coverage: the shorter the focal length, the wider the coverage from a given flying height.
Single lens frame cameras
Single lens frame cameras are generally classified according to their angular field of view (f.o.v.):
• Narrow or small-angle, f.o.v. ≈ 30° (f = 300 mm)
• Normal angle, f.o.v. ≈ 60° (f = 210 mm)
• Wide angle, f.o.v. ≈ 90° (f = 152 mm)
• Super- or ultra-wide angle, f.o.v. ≈ 120° (f = 88 mm)
Field of view of a camera
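The geometric field of view can be estimated from the focal length and the format size. The sketch below uses the diagonal of a 230 × 230 mm format; note that the nominal 30°/60°/90°/120° class labels above are conventional round figures, not exact outputs of this formula:

```python
import math

def field_of_view_deg(f_mm, d_mm=230.0 * math.sqrt(2)):
    """Angular field of view subtended by a distance d in the focal plane
    (default: diagonal of a 230 x 230 mm format):
    f.o.v. = 2 * atan(d / (2 * f))."""
    return math.degrees(2.0 * math.atan(d_mm / (2.0 * f_mm)))

for f in (300.0, 210.0, 152.0, 88.0):
    print(f, round(field_of_view_deg(f), 1))
```

For f = 152 mm this gives roughly 94° across the diagonal, close to the nominal 90° wide-angle class.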
Fiducial co-ordinates
These are the locations of the fiducial marks; they provide a 2D reference for the principal point location as well as for the measurement of image positions on the photograph.
Other parameters
• Resolution of camera ( often highest resolution is achieved at the
centre of the photograph as compared with the edges)
• Film flatness ( should not deviate by more than 0.01mm)
• Shutter efficiency
Laboratory methods for camera calibration
These are the most common methods and comprise:
Multicollimator
Goniometer
The above methods are mainly used for analogue cameras.
Multi-collimator method
• Involves a number of collimators (each representing a different target).
• Each collimator consists of a lens and a cross.
• The individual collimators are mounted such that the optical axes of neighbouring collimators intersect at a known angle θ.
• The camera is placed such that its image plane is perpendicular to the optical axis of the central collimator/target.
• Secondly, the lens (front nodal point) of the camera should be where the axes of the collimators intersect. This means the image of the central collimator (g), called the principal point of autocollimation, is near the principal point (PP) and also near the intersection of the fiducial lines.
• Normally the collimators are arranged in more than one plane, the planes being perpendicular to each other.
Camera calibration
• The camera is oriented such that the images of the collimators, when photographed, lie along the diagonals of the camera format.
Determination of the calibrated focal length
• Since the angle of intersection of the collimator axes (θ) is known, we consider the images of the 4 central collimators.
• Measure the distance of each of the 4 images from the principal point (centre collimator image).
• Compute the corresponding focal length for each using f = d / tan θ.
• Take the average of the resulting f values; this gives the calibrated focal length.
• After determining the calibrated focal length, use its value to compute the theoretical distances of the collimator images, i.e. d′ = f_cal · tan θ.
• We can then use this value to compute the radial distortion, i.e. Δr = d_measured − d′.
• If tangential distortion exists, the images will not lie on a straight line, and the offset can be determined.
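The reduction above can be sketched as follows; all the measured distances and the collimator angle in the example call are hypothetical values, not from an actual calibration report:

```python
import math

def calibrate(measured_mm, angles_deg):
    """Multicollimator reduction (sketch): each collimator image lies at a
    known angle theta from the camera axis, so each gives an estimate
    f_i = d_i / tan(theta_i).  The calibrated focal length is the mean of
    the f_i; the radial distortion of each image is then
    d_i - f_cal * tan(theta_i)."""
    fs = [d / math.tan(math.radians(a))
          for d, a in zip(measured_mm, angles_deg)]
    f_cal = sum(fs) / len(fs)
    distortion = [d - f_cal * math.tan(math.radians(a))
                  for d, a in zip(measured_mm, angles_deg)]
    return f_cal, distortion

# Hypothetical measured distances (mm) of the four collimator images
# nearest the principal point, all at theta = 7.5 degrees:
f_cal, dist = calibrate([20.158, 20.150, 20.168, 20.154], [7.5] * 4)
print(f_cal, dist)
```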
Principal point location
• This requires establishing the relationship between the fiducial axes and the PP location. The offset can then be determined.
Sources of errors in measured photo co-ordinates
• Film distortion due to shrinkage, expansion, and lack of flatness
• Failure of fiducial axes to intersect at principal point
• Lens distortions
• Atmospheric refraction correction
• Earth curvature
Expansion, shrinkage and non-flatness problems
• Photographic materials shrink or expand, resulting in errors.
• Lack of film flatness also causes errors.
• Some materials upon which photographic prints are made, such as glass or polyester, have high dimensional stability; materials such as paper have low stability.
• Shrinkage and expansion are a function of temperature, humidity, paper type and thickness, as well as the method used to dry the prints. Hot drum dryers, or hanging the prints up to dry, result in high distortions; prints air-dried lying flat at room temperature show low distortions.
Correction for expansion and shrinkage
• Use the coordinates of the fiducial marks (calibrated values and values measured from the photo) and apply a scale correction along each axis:
x′ = (x_c / x_m) · x,  y′ = (y_c / y_m) · y
where x_c, y_c are the calibrated fiducial distances and x_m, y_m the measured ones.
Decentering
Radial lens distortion
• The ray changes its direction after passing through the rear nodal point.
• It occurs along radial lines from the PP.
• It increases with distance from the PP.
• It is normally accounted for after reducing the measurements to the principal point and correcting for shrinkage and expansion.
Radial lens and decentering distortion
Procedure:
Reduce the coordinates to the principal point: x̄ = x − x₀, ȳ = y − y₀
Compute the radial distance: r = √(x̄² + ȳ²)
Radial lens distortion:
Δr = k₁r + k₂r³ + k₃r⁵ + … + kₙr^(2n−1)
k₁ … kₙ are coefficients that define the shape of the distortion curve and are determined through camera calibration.
Example
• A camera calibration report shows a calibrated focal length of 153.206 mm and coordinates of the calibrated PP of 0.008 mm and −0.001 mm for the x and y axes respectively. The field of view (FOV) is 30 degrees. Compute the corrected image coordinates of a point whose coordinates are x = 62.579 mm, y = −80.916 mm. Assume k₁, k₂, k₃ and k₄ to be 0.2296, −35.89, 1018 and 12100 respectively (in the units given in the calibration report).
Approach
Reduce the co-ordinates to the PP
Compute R
Compute dr using polynomial
Compute errors in x and y
Compute the corrected co-ordinates of a point.
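The approach can be sketched as below. The distortion coefficients in the example call are purely illustrative placeholders of my own; the report's k values must be used with whatever units/scaling that report specifies:

```python
import math

def radial_correction(x, y, pp, ks):
    """Steps from the text: reduce to the principal point, compute r,
    evaluate dr = k1*r + k2*r**3 + k3*r**5 + ..., resolve dr into x and y
    components and subtract them from the reduced coordinates."""
    xb, yb = x - pp[0], y - pp[1]            # reduce to principal point
    r = math.hypot(xb, yb)                    # radial distance
    dr = sum(k * r ** (2 * i + 1) for i, k in enumerate(ks))
    dx, dy = dr * xb / r, dr * yb / r         # components of the distortion
    return xb - dx, yb - dy                   # corrected coordinates

# Point and PP offsets from the example; hypothetical small coefficients:
xc, yc = radial_correction(62.579, -80.916, (0.008, -0.001),
                           (4.0e-5, -2.0e-9, 0.0, 0.0))
print(xc, yc)
```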
Decentering lens distortion
Atmospheric refraction
Image points are always displaced outwardly along the radial direction. The radial displacement is
Δr = K r (1 + r²/c²)
where:
K is the atmospheric refraction coefficient
c is the camera constant (focal length)
The coefficient K varies with the meteorological conditions at the time of exposure and with the wavelength to which the photographic emulsion is sensitive. K can be estimated from standard models in which H and h, measured in km, represent the flying height above mean sea level and the terrain elevation.
Atmospheric refraction correction
• The radial displacement Δr can also be resolved into x and y components:
Δx = Δr · x/r,  Δy = Δr · y/r
Example
• Given an aerial photograph (c = 85 mm) taken at 9100 m above sea level, determine the atmospheric refraction correction at radii of 2, 4 and 8 cm. The Earth's surface is flat and lies at 700 m.
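A sketch of the computation. The K model used here, K = [2410H/(H² − 6H + 250) − 2410h/(h² − 6h + 250)·(h/H)] × 10⁻⁶ with H and h in km, is the one commonly tabulated in photogrammetry texts; treat it as an assumption and check it against your own reference before use:

```python
import math

def refraction_K(H_km, h_km):
    """Atmospheric refraction coefficient from a commonly used model
    (flying height H and terrain elevation h in km)."""
    term = lambda z: 2410.0 * z / (z * z - 6.0 * z + 250.0)
    return (term(H_km) - term(h_km) * (h_km / H_km)) * 1e-6

def refraction_dr(r_mm, c_mm, K):
    """Radial image displacement due to refraction: dr = K*r*(1 + r^2/c^2)."""
    return K * r_mm * (1.0 + (r_mm / c_mm) ** 2)

K = refraction_K(9.1, 0.7)
for r_cm in (2, 4, 8):
    print(r_cm, refraction_dr(r_cm * 10.0, 85.0, K))  # dr in mm
```

The displacements come out on the order of a few micrometres near the centre, growing rapidly toward the format edge.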
Correction due to earth curvature
Flight plan
• Usually drawn on a map or in CAD.
Flight planning parameters
Base length for l % overlap: B = S (1 − l/100)
Distance between strips for q % sidelap: A = S (1 − q/100)
Number of models in a strip: n_m = L/B + 1
Number of strips in a block: n_s = Q/A + 1
Area of stereoscopic model: F_m = (S − B) · S
New area for each model in a block: F_n = A · B
Time between photographs: t = B / v ≥ 2.0 s (v = ground speed in m/s)
(S = ground coverage of one photo, L = strip length, Q = block width)
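The flight planning parameters can be bundled into a small calculator. The example values below, and the choice to round the model/strip quotients up to whole numbers before adding 1, are my own illustrative assumptions:

```python
import math

def flight_plan(S, l_pct, q_pct, L, Q, v_ms):
    """Flight planning quantities:
    S ground coverage of one photo (m), l endlap %, q sidelap %,
    L strip length (m), Q block width (m), v ground speed (m/s)."""
    B = S * (100 - l_pct) / 100        # air base between exposures
    A = S * (100 - q_pct) / 100        # distance between adjacent strips
    n_models = math.ceil(L / B) + 1    # models per strip (rounded up)
    n_strips = math.ceil(Q / A) + 1    # strips per block (rounded up)
    t = B / v_ms                        # time between exposures (keep >= 2 s)
    return B, A, n_models, n_strips, t

# Photo scale 1:10,000 with a 230 mm format -> S = 2300 m;
# 60 % endlap, 30 % sidelap, a 10 km x 5 km block, 70 m/s ground speed:
print(flight_plan(2300.0, 60, 30, 10000.0, 5000.0, 70.0))
```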
Selection of flying height
Flying height is one of the major parameters in flight design and depends on:
the desired scale
the relief and the tilt
the photogrammetric equipment used for acquisition and processing of the data.
• C-factor = flying height / contour interval (usually the contour interval is selected first, followed by the corresponding C-factor; these two values are then used to compute the flying height).
Factors affecting flight planning
Project purpose
Camera
Image/photo scale
Ground coverage
image motion
strip interval
season and time of the day.
Factors affecting flight planning
Purpose
• Only when the purpose is known can the optimum equipment and procedures be selected.
• Metric vs. pictorial qualities:
• Metric photos are required for quantitative photogrammetric measurement.
• Photos of high pictorial quality are good for qualitative analysis, e.g. mosaic formation and interpretation.
• Metric photos are obtained with calibrated cameras with a good B/H ratio that allows larger parallactic angles.
Project Planning
Project purpose: Topographic map compilation
• Most common photogrammetric application
Project Planning
Project purpose: Photomosaics
• Use the longest focal-length lens available and fly as high as feasible while giving the desired photo scale.
• Reasons: (1) limit relief displacement; (2) limit photographic tilt effects; (3) limit variations in scale between photographs.
• As relief and tilt displacement are proportional to the distance from the centre of the photo, the problem of mismatching photos can be reduced by increasing the overlap and sidelap. If the ground is flat, 60 % overlap and 15-30 % sidelap are standard.
Project Planning
Project purpose: Orthoimagery
• Generally same photographic parameters as for map
compilation
Project Planning
Project purpose: Triangulation
• Flight plan governed by topographic mapping
consideration (this is often the final objective)
Project Planning
Project purpose: Cadastral surveys
• As for triangulation. Higher accuracy requirements demand 60 % overlap in both directions for establishing fill-in ground control points.
Project Planning
Choice of scale
• Selection of a reliable photograph scale is of
major importance, because the quality of the final
digital mapping product hinges primarily upon it.
Scale selection can be done on the basis of:
Required planimetric details
Required Vertical and horizontal accuracy
Flying height
Project economy optimization
Factors affecting scale choice
The choice generally depends on the purpose and on restrictions on the flying height. Typical camera options (focal length/format, in cm):
super-wide angle (8.5/23)
wide angle (15/23)
normal angle (30/23)
Factors affecting flight planning
Ground coverage:
Ground coverage can be estimated from the endlap and side
lap
Endlap
Minimum 60%
For aerotriangulation or cadastral purposes, 80% - 90%
Side lap
minimum 20%
commonly (25-30)%
In order to save height control points and to increase the
accuracy and reliability of a block, sidelaps of up to 60% may
be employed.
Factors affecting flight planning
Strip interval:
Depends on the scale
Sidelap
Basic control
The basic control is the basic network of monuments (i.e. trig stations, town survey marks, height benchmarks, etc.) that form part of the geodetic control network. In a close-range application it is usually necessary to establish a local control network. These control points will not necessarily be used as the control points that establish the absolute orientation of the photographs, but they are necessary to establish the positions of the points used in photo control.
• The photo control points are points whose images can be identified in the photos, and whose positions are determined from the basic control. Depending on the scale of the photography, it may be feasible to incorporate some of the existing basic control as photo control.
Photo-control
• Pre-marked
• Post-marked (natural features). Not suitable for high accuracy
work
• Often the surveying of photo control is only done after the
photography has been acquired and developed.
• Size of marks determined by photo scale.
• Accuracy of photo-control determines accuracy of exterior
orientation
• The control phase of photogrammetry, in general, may
account for as much as 50% of the total cost of the project
Establishment of photo control
• Each control point must definitely contribute to the operation:
• It must lie in the correct position on the photograph in order to accomplish its purpose.
• It must be positively identifiable in the photo and on the ground.
• The image of the point must be sharp and well defined to permit accurate measurement, and must contrast well with the background.
• It should be symmetrical if possible.
• It should not be in shadow.
• It should, if possible, be easily accessible on the ground.
• The point must be properly marked and documented in the field.
• Each stereo model should contain 3 horizontal and 4 vertical control points.
Establishment of photo control
• Redundant control points allow for the detection and isolation of
erroneous control points
• The more ground control, and the higher the accuracy of the
survey (the more expensive the project becomes).
• Control must be located so that mapping does not occur beyond
the limits of the control.
Survey mission and flight planning
Introduction
Survey missions are very expensive, requiring:
specialised crew
a specially adapted aircraft
an aerial camera
other equipment (computers, GPS, etc.)
A survey mission begins when the aircraft takes off and
ends when it has landed.
This assumes that the study area has been covered
by a series of overlapping photos
Project planning
Project planning for photogrammetry requires the
following to be undertaken: FND-PEP
• Feasibility study
• Needs and constraints analysis
• Development of a flight plan
• Planning of ground control
• Estimation of project cost
• Planning of processing steps
These processes are interdependent and interrelated; they cannot
be performed in isolation.
Feasibility study
The aim of the feasibility analysis is to determine the
suitability of photogrammetric mensuration
(measurement), documentation and/or photogrammetric
interpretation to the proposed project. Remember that to
be able to measure objects in a photographic image, one
has to be able to recognise and identify the object by
interpreting the image content. There are a number of
instances in which the measurement of objects from
photographic images is of secondary importance to their
identification and interpretation. In such instances it
may be acceptable to procure photography of lower
accuracy, so long as the interpretation aspect is not
compromised in any way.
Feasibility study
• The accuracy required (both from an interpretation and
measurement point of view) and an assessment of whether or not
this will be fulfilled given the circumstances of a particular project.
End lap (%): l = ((S − B)/S) × 100 = (1 − B/S) × 100
Side lap (%): q = ((S − A)/S) × 100 = (1 − A/S) × 100
Ground area of one photograph: Fb = S² = s² × mb²
Base length for l% end lap: B = S × (1 − l/100)
Distance between strips for q% side lap: A = S × (1 − q/100)
Number of models in a strip: nm = L/B + 1
Flight planning parameters
Number of photographs in a strip: nb = nm + 1
Number of strips in a block: ns = Q/A + 1
Area of a stereoscopic model: Fm = (S − B) × S
New area for each model in a block: Fn = A × B
Time between photographs: Δt = B/vm, with Δt ≥ 2.0 s
(S = ground side length of one photograph, s = image format side,
mb = scale number, B = air base, A = distance between strips,
L = strip length, Q = block width, vm = ground speed in m/s)
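A minimal sketch collecting the flight planning formulas into one calculator; the numeric inputs (23 cm film format, 1:10 000 photo scale, 60% end lap, 25% side lap, a 10 km by 5 km block, 60 m/s ground speed) are hypothetical:

```python
# Flight planning parameter calculator; a sketch of the formulas above.
def flight_plan(s, mb, l, q, L, Q, v):
    S = s * mb                       # ground side length of one photo (m)
    B = S * (1 - l / 100)            # air base for l% end lap (m)
    A = S * (1 - q / 100)            # strip spacing for q% side lap (m)
    return {
        "S": S,
        "Fb": S ** 2,                # ground area of one photograph (m^2)
        "B": B,
        "A": A,
        "nm": int(L / B) + 1,        # models in a strip
        "nb": int(L / B) + 2,        # photographs in a strip (nm + 1)
        "ns": int(Q / A) + 1,        # strips in the block
        "Fm": (S - B) * S,           # stereo-model area (m^2)
        "Fn": A * B,                 # new area per model in a block (m^2)
        "dt": B / v,                 # time between exposures (s), keep >= 2.0
    }

# Hypothetical mission: 23 cm format, 1:10 000 scale, 60/25 % laps,
# a 10 km x 5 km block, 60 m/s ground speed.
plan = flight_plan(s=0.23, mb=10_000, l=60, q=25, L=10_000, Q=5_000, v=60)
```

Here nm rounds down before adding one, mirroring the formula; in practice one or two extra photographs are usually added at each end of a strip to guarantee stereo cover.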
Selection of flying height
Flying height is one of the major parameters in
flight design and depends on:
the desired scale
the relief and the tilt
the photogrammetric equipment used for acquisition
and processing of the data (some aircraft have a
maximum flying height).
• C-factor = flying height / contour interval
(usually, the contour interval is selected first; the
C-factor of the plotting instrument then gives the
corresponding flying height).
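The two constraints can be checked side by side; a small sketch, in which the C-factor of 1500 and the other numeric values are assumed for illustration:

```python
# Selecting a flying height: compare the C-factor limit with the
# height implied by the desired photo scale (H = f * mb).
def height_from_c_factor(c_factor, contour_interval):
    """Maximum flying height above ground allowed by the plotter's C-factor."""
    return c_factor * contour_interval

def height_from_scale(f, mb):
    """Flying height above ground for focal length f (m) and scale 1:mb."""
    return f * mb

H_contour = height_from_c_factor(c_factor=1500, contour_interval=2.0)  # 3000.0 m
H_scale = height_from_scale(f=0.153, mb=10_000)                        # 1530.0 m
# The scale requirement fixes H; the C-factor limit must not be exceeded.
ok = H_scale <= H_contour
```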
Factors affecting flight planning
• Reasons for generous overlaps: (1) limit relief displacement; (2) limit
photographic tilt effects; (3) limit variations in scale between photographs
• As relief and tilt displacement are proportional to the distance from the
centre of the photo, the problem of mismatching photos can be reduced by
increasing the overlap and sidelap. If the ground is flat, 60% end lap and
15-30% side lap are standard
• Flight line orientation is often normal to the general trend of the topography
(for analogue orthophoto production)
• If the orthoimages will form a mosaic, they should be taken with a constant
sun angle and at the same time of the year (to minimise radiometric (tone,
texture) differences)
Project Planning
Project purpose: Triangulation
• Flight plan governed by topographic mapping considerations
(this is often the final objective)
• To enhance accuracy, 60% overlap in both directions is often used.
An internal tie point will then appear on 9 photographs, each
photograph contributing a pair of collinearity equations for the point.
Strip interval:
Depends on the scale
Sidelap
Generally, aim for the minimum number of strips, which should be aligned in a
north-south or east-west direction.
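The figure of 9 photographs can be reproduced with a rough count: with p% overlap, a well-placed point can appear on up to ceil(1/(1 − p/100)) photographs along each direction. A sketch (simplified; it ignores edge effects at the block border):

```python
import math

def photos_seeing_point(endlap_pct, sidelap_pct):
    """Rough upper bound on the photographs showing one tie point."""
    along = math.ceil(1 / (1 - endlap_pct / 100))    # photos within a strip
    across = math.ceil(1 / (1 - sidelap_pct / 100))  # strips covering the point
    return along * across

print(photos_seeing_point(60, 60))  # 9: a 3 x 3 neighbourhood of photographs
print(photos_seeing_point(60, 25))  # 6: standard mapping overlaps
```

Each of those photographs contributes a pair of collinearity equations for the tie point, which is what strengthens the block adjustment.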
Factors affecting flight planning
Image motion:
Image motion degrades image resolution by blurring an
object's image on the film.
If image motion is not compensated for, ground objects are imaged
as streaks on the photograph, elongated in the direction of flight.
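A common rule of thumb (not from the text, and the numeric values are assumed): the blur on the film is roughly the ground distance flown during the exposure divided by the scale number. A quick check:

```python
# Image motion during exposure, expressed at photo scale.
def image_motion_mm(v_ms, exposure_s, scale_number):
    """Blur on the film (mm) for ground speed v_ms (m/s) and scale 1:scale_number."""
    ground_distance_m = v_ms * exposure_s           # distance flown while shutter open
    return ground_distance_m / scale_number * 1000  # metres on film -> millimetres

blur = image_motion_mm(v_ms=60, exposure_s=1 / 500, scale_number=10_000)
# 0.012 mm here; if the blur is too large, forward-motion compensation (FMC)
# or a shorter exposure is needed.
```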
Weather conditions:
An ideal day has no clouds; less than 10% cloud cover is good
enough.
Over 10% cloud cover is still workable if the clouds are above the planned
flight height, BUT the shadows they cast may affect the quality of the photos.
Industrialised areas with dust, smoke, etc. should be photographed after
rain, which helps to clear the atmosphere.
Windy days should be avoided, since wind causes image motion as well as
difficulty in keeping the camera vertical.
Factors affecting flight planning
Season of the year:
Avoid the snow season, since snow covers the objects
of interest.
In countries with cold winters, spring is best, before
the trees are in full leaf.
The sun's angle should be considered: a low sun
angle creates long shadows which obscure
details.
Ideal conditions for photography
• For topographical applications, the best flying time is
before deciduous trees sprout.
• Aim for seasons when haze is at a minimum
• The best time of day is towards midday, to avoid long
shadows
• Tilts of ± 5° in ω, ± 3° in φ and ± 15° in κ are acceptable.
• A variation of ±2% in the flying height is acceptable
• The track of the aircraft can, with visual navigation
and good navigation information, be held within ±
1cm in the photograph
Ground control points
• The objective of ground control is to determine the ground
position of points that can be located in aerial photographs.
The ground position of a point can be defined by its
horizontal position with respect to a horizontal datum or by
its vertical position with respect to a vertical datum, or both.
In photogrammetry – especially aerial photogrammetry, it is
common to use different points to provide horizontal and
vertical control for a project.
• Ground control is necessary in order to establish the
position and orientation of each photograph in space
relative to the object space coordinate system.
• Ground control also enables the photogrammetrist to
establish the elements of exterior orientation and provide a
basis for extending control photogrammetrically.
Classification of ground control
Basic control
The basic control is the basic network of monuments (i.e. trig stations,
town survey marks, height benchmarks, etc.) that form part of the geodetic
control network. In a close range application, it is usually necessary to
establish a local control network.
These control points will not necessarily be used to establish the
absolute orientation of the photographs, but they are necessary to
establish the positions of the photo control points.
Photo control.
The photo control points are points whose images can be identified in the
photos, and whose positions are determined from the basic control.
Depending on the scale of the photography, it may be feasible to
incorporate some of the existing basic control as photo control.
Photo-control
Pre-marked
Post-marked (natural features). Not suitable for high
accuracy work
Often the surveying of photo control is only done after the
photography has been acquired and developed.
Size of marks determined by photo scale.
Accuracy of photo-control determines accuracy of
exterior orientation
Establishment of photo control
• Hence, each control point must genuinely contribute to the operation:
• It must lie in the correct position on the photograph in order to accomplish
its purpose
• It must be positively identifiable in the photo and on the ground
• The image of the point must be sharp and well-defined to permit accurate
measurement and must contrast well with the background
• Be symmetrical if possible
• Not be in shadow
• It should, if possible, be easily accessible on the ground
• The point must be properly marked and documented in the field.
Each stereo model should contain 3 horizontal and 4 vertical control points.
• Redundant control points allow for the detection and isolation of erroneous
control points
• The more ground control used, and the higher the accuracy of the
survey, the more expensive the project becomes.
• Control must be located so that mapping does not occur beyond the limits of the
control.