Unmanned Aerial Systems Overlap and Elevation
by
Jesse Fraser
A PROJECT
CALGARY, ALBERTA
APRIL, 2016
Abstract
Unmanned aerial vehicle (UAV) technology has rapidly become more accessible and easier to
use in the last 5 years. However, there is a dearth of research into the use of high resolution
data that UAVs offer. Archeology is one field that would benefit from further research around
UAV applications, as archeologists could benefit from the hyperspectral and hypertemporal
data that UAVs can provide. As part of the CHAR project, I plan on using UAVs to survey a portion of the Lower East Channel of the
Mackenzie Delta. The purpose of this research will be twofold: 1) To illustrate the relationship
between spatial resolution and UAV flight elevation; and 2) to explore optimal spatial resolution
and accuracy required for various archeological studies. To reach research goal 1, I will fly the UAV
at a range of elevations and compare the resulting spatial resolutions. Research goal 2 will be
completed by creating digital elevation models of the UAV survey flights and offering them to
the archeologists for feedback about which data sets are most useful for their needs. The goal
of this project is to create a better understanding of how UAVs can help archeologists with their
research and how UAV flight elevation interacts with spatial resolution.
Table of Contents
1. Introduction ......................................................................................................................... - 1 -
2. Background .......................................................................................................................... - 3 -
2.1 Unmanned Aerial Systems (UAS) ....................................................................................... - 3 -
2.1.1 Names and Meanings.................................................................................................. - 3 -
2.1.2 History ......................................................................................................................... - 4 -
2.1.3 Current Uses ............................................................................................................... - 5 -
2.1.4 Regulations.................................................................................................................. - 7 -
2.2 Principles of Data Creation .......................................................................................... - 8 -
2.2.1 Camera ........................................................................................................................ - 8 -
2.2.2 Photogrammetry ....................................................................................................... - 11 -
2.2.3 Onboard Computers ................................................................................................. - 18 -
2.3 The Data ........................................................................................................................... - 18 -
2.3.1 Digital Elevation Models ........................................................................................... - 18 -
2.3.2 Orthorectified Photomosaic ..................................................................................... - 20 -
2.3.3 Resolution ................................................................................................................. - 22 -
2.3.4 Accuracy .................................................................................................................... - 23 -
2.3.5 Precision .................................................................................................................... - 26 -
3. Methods ............................................................................................................................. - 27 -
3.1 Hardware ......................................................................................................................... - 27 -
3.2 Data Collection ................................................................................................................. - 31 -
3.3 Data Processing ................................................................................................................ - 35 -
3.4 Data Analysis .................................................................................................................... - 37 -
3.4.1 Accuracy .................................................................................................................... - 37 -
3.4.2 Precision .................................................................................................................... - 38 -
3.4.3 Other Explanatory Variables ..................................................................................... - 39 -
4. Results ................................................................................................................................ - 40 -
4.1 Collected Data .................................................................................................................. - 40 -
4.2 Accuracy ........................................................................................................................... - 43 -
4.2.1 Overlap ...................................................................................................................... - 43 -
4.2.2 Altitude ..................................................................................................................... - 44 -
4.2.3 Categorized Values.................................................................................................... - 45 -
4.2.4 Other Variables ......................................................................................................... - 48 -
4.3 Precision ........................................................................................................................... - 51 -
4.3.1 Base Values ............................................................................................................... - 51 -
4.3.2 Categorized Values.................................................................................................... - 56 -
5. Discussion........................................................................................................................... - 59 -
5.1 Other Variables ................................................................................................................ - 59 -
5.2 Precision and Accuracy .................................................................................................... - 60 -
5.2.1 Comparison to Other Research................................................................................. - 60 -
5.2.2 Theory and Practice .................................................................................................. - 63 -
6. Conclusion .......................................................................................................................... - 67 -
Bibliography ............................................................................................................................... - 70 -
Appendix A: GPS Data ............................................................................................ - 75 -
List of Tables
Table 1: Number of referenced UAS from 2005-2013 Page 15
Table 2: The distance between nadir points of photos necessary to Page 34
achieve certain overlaps at a given altitude
Table 3: The necessary input numbers to calculate GSD for the 20mm lens Page 41
Table 4: The pertinent values for each flight's GSD including altitude and Page 41
the difference in altitude for each flight
Table 5: The export values for the DEM and Ortho for each flight Page 43
Table 8: The accuracy values for all CPs at different distances from the centre Page 50
Table 9: Precision of each flight, in both CM and pixels, grouped into altitude Page 53
List of Figures
Figure 1: Focal Length in f Page 10
Figure 2: The x-axis refers to the direction of travel. Page 13
Figure 3: A diagram of the concept of epipolar lines and points Page 14
Figure 4: A schematic of the values in topographic relief. X is the sensor Page 15
location, Y is the nadir point of the photograph
Figure 5: The B/H ratio and overlap Page 24
Figure 6: Lens distortion for GF1 Page 28
Figure 7: The 2 LiPo batteries, above is the longer lasting of the 2 Page 29
Figure 8: Power Distribution Board with 6 Engine Control Boards attached Page 30
Figure 9: The MikroKopter GCS in flight view Page 30
Figure 10: The location of Kuukpaak, marked by the neon green triangle Page 31
Figure 11: The Calgary flying location of the UAS, as marked by the neon green Page 32
triangle
Figure 12: The full coverage of the flights in the Calgary location Page 33
Figure 13: The look and shape of a good quality image GCP or CP Page 35
Figure 14: The Ratio of GSD that should have occurred based on GSD equations Page 42
and the actual GSD as calculated by PhotoScan.
Figure 15: The RMSE compared to the overlap, where there does not seem to be a Page 44
pattern
Figure 16: The RMSE compared to altitude, which proves to not have much of a Page 45
pattern either
Figure 17: The planimetric RMSE grouped into overlap and compared to altitude Page 47
Figure 18: The vertical RMSE grouped into overlap and compared to altitude Page 47
Figure 19: The planimetric RMSE grouped into altitude and compared to overlap Page 48
Figure 20: The Vertical RMSE grouped into altitude and compared to overlap Page 48
Figure 21: A scatterplot for RMSE values sees a slight uptick at the over 100m values, Page 49
but overall no real increase over distance
Figure 22: An example of a poor quality target, and how the square 1 by 1 shape is Page 50
almost unrecognizable
Figure 23: An example of a medium quality target. The shape is recognizable, but Page 50
the edges are a little blurry
Figure 24: An example of a good target. Where the edge of the GCP/CP stops is Page 51
very obvious, and blur is limited.
Figure 25: Altitude (X axis) and Pixel error (Y axis) do not see much of a Page 53
pattern emerge
Figure 26: Altitude (X axis) compared to CM precision (Y axis) sees error increase Page 53
as altitude increases
Figure 27: This graph demonstrates a fairly strong positive relationship between Page 54
overlap (X axis) and pixel error (Y axis)
Figure 28: Comparing overlap (X axis) and CM precision (Y axis) not much of a Page 54
pattern emerges
Figure 29: GCP pixel error (X axis) compared to overall pixel error (Y axis) does Page 55
not demonstrate a pattern
Figure 30: No real pattern emerges when comparing GCP pixel error (X axis) to Page 55
overall CM error (Y axis)
Figure 31: Pixel error (Y axis) compared to overlap (X axis) at 50 metres Page 56
demonstrates a positive relationship that sees a sharp increase in error as
overlap increases
Figure 32: Pixel error (Y axis) compared to overlap (X axis) at 75 metres shows Page 57
a positive relationship between error and overlap
Figure 33: Pixel error (Y axis) compared to overlap (X axis) at 100 metres shows Page 57
a positive relationship between the axis values
Figure 34: As overlap (X axis) increases CM error (Y axis) increases in the 50 Page 58
metre category
Figure 35: Increasing overlap (X axis) sees sharply increasing CM (Y axis) error Page 58
at 75 metres
Figure 36: Increasing overlap (X axis) sees increasing CM (Y axis) error at 100 Page 58
metres, but a much smaller relationship than the other 2 altitude
categories
Figure 37: Number of photos covering each location of flight 141001_00. Page 64
As the flight proceeds up the hill there are far fewer photos to cover each
location until there is not enough coverage for PhotoScan to create a model.
Figure 38: Number of photos covering each location of flight 141001_03. As the Page 64
flight proceeds up the hill there are far fewer photos to cover each location
List of Equations
Equation 1: Ground Sampling Distance Page 2
Equation 2: Relief Displacement Page 15
Equation 3: Parallax Page 16
Equation 4: Z Error Page 24
Equation 5: XY error Page 24
Equation 6: CCD Pixel Element Size Page 34
Equation 7: Instantaneous Field of View (IFOV) Page 34
Equation 8: Pixel Size Page 34
Equation 9: Y Axis Size Page 34
Equation 10: X Axis Size Page 34
Equation 11: Root Mean Square Error (RMSE) Page 38
Equation 12: Interquartile Range (IQR) Page 40
Equation 13: Outliers Page 40
1. Introduction
Digital elevation models (DEMs) and orthorectified mosaics (orthomosaics) are widely used
tools. With the recent advent of relatively easy to use and inexpensive unmanned aerial
systems (UASs) and digital cameras, the ability to create DEMs and orthorectified mosaics has
spread to a much wider group of users than prior to these technologies. New users gravitate to
these technologies because they can create data with a high temporal and spatial resolution at a
relatively low cost. UASs offer these very real advantages. However, a UAS flight covers much less area than a
traditional flight. Overlap from photo to photo and altitude are 2 variables that affect the
possible coverage during a flight and these have an impact on the resolution. Thus users must
consider what resolution is necessary for a particular purpose in combination with how much
area they want to cover, and then they must tailor the overlap and flight altitude to these
needs. Standard operating procedure for traditional aerial photography calls for an overlap of
60 percent between photos and 40 percent between flight lines (Concepts of Aerial
Photography, n.d.). These numbers are very tidy but do not transfer well to UASs, as UASs are
more unstable and therefore need greater overlap to avoid gaps in the data. Furthermore,
UASs have larger variability in the elevation to ground ratio, which can also cause issues with
overlap. Some users claim that the amount of overlap from flight to flight is a matter of
experience and use, and others claim the need for 75 percent overlap (Chao & Chen, 2012;
Step 1. Before Starting a Project, n.d.). None of these sources offers a definitive answer or
guideline. The other variable, maximum pixel size or ground sampling distance (GSD), is much
easier to calculate, and thus there is a more definitive answer for this variable, eq.1 (Mikhail,
Bethel, & McGlone, 2001). However, this equation is for an individual image rather than a
mosaic or DEM, and again it leaves some confusion for users hoping to get a certain pixel size
with their final product. Neither of the above situations addresses the internal1 or external2
accuracy of the orthophoto mosaic or DEM, which in most cases will be more important than
the GSD. This lack of detail leaves users who want to maximize their coverage, or who require a
certain accuracy from the data, without solid answers on how to meet their needs. Clearly
more testing on the impact of overlap and altitude on the internal and external accuracy of UAS
products is needed.
Eq. 1: GSD = (CCD element size × altitude above ground level) / focal length
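To make Eq. 1 and the overlap question concrete, the short sketch below computes GSD and the along-track photo spacing needed for a chosen overlap. The sensor values (0.00415 mm pixel element, 20 mm lens, 13 mm sensor side) come from the hardware described in section 3.1; the 80 percent overlap and the assumption that the 13 mm sensor side lies along the direction of travel are illustrative only.

```python
# Minimal sketch of Eq. 1 (GSD) and of the photo spacing needed for a given
# forward overlap. Sensor values follow the GF1 and 20 mm lens described in
# section 3.1; the 80% overlap and sensor orientation are assumptions.

PIXEL_ELEMENT_MM = 0.00415   # CCD pixel element size (mm)
FOCAL_LENGTH_MM = 20.0       # lens focal length (mm)
SENSOR_SIDE_MM = 13.0        # sensor side assumed to lie along the flight line (mm)

def gsd_m(altitude_agl_m):
    """Eq. 1: GSD = (CCD element size * altitude above ground level) / focal length."""
    return PIXEL_ELEMENT_MM * altitude_agl_m / FOCAL_LENGTH_MM

def photo_spacing_m(altitude_agl_m, overlap_fraction):
    """Distance between nadir points that yields the requested forward overlap."""
    footprint_m = SENSOR_SIDE_MM * altitude_agl_m / FOCAL_LENGTH_MM
    return footprint_m * (1.0 - overlap_fraction)

for altitude in (50, 75, 100):
    print(f"{altitude} m AGL: GSD = {gsd_m(altitude) * 100:.2f} cm, "
          f"spacing for 80% overlap = {photo_spacing_m(altitude, 0.80):.1f} m")
```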
Using overlap and altitude as variables, this paper will explore the impact that these 2 variables
have on the pixel size of an orthophoto mosaic and DEM as well as the internal and external
accuracies of the produced DEMs and orthomosaics. The paper will examine images taken at 2
sites with a hexakopter UAS and a Panasonic Lumix GX1 equipped with a 20mm lens at varying
overlap and altitude, and then processed with Agisoft PhotoScan software. The pixel size will be
calculated using Eq. 1 and then compared to the pixel size reported by PhotoScan. The external
accuracy will be calculated by comparing the output locations of check points (CPs) and their
corresponding coordinates as measured by a GPS unit. Finally the internal accuracy of the
outputs will be provided in the project reports exported from PhotoScan. These results will
offer information about the impact of altitude and overlap on UAS DEMs and orthomosaics and
thus give users a good idea of what sort of overlap and altitude they will have to use to create the data products they require.
1 How accurately the model reflects the distances between 2 objects versus their distances in reality.
2 The accuracy of the model in placing objects in relationship to their location according to the objects' real world coordinates (UTM, Lat/Long etc.).
2. Background
2.1 Unmanned Aerial Systems (UAS)
2.1.1 Names and Meanings
The politics of names are contentious issues, and nowhere is this idea more obvious than with
UASs. "Drone" is undoubtedly the most commonly used and understood, as well as the most
contentious, term referring to UASs (Colomina & Molina, 2014). It is also a term that many associate with
the dreadful military UAS, such as the aptly named Predator drone (Trinder, n.d.). The second
most popular name is unmanned aerial vehicle (UAV), which is slowly gaining recognition as the
little brother term to drone and the term that tends to be favoured by academics3. The term
UAV refers to an unmanned object that has an onboard power source (cordless), is either fixed-
wing or rotary, and is meant to be used multiple times, a vital part of the definition as it
removes missiles and bombs from the category (Chao, Chao, & Chen, 2010; Hardin & Jensen,
2011). Other terms include remotely piloted vehicle (RPV), which must have a ground control
station (GCS) that can control the vehicle, remotely operated aircraft (ROA), unmanned vehicle
system (UVS), and remotely piloted aircraft (Watts, Ambrosia & Hinkley, 2012; Bendea et al,
2007). All of these terms serve their purposes, but this paper uses the term UAS, and there are
several reasons for this choice. First and foremost, it is the term of choice for both the Federal
Aviation Administration (FAA) in the United States, and the Civilian Aviation Authority (CAA) in
the United Kingdom. While Transport Canada uses the term UAV, and the International Civil
Aviation Organization uses the term remotely piloted aircraft system (RPAS), the United States is
the main driver behind most technology and its terminology will most likely become the
standard (Remotely Piloted Aircraft Systems Symposium, n.d.; Flying an unmanned aircraft
recreationally, n.d.). Furthermore, the term UAS is more inclusive as it includes both the
3 Just look at the titles in the bibliography.
unmanned aircraft (UA), the GCS, and the data link between the 2 systems. Therefore this name
includes all the components of a successful UAS flight (Colomina & Molina, 2014). In general,
though, any of these terms is appropriate and all of them refer to more or less the same thing.
2.1.2 History
When comparing UASs to digital cameras or computers or GPS, people tend to perceive UASs as
cutting edge and high-tech, but ironically UASs were the first of these technologies to appear.
Finding its origins in the American Civil War, the UAS has a long history (Watts, Ambrosia &
Hinkley, 2012). In the Civil War, UASs were simply hot air balloons with cameras attached to
them and were used for taking pictures of the battle field or scouting the enemy troops (Watts,
Ambrosia & Hinkley, 2012). This concept of the UAS bears no resemblance to the modern UAS
except for their use in military operations. The modern history of UAS diverges from the military
to the non-military. The first remote sensing UAS flight was undertaken in the late 1970s by
archeologists using gas powered remote control (RC) airplanes with cameras attached to take
images of an archeological site (Eisenbeiss, 2004). Unfortunately, the inconsistent altitude and
the vibrations from the motor meant that the data was unusable (Eisenbeiss, 2004). These
flights mark the beginning of UASs as tools for remote sensing as people now know them. Not
long after this academic attempt, the Mini-Sniffer UAS program was started by NASA (Watts,
Ambrosia & Hinkley, 2012). This program was intended for atmospheric testing at high altitudes,
but once again the data was unusable due to technological shortcomings (Watts, Ambrosia &
Hinkley, 2012). By the 90s NASA had successfully started using UASs in their Environmental
Research Aircraft and Sensor Technology (ERAST) program for collecting data. This program also
started the miniaturization of the sensors to allow for them to be attached successfully to UASs
(Watts, Ambrosia & Hinkley, 2012). On the other side of the divide between military and
civilian, around the same time the US army was being supplied with 30 centimetre resolution
imagery by the still ubiquitous Predator drone4 (Rango et al, 2009). As the 90s continued and
spilled into the 21st century, more and more researchers began to see the usefulness of remotely
sensed data of such high resolution. Many started to create their own UAS, combining RC
planes and digital cameras, but most of these systems had not yet achieved the ability to fly
based on a preprogrammed waypoint system. This issue, alongside other obstacles, limited how
many researchers were able to use a UAS (Watts, Ambrosia & Hinkley, 2012). The use and the diversity of UASs have greatly
expanded in the research community in the last 8 years, and will continue to do so until a
company perfects and standardizes the UAS into an easily useable product, table 1 (Colomina &
Molina, 2014). The history of the UAS is a relatively young one, and as such the technology is
rapidly changing.
Table 1: Number of referenced UAS from 2005-2013 (Colomina & Molina, 2014)
2.1.3 Current Uses
Remote sensing with airplanes and satellites is the established norm, but that does not mean it should be. UASs offer several advantages that traditional aerial photography
from an airplane or a satellite does not. With a UAS it is possible to survey a site whenever a user
would like whether it is once a month or every 4 hours. Thus the user is in control of the
temporal resolution of the data (d'Oleire-Oltmanns et al, 2012; Harwin & Lucieer, 2012).
Furthermore, the data can be of a much higher spatial resolution than possible with airplanes
4 The use of drone instead of UAS in the context of the military is completely intentional. It allows people to claim that there is a difference between safe civilian UAS and dangerous drones used in the military.
and satellites (Whitehead, 2013). Also, UASs are able to go where people doing a handheld GPS
survey are not able to go, such as landslide areas, gravel pits or any other highly unstable areas
or areas where contact with the ground will do damage to the site, such as Antarctic Moss beds
(Niethammer et al, 2012; Rango et al, 2009; Lucieer, Turner, King, & Robinson, 2014; Turner et
al., 2014). Finally, the end users can either process the data themselves or hand the data off to
a third party vendor, thus giving the users the ultimate decision on how their data is handled,
whereas due to the cost of airplane or satellite surveys, the data from those technologies is
always processed and collected by a third party vendor (Harwin & Lucieer, 2012). Thus there
are many applications for UASs. They have been used on glaciers to model their change over
time, mitigating the amount of time a person needs to travel on the glacier (Whitehead, 2013).
They have been used at archeological sites all over the world for preliminary DEMs, alternative
views of the site, either nadir or off-angle, or most impressively for mapping the actual dig site
(Verhoeven, 2011; Chiabrando et al, 2011; Bendea et al, 2007). Another application is
vegetation analysis, which benefits from the high temporal resolution as well as from the
possibility of attaching lidar, thermal or hyperspectral sensors to help with the measurement of
plant and tree health (Zarco-Tejada, Gonzalez-Dugo & Berni, 2012; Wallace, Lucieer & Watson,
2012; Wallace et al, 2012). Finally, in fields that previously did ground testing and plotting, UAS
technology helps remove some of the naturally occurring spatial autocorrelation limitations by
covering a larger area with sufficiently high quality imagery to remove the need for in-the-field
measurements (McGwire et al., 2013). As the technology around computer classification, flight
time, and sensors improves, and as more people become aware of and trust the technology, the
number and diversity of UAS applications will continue to grow.
2.1.4 Regulations
The major issue currently facing UASs is strict regulation. Regulation in this case is actually a set
of rules patched together, because governments generally do not have a comprehensive and
independent plan to deal with UASs, and also because the regulation is informed by a fear of new
technology. The United States Federal Aviation Administration (FAA) breaks UASs into either
civil or public aircraft, where civil aircraft are owned by non-government operators, and public
aircraft are owned by federal or state governments or by qualifying universities. Both of these
types must obtain a certificate of waiver or authorization (COA) that permits the user group to
fly a UAS (Unmanned Aircraft Systems (UAS) Frequently Asked Questions, n.d.). Civil operators
may also obtain a special airworthiness certification (SAC), but to do so the user must fully
describe the system, including how it was constructed and its QA/QC procedures. As well the
operator must have a fully valid pilot certificate (Civil Operations (Non-Governmental), n.d.;
Airworthiness Certification of Unmanned Aircraft Systems and Optionally Piloted Aircraft, n.d.).
However, the FAA is working on a set of rules for small UASs under 55 pounds, the category that
most research UASs fall under, which in the future should make it much easier to obtain a
license to use a UAS (Unmanned Aircraft Systems (UAS) Frequently Asked Questions, n.d.). The
drive for better regulation does not mean that there will be any relaxation of requirements as
the public is still very cautious about UASs, and the FAA insists it will enforce a high safety
standard (Unmanned Aircraft Systems (UAS) Frequently Asked Questions, n.d.). Thus one can
expect regulation in the United States to remain cautious for the foreseeable future.
Flying in Canada is a simpler but still time-consuming proposition. If the UAS is under 25
kilograms, is kept within visual line-of-sight and is not an autonomous UAS, it can be operated if
the user informs Transport Canada of the flight location and time (EXEMPTION FROM SECTIONS
602.41 AND 603.66 OF THE CANADIAN AVIATION REGULATIONS, n.d.). To use an autonomous
UAS, a special flight operations certificate (SFOC) is necessary. The SFOC will have a series of
conditions, including maximum altitude, distance from people, time of use, and location of use.
Also, an SFOC can take a long time to get because Transport Canada has become overwhelmed
with the increase in requests for SFOCs. Luckily, as a user gains experience within the Transport
Canada framework, SFOCs will become more flexible with greater geographic areas, quicker
application approval, and longer periods of validity (Flying an unmanned aircraft for work or
research, n.d.). The Canadian system makes it possible for users to gain more freedom as they
gain the trust of Transport Canada officials, and thus the Canadian system is much more flexible
than the American one.
Both of these sets of regulations are of interest because they determine how the user can use a
UAS. There are many other sets of regulations. For example, the EU has its own set of
regulations, as do almost all countries in the world (Remotely Piloted Aircraft Systems (RPAS),
n.d.; France - UAV Law Expert, n.d.). However, these are outside the scope of this project. All of
these regulations tell the user how and where they can fly. These regulations also limit the area
that a UAS can cover due to the need to keep the UAS in line of sight. The small coverage area
due to altitude limitations and line-of-sight limitations makes this paper's attempt to find the
most efficient way to get the data all the more important.
2.2 Principles of Data Creation
2.2.1 Camera
A camera can be defined as a lightproof device with an aperture and a shuttered lens that allows light to be projected
onto a recording surface or transferred as electronic signals (Full Definition of Camera, n.d.). This
definition creates 4 unique parts of the camera: the body, the recording surface, the aperture,
and the lens. The body of a camera is fairly simple as it simply keeps the light out of the inner
chamber. Inside this inner chamber one finds the recording surface, which used to be film.
However, the advent of the digital camera has meant that the sensor is now a device that traps
electrical impulses. Semiconductors convert the light energy into electrical energy. Each pixel in
an image corresponds to a cell that creates the electrical energy, and thus an image is created
(Graham, & Koh 2002). These sensors come either as a complementary metal oxide
semiconductor (CMOS) or a charge-coupled device (CCD). The CMOS sensor is used because
it offers simple construction and low cost, but the CCD sensor produces better results
(Sheppard, 2008). Therefore the CCD option is much more common, and it can be broken down
even further into either a linear or matrix sensor. The linear sensor is a single line of pixels in
each frame that are then combined to create an image. This type of sensor offers very high
resolution imagery but does not have particularly good geometry and is available only in very
heavy payloads (Egles, & Kasser, 2001). Thus this type of camera is not used in combination with UASs.
The 3rd and 4th parts of the camera, the shuttered lens and the aperture hole, are found in the lens.
In the case of cameras with interchangeable lenses, the aperture and shutter can be changed by
simply changing the lens. A lens is a clear curved piece of glass that makes images clearer or
bigger (Full Definition of Lens, n.d.). In the case of camera lenses there are multiple lenses in
each camera lens (Allen & Triantaphillidou, 2012). Camera lenses all have a certain focal length,
be it a range, in the case of zoom lenses, or fixed, in the case of fixed length lenses. The focal
length is the distance from the front of the lens to the point where all light rays converge at the
point of focus, and focal length correlates roughly to the corner to corner length of an image,
figure 1 (Allen & Triantaphillidou, 2012). The shutter part of the lens is simply a blind on a hinge
that opens and closes when the user takes a picture. The shutter speed, the time that the
shutter remains open, controls the amount of light that the CCD sensors are exposed to. Finally,
the aperture is the variable hole that light passes through to enter the camera body (Full
Definition of Aperture, n.d.). Aperture is written as a fraction of the focal length,
such as f/5.6. The f stands for the focal length of the camera at the given moment. Aperture is
written in this manner to remind the user that it is a ratio, and that as the focal length changes the
physical size of the opening represented by the same f-number changes with it.
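A minimal numeric sketch of that ratio: the physical diameter of the opening is the focal length divided by the f-number, so the same f-number means a different opening size at a different focal length (the 50 mm comparison value below is purely illustrative).

```python
# Hedged sketch: an f-number is a ratio, so the physical opening it implies
# changes with focal length. The 50 mm comparison value is illustrative.
def aperture_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

print(aperture_diameter_mm(20, 1.7))   # ~11.8 mm opening on the 20 mm lens at f/1.7
print(aperture_diameter_mm(50, 1.7))   # ~29.4 mm opening needed for f/1.7 at 50 mm
```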
All of these different parts make up a camera, but a camera can come in several different
packages. Most traditional cameras are single lens reflex (SLR). This type of camera has a
removable lens that also houses the shutter and the aperture, as well as a mirror that allows the
viewer to see the field of view through a view hood (Allen & Triantaphillidou, 2012). Another
type of traditional camera is the compact. It does not have an interchangeable lens, but is, as
the name suggests, much smaller than an SLR. Furthermore it does not allow the viewer to see
the image, but instead has either a digital display, in the case of digital compact cameras, or a
viewfinder that roughly approximates the field of view (Allen & Triantaphillidou, 2012). More
recent are mirrorless interchangeable lens cameras (MILC). These cameras have interchangeable lenses, but do not have mirror based
optical viewfinders, and instead have digital displays (Haala, Cramer & Rothermel, 2013). Thus
they are lighter than SLRs but have more flexibility than compacts. All 3 of these cameras are
used on UASs, but in general SLRs and MILCs are preferred because of their specific lenses that can be selected to suit a given survey.
2.2.2 Photogrammetry
A big part of this research is the image measurement technique called photogrammetry. By
using images capturing certain objects or locations photogrammetry derives quantitative data
about the objects or locations in question. This technique should not be confused with photo
interpretation, which derives qualitative information about objects or
locations through human intervention (Linder, 2009). Also, photogrammetry should not be
confused with remote sensing, which is closely related to photo interpretation but uses both
human and computer input. Remote sensing also applies these techniques to both photographic
and non-photographic imagery (Mikhail, Bethel, & McGlone, 2001). Photogrammetry has a long
history, but the first examples of photogrammetry as a system with camera and procedure are
found in France in 1849 when it was invented by Aime Laussedat, a colonel in the French
military (Mikhail, Bethel, & McGlone, 2001). For the next 150 years or so, the field
was confined to experts as the equipment for photogrammetry was expensive and the training
necessary to carry out the procedures was time consuming. However, since the mid-90s and the
move away from analytic plotters towards scanners, computers, digital photography and
computer matching software, the field has opened to more users to the point where non-
professionals are able to use photogrammetry as a tool (Linder, 2009). Thus we are at a point in
history where it is possible for any users to pick up a basic understanding of photogrammetry
and be able to create their own digital elevation models and orthophoto mosaics.
There are 3 vital components in the basic process of going from an image coordinate, also
known as pixel value, to a ground coordinate: Interior Orientation Parameters (IOPs), Exterior
Orientation Parameters (EOPs), and Ground Control Points (GCPs). The IOPs, also called the
sensor model, are the camera's internal geometric values, including the focal length of the lens5,
the principal point6, and the lens distortions, both radial7 and decentring8 distortions (Mitishita
et al., 2010). The EOPs, also called the platform model, refer to the position and orientation of a
given image. The position refers to the x, y and z coordinates of the image in the ground
coordinate system, while the orientation refers to the roll, pitch, and yaw9 of the platform, figure
2 (Mikhail, Bethel, & McGlone, 2001). Finally the GCPs are ground coordinates of known points
in the imagery. The number of GCPs necessary depends on how accurate the EOPs and IOPs
are, and GCPs may not be necessary if the EOPs and IOPs that are produced during the flight are
accurate enough (Mikhail, Bethel, & McGlone, 2001). However, in most UAS flights the EOPs are
produced by an onboard GPS/IMU10 that is not very
powerful, which in turn causes serious inaccuracy (Turner, Lucieer & Watson, 2012). Thus it is
necessary and helpful to have GCPs alongside the inaccurate EOPs and IOPs, as all 3 of these values are
used together to move from image coordinates to ground coordinates.
5 Known as principal distance or PD.
6 The centre of the image.
7 The distortion occurring as distance from the centre of the image increases.
8 The distortion that occurs in a non-concentric manner.
9 The rotation of the platform along the x, y and z axes respectively.
10 See sections 3.1 and 3.3 for more description of the unit and its output.
Figure 2: The x-axis refers to the direction of travel. (The quadcopter: Control the orientation, n.d.)
Once the position and orientations are obtained, the epipolar lines between image pairs must
be discovered. An epipolar line is the line at which 2 overlapping images intersect along a plane
(Mikhail, Bethel, & McGlone, 2001). This line is vital because it allows a computer to search
along the epipolar line for the epipolar point, the matching point in the image along a given
plane, figure 3 (Linder, 2009). When combined with the EOPs, IOPs, and GCPs, this concept
gives a general outline of how it is possible to go from pixel coordinates to ground coordinates.
Figure 3: A diagram of the concept of epipolar lines and points (Linder, 2009)
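To give a flavour of the pixel-to-ground step in the simplest possible case, the sketch below assumes a perfectly nadir image (zero roll, pitch, and yaw) and known flat terrain; real photogrammetric software solves the full collinearity equations with all of the EOPs and IOPs, so this is only an idealized illustration and every name in it is hypothetical.

```python
# Hedged sketch: pixel coordinate to ground coordinate for an idealized nadir
# image (roll = pitch = yaw = 0) over terrain of known elevation. Real
# photogrammetric software solves the full collinearity equations instead.

FOCAL_MM = 20.0                  # IOP: focal length
PIXEL_MM = 0.00415               # IOP: CCD pixel element size
PRINCIPAL = (2000.0, 1500.0)     # IOP: principal point (pixels), image centre

def pixel_to_ground(col, row, cam_x, cam_y, cam_z, ground_z):
    """Scale the image offset from the principal point by (flying height - ground) / f."""
    metres_per_mm = (cam_z - ground_z) / FOCAL_MM     # ground metres per image millimetre
    dx_mm = (col - PRINCIPAL[0]) * PIXEL_MM
    dy_mm = (row - PRINCIPAL[1]) * PIXEL_MM
    # Image rows increase downward, so the ground Y offset takes the opposite sign.
    return cam_x + dx_mm * metres_per_mm, cam_y - dy_mm * metres_per_mm

# EOPs: camera at (0, 0, 100 m); a pixel 500 columns right of the image centre:
print(pixel_to_ground(2500, 1500, cam_x=0.0, cam_y=0.0, cam_z=100.0, ground_z=0.0))
```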
How one extracts 3 dimensional data from a flat object is a somewhat more confounding
problem at first glance. However, with some relatively simple mathematical equations it makes
perfect sense. First, the concept of displacement is important. There are 2 types of
displacement: one caused by height and one caused by tilt, both of which are important. The
tilt displacement allows the user or computer to calculate the orientation EOPs, if they are not
available (Mikhail, Bethel, & McGlone, 2001). The second displacement, height difference, is
known as topographic relief and depends on an object's distance
from the centre of the photograph and the height of the object, eq. 2 and figure 4 (Morgan, &
Falkner, 2010; Mikhail, Bethel, & McGlone, 2001). Another way to understand topographic
relief is to place a can on the ground, and look straight down at it. You cannot see either side of
the can. However, placing it slightly off of straight down, one can see the side of the can to a
certain extent. This example roughly demonstrates what happens between an object that is at
the exact nadir of an image and an object that is off nadir in an image.
Figure 4: A schematic of the values in topographic relief. X is the sensor location, Y is the
nadir point of the photograph. (3D Models, n.d.)
The second important concept is parallax. It exists in human vision and is what allows our vision
to be stereo. It describes the idea that things closer to the viewer appear to move more quickly
than things farther away (Mikhail, Bethel, & McGlone, 2001). This idea is best explained in UAS
photogrammetry by looking at a tree top in 2 images and observing how it moves a greater distance
from image to image than a shrub beside the tree, because the shrub is further from the camera
than the tree top. This example demonstrates how parallax can be used to calculate the
difference in height between objects, eq. 3 (Mikhail, Bethel, & McGlone, 2001).
Software-based matching applies these measurements to a far larger number of objects, and is more accurate due to more robust mathematical methods
(Morgan, & Falkner, 2010). An understanding of these techniques is vitally important, although
they do not give the whole picture because they are used to find a single location.
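Since eq. 2 and eq. 3 are not reproduced on this page, the sketch below uses the common textbook forms of the relief displacement and height-from-parallax relationships, which may differ slightly from the exact formulations used in this report; all input numbers are invented for illustration.

```python
# Hedged sketch of relief displacement and height-from-parallax, using the
# usual textbook forms; eq. 2 and eq. 3 in this report may be stated slightly
# differently. Input values are invented.

def relief_displacement(radial_distance, object_height, flying_height):
    """Apparent displacement of an object's top away from the nadir point."""
    return radial_distance * object_height / flying_height

def height_from_parallax(flying_height, base_parallax, parallax_difference):
    """Object height above the reference surface from the parallax difference."""
    return flying_height * parallax_difference / (base_parallax + parallax_difference)

# A 10 m tree whose top sits 40 m (in ground units) from the nadir, flown at 100 m:
print(relief_displacement(40.0, 10.0, 100.0))      # 4.0 m of apparent "lean"
# Parallax of 20.0 mm at the ground and 2.0 mm more at the tree top:
print(height_from_parallax(100.0, 20.0, 2.0))      # ~9.1 m estimated tree height
```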
With these single derived locations, it is possible to create a DEM and orthophoto mosaic. To do
so, bundle block adjustment (BBA), the most predominant technique used in aerial
photogrammetry, is applied. A bundle is the
accumulation of all the imagery lines from the image to a single point, the camera, without a
position or orientation in the real world (Mikhail, Bethel, & McGlone, 2001). Thus BBA is the act
of combining these bundles and then giving them location and orientation. These values are
found by using the pixel values from the previous equations and the known ground coordinates
to create a transformation that will locate the images in space using x, y, z, roll, pitch, yaw, and
scale (Turner et al., 2012). However, the unknown values that one is trying to discover, usually
ground coordinates at a given location, must not outweigh the known values, such as IOPs,
EOPs, and GCPs, otherwise a model is unsolvable. The greater the number of known values
compared to unknown values, the greater the redundancy in a model. This redundancy allows for error checking and a more reliable solution
(Mikhail, Bethel, & McGlone, 2001). The need for extra known values when there are
insufficient known values in a location is mitigated through the addition of tie points (TP) that
give the models extra known pixel coordinates. Most commonly these points are added by
users in the photogrammetric software (Turner et al., 2012). With more known points than
unknown points, the model can be solved.
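To make the bookkeeping behind redundancy concrete, the sketch below counts unknowns and observations for a small hypothetical block; the counts used (6 EOPs per image, 3 ground coordinates per tie point, 2 observation equations per image measurement, GCP coordinates held fixed) follow the usual bundle adjustment formulation and are an assumption rather than a description of any particular package.

```python
# Hedged sketch: counting unknowns versus observations in a small bundle block.
# Assumes the usual formulation: 6 EOPs per image, 3 coordinates per tie point,
# 2 observation equations per image measurement, and fixed GCP coordinates.

def redundancy(n_images, n_tie_points, n_gcps, avg_images_per_point):
    unknowns = 6 * n_images + 3 * n_tie_points                 # EOPs + tie point coords
    observations = 2 * avg_images_per_point * (n_tie_points + n_gcps)
    return observations - unknowns

# 20 photos, 200 tie points, 6 GCPs, each point measured in roughly 4 photos:
print(redundancy(20, 200, 6, 4))   # positive value -> solvable, with room for error checking
```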
One main issue with the traditional photogrammetric method is that it requires low pitch and
roll values, as too great a change in scale caused by roll or pitch makes matching 2 or more
points very difficult (Mikhail, Bethel, & McGlone, 2001). This issue is serious for UAS users
because UAS flights tend to have some very high roll and pitch values, as well as big differences
in illumination from image to image that can make matching pixels difficult (Turner et al., 2012).
However, modern improvements in the field of computer vision (CV), getting computers to see
how humans see, have allowed a move towards a hybrid system of point to point matching, as
well as object extraction and texture analysis (Mikhail, Bethel, & McGlone, 2001). The most
advanced of these systems allows the mitigation of issues caused by illumination and scale by
extracting features that are immune to changes in scale. Of all the different algorithms that do
this extraction, the most common and the original is Scale Invariant Feature Transformation
(SIFT), which has proven to be very useful in dealing with the issues that UASs have, as outlined
above (Harwin & Lucieer, 2012). With these extracted objects instead of tie points, it is possible
to continue on with a BBA that aligns the photos, and in combination with parallax and
topographic relief makes it possible to create a point cloud that allows the creation of a DEM and orthophoto mosaic.
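As a small illustration of the scale-invariant matching idea, the sketch below runs OpenCV's SIFT detector on 2 overlapping photos and keeps only confident matches with Lowe's ratio test. PhotoScan's own matcher is proprietary, so this is an analogous example only, and the file names are placeholders.

```python
# Hedged sketch: SIFT keypoints matched between 2 overlapping UAS photos using
# OpenCV. Illustrative only; PhotoScan's internal matching is a black box.
import cv2

img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)
kp2, desc2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher().knnMatch(desc1, desc2, k=2)
# Lowe's ratio test: keep a match only if it is clearly better than the runner-up.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} candidate tie points between the two photos")
```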
2.2.3 Onboard Computers
Within the scope of this paper, 4 different elements collect and inform a UAS's location. The
first of these systems is the navigation board, which is the main brain of the UAS and is often
called the autopilot. The autopilot houses the commands and flight plans which in turn send
signals to the other parts of the UAS (Prats et al., 2013). Most vitally the navigation board sends
commands to the camera and to the other 2 computer systems, the GPS unit, and the power
distribution system. The GPS unit, or GPS/IMU, tracks the X, Y, and Z location, as well as the roll,
pitch, and yaw of the UAS at a given moment. In traditional aerial photography these units are
very accurate and allow for direct georeferencing, but because of the need for low weight
components UAS GPS/IMU are fairly inaccurate, and can only be used as secondary references
(Laliberte, Winters & Rango, 2011). The autopilot then takes the location and sends it to the
power distribution board. The power distribution board is a circuit board that regulates the
amount of power to the electrical components, including rotors, propellers, ailerons, and
elevators. The power distribution board thus allows the UAS to fly without excess electrical
interference and magnetic faults in the case of an aeroplane style UAS. The final computer
piece on any UAS is the modem. This unit allows communication between the user on the
ground and the onboard navigation system, which allows the user to see the vital statistics of
the UAS, and in some cases to control where the UAS goes mid-flight, without the remote
control (Erdos & Watkins, 2008). These onboard components combine to allow a UAS to fly
aerial photographic missions, and without any one of these components it would be much more
difficult to do so.
2.3 The Data
2.3.1 Digital Elevation Models
A digital elevation model stores elevation data as either random data points or as gridded data point patterns. The random data point technique
generally only has points in areas where there is a large change in elevation (Mikhail, Bethel, &
McGlone, 2001). In general this type of digital elevation model is used when data is collected at
discrete points as in the case when a GPS system is used to collect XYZ points at a certain
location, but it can also be used with continuous data to reduce the size of the data, such as
when a triangular irregular network (TIN) is created (Egles, & Kasser, 2001). More relevant to
this research, the other type is the gridded elevation point pattern, which sets out points at
regular intervals without any consideration for elevation change (Li, Zhu & Gold, 2010). This
technique usually uses continuous data, such as that created by lidar or, more relevantly to this
paper, photogrammetry (Mikhail, Bethel, & McGlone, 2001). Creation of a DEM uses either manual or automated placement of
points. With manual creation an individual places points on a grid using 2 overlapping images
and stereo vision. It is time consuming but more accurate because users are able to use their
understanding of the landscape to avoid mistakes in creating a surface that includes elevation
points that are trees or other non-surface objects (Mikhail, Bethel, & McGlone, 2001).
Automated point creation uses either computer vision and feature matching or pixel
coordinates and rotation values to place points on a grid without any human intervention, as
explained in section 2.2.2 (Mikhail, Bethel, & McGlone, 2001). It is more efficient but suffers
because current computer algorithms struggle to decipher where a tree or building is sitting
above the ground surface, and thus the DEM does not reflect ground elevation but instead the
elevation of the objects that appear in the imagery (Li, Zhu & Gold, 2010). Ultimately what
system and technique one uses to create a DEM depends on a multitude of factors including
data sources available, necessary accuracy, and time restraints. For maximum accuracy, using a
GPS base and rover station and processing the data manually would allow a user to get very high
accuracy data, but the process would be very time consuming. Traditional aerial surveying with
automated DEM creation would be much more efficient but less accurate, and filled with
objects such as trees and buildings, as well as more general errors (Li, Zhu & Gold, 2010). Clearly
where UAS data fits into this scheme is an important topic, as it allows users to make even more
informed choices.
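A minimal sketch of the gridded approach described above: scattered elevation points (stand-ins for a photogrammetric point cloud or GPS survey) are interpolated onto a regular grid with SciPy. This is not the procedure any particular photogrammetric package uses; the synthetic surface and the 1 m cell size are invented.

```python
# Hedged sketch: interpolating scattered XYZ points onto a regular grid to form
# a simple gridded DEM. The synthetic points and cell size are invented; real
# packages use their own, more sophisticated procedures.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(500, 2))                 # scattered point locations (m)
z = 10 + 0.05 * xy[:, 0] + rng.normal(0, 0.1, 500)      # gently sloping surface + noise

gx, gy = np.meshgrid(np.arange(0, 100, 1.0), np.arange(0, 100, 1.0))   # 1 m grid
dem = griddata(xy, z, (gx, gy), method="linear")        # NaN outside the data's hull
print(dem.shape, np.nanmin(dem), np.nanmax(dem))
```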
2.3.2 Orthorectified Photomosaic
An orthorectified photomosaic consists of 2
pieces. First and most simply is a photomosaic, which is the piecing together of multiple images
into a single large composite image (Saint-Amour, 2013). Orthorectification is the removal of the
perspective aspect from an image caused by relief displacement (Mikhail, Bethel, & McGlone,
2001). To create one of these, a user must first produce a DEM from the imagery available, and
then use this DEM data to remove the perspective from the mosaic (Li, Zhu & Gold, 2010).
Many users may ask why it is necessary to create an orthorectified mosaic versus simply
creating a mosaic. Whereas the purpose of a DEM is fairly obvious because it creates elevation
data, the purpose of orthorectification is less
obvious. Due to relief displacement and each image's central projection, the scale at a given
location changes depending on the real world elevation of that location and its distance from
the nadir (Mikhail, Bethel, & McGlone, 2001). To be able to use imagery to measure distances, a
user must remove the geometric distortions in the mosaic to get a planimetric and consistently
scaled map (Fundamentals of orthorectifying a raster dataset, n.d.). There are 2 basic ways that
rectification can be done, either with backward or with forward projection. Forward projection
involves taking the source image's pixels and draping these pixels onto the created DEM, then
projecting them onto the output grid. Because of the relief displacement in the
original image, the pixels are irregularly spaced, making interpolation necessary to create the
normally spaced grid necessary for an image (Mikhail, Bethel, & McGlone, 2001). Backward
projection involves taking the pixel's xyz location and comparing it to the xyz location of the
DEM to get the pixel's colour values to create the orthoimage. In this case there are 2 issues.
The first is that the 2 xyz values do not necessarily match and thus the final orthoimage must be
resampled. The second issue occurs when the DEM has done an excellent job of obtaining the
ground elevation and therefore the buildings and other tall objects appear without any
elevation to them. In turn this lack of elevation does not allow for the removal of topographic
relief as the buildings have no relief in them according to the incorrect DEM (Mikhail, Bethel, &
McGlone, 2001). This issue often occurs in situations where the user does not want certain
objects, usually trees or other vegetation, in the DEM, but does want a properly orthorectified
mosaic. Another issue that can arise with rectified imagery, no matter the technique, is the
deformation of pixels, which go from square to trapezoid. This occurs as pixels get further from
the nadir of an image, and is caused by the changing of the scale of the pixels in the
orthorectification process. This issue is often caused by high variation in roll and pitch, which is
especially prevalent in UASs, and generally has larger variation than traditional photogrammetry
finds suitable (Mikhail, Bethel, & McGlone, 2001). The final issue with orthorectification is
occlusion, which is when an area lacks imagery because that location is either blocked by taller
objects or overlap misses an area. Both of the orthorectification methods have issues, and it is
impossible to perfectly orthorectify something because all orthorectified images will have either
occlusion, locations with multiple elevations, or pixels with deformation. However, these issues
can be mitigated to a certain extent by increasing overlap and the focal length of the camera,
both of which reduce relief displacement in the part of the image that needs to be used
(Mikhail, Bethel, & McGlone, 2001). Understanding the process of creating an orthorectified
mosaic is vital in understanding the accuracy and precision of this type of model.
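The backward projection idea can be sketched in a few lines: for every output cell, look up the DEM elevation, project that ground position into the source image with an idealized nadir-camera model, and sample the colour. Real orthorectification also handles tilt, lens distortion, and occlusion, none of which appear in this assumed, simplified version.

```python
# Hedged sketch of backward-projection orthorectification for an idealized
# nadir image: no roll/pitch, no lens distortion, no occlusion handling,
# nearest-neighbour resampling. Purely illustrative.
import numpy as np

def orthorectify(image, dem, cam_xyz, focal_mm, pixel_mm, origin_xy, cell_m):
    """image: source photo array; dem: elevation grid (m); origin_xy: ground coords of dem[0, 0]."""
    rows, cols = dem.shape
    ortho = np.zeros((rows, cols), dtype=image.dtype)
    principal = (image.shape[1] / 2.0, image.shape[0] / 2.0)
    for r in range(rows):
        for c in range(cols):
            gx = origin_xy[0] + c * cell_m
            gy = origin_xy[1] - r * cell_m
            mm_per_m = focal_mm / (cam_xyz[2] - dem[r, c])    # image mm per ground metre
            col = int(principal[0] + (gx - cam_xyz[0]) * mm_per_m / pixel_mm)
            row = int(principal[1] - (gy - cam_xyz[1]) * mm_per_m / pixel_mm)
            if 0 <= row < image.shape[0] and 0 <= col < image.shape[1]:
                ortho[r, c] = image[row, col]                 # sample the source colour
    return ortho
```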
2.3.3 Resolution
Resolution is the measurement of the smallest unit available. Many things have resolution.
Temporal resolution refers to the frequency of measurement through time, and is a vital part of
why researchers are intrigued by UASs. The ability to be able to repeatedly measure one area as
often as possible offers the opportunity for greater understanding of changes throughout the
year, and changes throughout many years, as well as better models of these changes (Laliberte,
Goforth, Steele & Rango, 2011). However, temporal resolution must be used with an
understanding of GSD, model accuracy, and real world change. Otherwise the user will end up
modelling error as if it were changes in the real world (Niethammer et al., 2012). Another
important type of resolution is spatial resolution. Many people will equate this to pixel size, but
spatial resolution refers to the size of discernible objects. 2 or 3 pixels is the generally accepted
consensus on when an object becomes discernible (CCD Spatial Resolution, n.d.). However,
things such as image blur caused by motion alongside insufficiently quick shutter speeds can
cause the spatial resolution to be much worse than 3 pixels (Egles, & Kasser, 2001). In fact,
when viewing a picture that is out of focus, one is seeing an image with an unresolvable spatial
resolution. Spatial resolution is important because it determines the smallest objects that can be discerned in
a given image. The final and most commonly used type of resolution is pixel resolution, which is
the real world size that a pixel represents in an image. This type of resolution is interchangeable
with GSD (Morgan & Falkner, 2001). This measurement is a function of the focal length, CCD
element size, and altitude of the imagery above ground level, Eq. 1. Thus, as one flies any given UAS
higher, the GSD will grow larger and the resolution will become coarser, which means that resolution can and will change
over a single flight if the ground has a slope and the UAS has a single altitude above ground level
(AGL) set at the start of the flight. This effect can be seen in a single image if there is slope in
the image.
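The effect is easy to quantify with Eq. 1: the sketch below assumes a flight planned at 100 m above the takeoff point over ground that rises 30 m, with the GF1 and 20 mm lens values from section 3.1; the terrain profile itself is invented.

```python
# Hedged sketch: GSD changes across a sloped site when the flying height is
# fixed relative to the takeoff point. GF1/20 mm values; the terrain rise is invented.
PIXEL_MM, FOCAL_MM, FLIGHT_ALT_M = 0.00415, 20.0, 100.0

for ground_rise_m in (0, 10, 20, 30):
    agl = FLIGHT_ALT_M - ground_rise_m              # altitude above the local ground
    gsd_cm = PIXEL_MM * agl / FOCAL_MM * 100        # Eq. 1, converted to centimetres
    print(f"ground {ground_rise_m:2d} m above takeoff: GSD = {gsd_cm:.2f} cm")
```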
Several papers on UAS flights have reported GSD, and findings are fairly consistent, and even
when creating orthomosaics the GSD remains stable and in line with expected values based on
equation 1. For example, researchers flying over rangeland at an altitude of 215 metres found a
GSD of 6 centimetres on a mosaic (Rango et al., 2009). Other researchers flying over a landslide
at similar altitudes (~200m) found a resolution of about 6 cm (Niethammer et al., 2012). The
research seems to bear out the fact that the GSD, or resolution, is fairly stable in mosaics created
from UAS imagery.
2.3.4 Accuracy
Some people use accuracy and precision interchangeably, and some people use accuracy when
they mean precision. In this paper, accuracy refers to how closely locations in a model match their real
world equivalent coordinates. This means that in practical terms accuracy informs the user of
how far apart the exact location of something in the real world will be from the same thing in
the model. The accuracy of a model when doing a UAS survey is limited by and a function of the
accuracy of the GPS unit that is used to create the GCPs and the logfile (Turner, Lucieer &
Watson, 2012). The accuracy of the GPS unit will usually be about a third to a quarter
of the overall model accuracy (Turner, Lucieer, & Watson, 2012). Therefore
a more accurate GPS unit will produce higher model accuracies. Furthermore the number and location of GCPs will have an impact on the accuracy
of a model, meaning that it is vital to place enough GCPs (Harwin & Lucieer, 2012). The final
influence on the accuracy of a model, specifically a DEM, is the base to height ratio (B/H). The
base, in this context, is the distance between the principal points of 2 overlapping images and
the height is the altitude, figure 5. As the B/H increases the accuracy also increases because the
B/H is a key component of the altitude error equation, eq. 4, and in turn the planimetric error
has the altitude error in its equation, eq. 5, all of which is derived from Thales' theorem (Egles, &
Kasser, 2001). As the B/H ratio reaches between 0.8 and 1.0, the error stops decreasing
(Hasegawa et al., 2000). Thus, in theory at any given altitude as long as a location has stereo
coverage, the smaller the overlap and therefore the higher the B/H ratio, the more accurate a
data set should be.11 On the more confusing side of things, the base is a function, to a certain
extent, of the length of the lens that is being used. A shorter lens increases the base width but
also increases the GSD. Therefore, in theory, the GSD, or pixel size, will get coarser but the accuracy
should in fact increase. The confusing and roundabout nature of photogrammetric accuracy
quite clearly demonstrates the need for real world examples of the interaction between overlap,
altitude, and accuracy.
Figure 5: The B/H ratio and overlap (Base height ratio, n.d.)
Eq. 4: e_altitude = (H/B) * r_o * e_match, where e_match = matching error, H = height, B = base, and r_o = ground pixel size
(Egles, & Kasser, 2001)
Eq. 5: e_planimetric = tan(i) * e_altitude, where i = the angle of the image pair, and e_altitude = the altitude (z) error
(Egles, & Kasser, 2001)
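Plugging representative numbers into eq. 4 and eq. 5 shows how strongly the B/H ratio drives the theoretical error; the 1-pixel matching error, 2 cm ground pixel, 30 m base, and 10-degree image-pair angle below are assumed values, not measurements from this project.

```python
# Hedged sketch: theoretical altitude and planimetric error from eq. 4 and
# eq. 5. All input values are assumed for illustration.
import math

def altitude_error(height_m, base_m, ground_pixel_m, matching_error_px):
    return (height_m / base_m) * ground_pixel_m * matching_error_px    # eq. 4

def planimetric_error(pair_angle_deg, alt_error_m):
    return math.tan(math.radians(pair_angle_deg)) * alt_error_m        # eq. 5

e_z = altitude_error(height_m=100.0, base_m=30.0, ground_pixel_m=0.02, matching_error_px=1.0)
print(f"altitude error ~{e_z:.3f} m, planimetric error ~{planimetric_error(10.0, e_z):.3f} m")
# Doubling the base (wider spacing, less overlap) halves the theoretical altitude error.
```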
Several different researchers have measured the accuracy of their orthomosaics and DEMs
collected with UASs. One set of researchers using a UAS to survey archeological sites in Italy
found a root mean square error (RMSE) of 9 centimetres when flying at 100 metres with a GPS
unit accurate to between 2 and 4 centimetres; they also found an RMSE of 10 centimetres when
flying at 60 metres with a GPS unit accurate to between 1.5 and 3 centimetres (Chiabrando, Nex,
11 It should also be noted that the B/H ratio has an impact upon the precision of a model as well.
Piatti, & Rinaudo, 2011). Both of these values fit roughly into the 1/3 to 1/4 rule of GPS
accuracy. Another study on GCPs' effect on the accuracy of a DEM found that flying at between
30 and 50 metres at between 70 and 95 percent overlap with 21 targets around 10 metres apart
and a GPS unit accurate between 1 and 2 centimetres produced an RMSE of between 1.5 and
4.9 centimetres. In contrast, when the researchers used only 6 GCPs under the same conditions
they recorded an RMSE of between 10 and 17 centimetres (Harwin & Lucieer, 2012). Thus one
can see the large impact that having too few GCPs can have on a model and why it is vital to put
out enough GCPs to cover the area of interest in order to get very high accuracy data. A third
research paper evaluating DEM point cloud generation found RMSE of between 7 and 8
centimetres when flying at 70 metres with GPS accurate to about 3 centimetres, overlap of
between 75 and 90 percent, and side lap of 55 to 70 percent. However, quite contradictory to
the concept of the B/H ratio, this same research achieved the highest accuracy at 90 percent
overlap with a 6 centimetre RMSE versus 9 centimetres when data was processed with 80
percent overlap (Rosnell & Honkavaara, 2012). This research found that greater overlap
achieved greater accuracy, a finding that goes directly against the B/H ratio
concept. This finding points to the fact that, while theoretical ways to achieve maximum
accuracies are very useful, they exist as a legacy from traditional aerial photography where the
roll and pitch values are much more constant than with UAS photography. The final research
into accuracy results is a topographic reconstruction of moss beds in Antarctica. This research
found that flying at 50 metres with a GPS unit accurate to between 2 and 4 centimetres and 30 ground
control points produced RMSE values between 3.7 and 4.5 centimetres (Lucieer, Turner, King, &
Robinson, 2014). Here the research finds a much lower ratio of GPS accuracy to model accuracy
with GPS accuracy somewhere between half and equal to the model accuracy.
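For reference, the RMSE figures quoted above are computed from the differences between each check point's modelled coordinates and its GPS-surveyed coordinates. The sketch below shows the standard planimetric and vertical form with invented coordinates; Equation 11 in section 3.4.1 defines the exact version used in this project.

```python
# Hedged sketch: planimetric and vertical RMSE from check point residuals.
# Coordinates are invented; see Equation 11 for the form used in this project.
import math

# (model_x, model_y, model_z, gps_x, gps_y, gps_z) per check point, in metres
check_points = [
    (500.02, 200.05, 10.03, 500.00, 200.00, 10.00),
    (520.01, 230.97, 12.10, 519.98, 231.00, 12.04),
    (480.04, 260.02,  9.95, 480.00, 260.05, 10.02),
]

xy_sq = [(mx - gx) ** 2 + (my - gy) ** 2 for mx, my, mz, gx, gy, gz in check_points]
z_sq = [(mz - gz) ** 2 for mx, my, mz, gx, gy, gz in check_points]
print("planimetric RMSE (m):", math.sqrt(sum(xy_sq) / len(check_points)))
print("vertical RMSE (m):  ", math.sqrt(sum(z_sq) / len(check_points)))
```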
2.3.5 Precision
While accuracy, namely GPS accuracy, is to a large extent a function of things outside of the
UAS and software, precision is much more a function of the imagery and the software. In terms
of a practical example, in a DEM there may be a shift in the z that affects the accuracy, but the z distance between 2 points
in the model can still be relatively correct, and this precision is affected by 3 things: pixel size,
image quality, and the B/H ratio (Mikhail, Bethel, & McGlone, 2001). As previously explained,
the B/H ratio is a function of flying height and distance between the principle points in the
stereo pair. How pixel size is derived has also been explained previously, Eq. 1. Image quality
affects the matching process, with poor quality imagery leading to misaligned images (Agisoft PhotoScan User Manual: Professional
Edition, Version 1.0.0, n.d.). All 3 of these causes of error are software and image dependent,
and thus precision is much more about the imagery itself and is a much more difficult process to
calculate. Many of the software solutions that offer precision are black box, meaning that the
user is not able to see how the precision calculations are done (Mikhail, Bethel, & McGlone,
2001).
There are several sets of research that report the precision of their models, although these
reports are much rarer than reporting the accuracy. One set of researchers flying at 50 metres
AGL found a precision of 11 centimetres from imagery with a resolution of about 1 centimetre
(Wang et al., 2014). Another set of researchers were able to obtain precision of between 9 and
27 millimetres when flying at around 500 metres AGL (d'Oleire-Oltmanns et al., 2013). The
general lack of results is not surprising. Some software programs do not offer internal accuracy
measurements and most offer only the precision of the model based on the GCPs. Finally, many
users do not find a need for precision when in fact they should be considering it instead of the
accuracy, as there are many users of DEMs and ortho-mosaics that do not need accuracy. This
fact is especially true of volumetric calculations, which only need the model to be internally
accurate to record an appropriate value. Whatever the reason, many researchers do not include
the precision of their model, and they are doing a disservice to other researchers.
3. Methods
3.1 Hardware
The major hardware tools used in this project were MikroKopter's 6 propeller UAS dubbed the
hexakopter, 2 GeoXT GPS units produced by Trimble, and a Panasonic Lumix GF1 camera with
the Lumix G 20 millimetre lens attached. The GeoXT is a single frequency GPS unit that can be
combined so that one acts as a base and the other acts as a rover, thus increasing accuracy
further. The Trimble information paper claims a horizontal root mean square accuracy of up to
1 centimetre after post processing with the base station tracking satellites for 45 minutes or
more (GEOEXPLORER 3000 SERIES: GeoXT Handheld, n.d.). The GF1 is a MILC that utilizes the
micro 4/3rds system, meaning that the sensor is smaller than in traditional cameras, and
that the x-axis is 1.33 times larger than the y-axis. This particular sensor measures 17.3mm wide
by 13mm high (Panasonic Lumix DMC-LX100 vs. Panasonic Lumix DMC-GF1, n.d.). It is a 12.1
megapixel camera, which equals approximately 4000 pixels by 3000 pixels with a CCD Pixel size
of 0.00415 mm and a frame weight of about 290 grams (LUMIX® GF1C 12.1 Megapixel
Interchangeable Lens Camera Kit, n.d.). The 20 mm lens has a maximum f/1.7 aperture size,
weighs about 100 grams, and has been factory tested for distortion, figure 6 (Lumix® G 20mm /
F1.7 ASPH Lens, n.d.). The camera with lens attached weighs around 400 grams, which is light enough to be carried by the hexakopter.
Figure 6: Lens distortion for GF1 (Lumix® G 20mm / F1.7 ASPH Lens, n.d.)
The final and most vital piece of equipment is the hexakopter. The hexakopter has 3 major
components: the engines, the autopilot, and the GPS/IMU unit. The 6 engines have the most
obvious job: to keep the UAS aloft and move it through the air. They are powered by a single 4 cell lithium polymer battery (4 x 3.7 volts, 14.8 volts total) with either 6200 or 5000 mAh and a 25
amp capacity, figure 7. This battery power allows the hexakopter to fly for about 15 minutes or
around a total distance of 2200 metres. The 6 propellers consist of 3 sets of 2 counter rotating
propellers that remove the need for the tail rotor found on traditional helicopters. Each of
these propellers is controlled by an engine control board (ECB) that interacts with the power
distribution board, figure 8. The power distribution board is in turn controlled by the autopilot.
The autopilot controls all aspects of the hexakopter's flight, including the triggering of the
remote shutter for the camera. The autopilot also houses the logfile of each flight on a micro SD
memory card. Furthermore it interacts with the ground control station (GCS) and carries out the
mission plan created on the GCS through the use of the Kopter Tool software, figure 9. The
GPS/IMU informs the autopilot and the GCS of the location, speed, roll, pitch, and yaw of the
MikroKopter. This GPS/IMU is a 1 hertz unit, meaning that it gets a position every second,
making it only as accurate as the speed that the unit is flying per second, so if the MikroKopter is
flying at 10 m/s the GPS/IMU is accurate +/- 10 metres. This accuracy is sufficient to roughly
locate the camera but is insufficient to allow for the GPS/IMU to accurately locate the camera
position of each photo without the assistance of GCPs. A second reason that the logfile created
by the GPS/IMU is not as accurate as possible is the gimbal, an articulating attachment that
attempts to maintain a level plane for the camera. The GPS/IMU is located on the top of the
hexakopter, and the camera is located on the gimbal, meaning that the logfile data for roll and pitch do not describe the camera's actual orientation. The gimbal has the advantage of allowing the camera to maintain an almost even plane and therefore avoid missing
data coverage, but it means that the generated logfile is not sufficient to remove the usefulness
of GCPs.
Figure 8: Power Distribution Board with 6 Engine Control Boards attached
3.2 Data Collection
Data collection was done in 2 locations. The first of these is along the Mackenzie River delta, a
location known as Kuukpaak, which is an Inuvialuit archeological site, figure 10. This area is
tundra with tall shrubbery found next to the river. It offers some topographic relief with the
upper tundra area rising about 30 metres above the lower shrubby riverside. The second
location is in the southwest of Calgary in the Springbank Hill Area, figure 11. This location has
less drastic topographic change. However, it does consistently and gradually increase in
elevation from south to north with a total elevation change of about 50 metres. Also, the trees
on site do offer a very drastic change in elevation from ground to tree top, figure 12.
Figure 10: The location of Kuukpaak, marked by the neon green triangle.
Figure 11: The Calgary flying location of the UAS, as marked by the neon green triangle
Figure 12: The full coverage of the flights in the Calgary location.
For data collection to be successful, a calculation of the correct footprint of each image had to
be made so that each flight would have the correct overlap values. To find the correct footprint,
first the CCD pixel element size was calculated based on the number of pixels and the camera
sensor size, eq. 6. Then, using this number, the size of each pixel was calculated. This process
was done in 1 of 2 ways: either by first calculating the instantaneous field of view (IFOV) and then
calculating the pixel size or by calculating the GSD, which is equal to the pixel size, eq. 1 & eq. 7
& eq. 8. With the pixel size value calculated, the length of both the X side of each image and the
Y side of each image were calculated, and because the camera uses a micro 4/3rds image
sensor, the equations for these values are fairly simple, eq. 9 & eq. 10 . The X and Y numbers for
each altitude were used to calculate overlap for 60, 70, and 80 percent. To calculate these
numbers the axis length was multiplied by 1 minus the percent overlap expressed as a number,
i.e. 1-0.8, table 2. These final values were used to create the flight plans. However, Mikrokopter
can only trigger a photo at a given distance interval, and therefore overlap was created by
multiplying the speed of the hexakopter (m/s) by the time between photos (s) to get distance
between photos. In each flight plan, the distance between legs was set so that the leg to leg
overlap was correct. With this rather convoluted system to get overlap, the flights with varying overlaps and altitudes were planned.
Eq. 7: IFOV = CCD pixel element size / focal length (in radians); multiplying the IFOV by the flying height then gives the pixel size on the ground (eq. 8).
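The footprint and spacing arithmetic above can be sketched in a few lines of Python. The sensor values below are the GF1 figures quoted in section 3.1; the function names, the assumption that forward overlap acts along the image's y axis, and the structure of the script are illustrative only and are not the project's actual flight-planning code.

# Footprint, GSD, and photo-spacing calculation for a nadir-pointing camera (sketch).
SENSOR_WIDTH_MM = 17.3   # GF1 sensor width
IMAGE_WIDTH_PX = 4000    # image width in pixels
IMAGE_HEIGHT_PX = 3000   # image height in pixels
FOCAL_LENGTH_MM = 20.0   # Lumix G 20 mm lens

PIXEL_MM = SENSOR_WIDTH_MM / IMAGE_WIDTH_PX   # CCD pixel element size (eq. 6), about 0.0043 mm

def gsd_m(altitude_m):
    # Ground sample distance in metres at a given flying height (eq. 1 / eq. 7 and 8).
    return altitude_m * PIXEL_MM / FOCAL_LENGTH_MM

def footprint_m(altitude_m):
    # Ground footprint (x, y) of a single image in metres (eq. 9 and 10).
    gsd = gsd_m(altitude_m)
    return gsd * IMAGE_WIDTH_PX, gsd * IMAGE_HEIGHT_PX

def photo_spacing_m(altitude_m, overlap):
    # Distance between exposures along a flight line for a given forward overlap (e.g. 0.8).
    _, y_m = footprint_m(altitude_m)      # assumes forward overlap acts along the y axis
    return y_m * (1.0 - overlap)

for altitude in (50, 75, 100):
    x_m, y_m = footprint_m(altitude)
    print(altitude, "m:", round(gsd_m(altitude) * 100, 2), "cm GSD,",
          round(x_m, 1), "x", round(y_m, 1), "m footprint,",
          round(photo_spacing_m(altitude, 0.8), 1), "m between photos at 80% overlap")

Run as written, this prints GSDs of roughly 1 to 2 centimetres across the 3 planned altitudes, in line with the export resolutions reported in section 4.1.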
With the flight plans set up, the GCPs, the odd numbered GPS points, and the CPs, the even
numbered GPS points, which served as accuracy assessment markers, were set out. These 2
markers, also, acted as scaling/real world coordinate markers in the processing stage of the
data. These GCPs were 1 foot by 1 foot pieces of yellow plastic, figure 13. The base station GPS
was set up close to the hexakopter take off location, and then synced with the rover GPS unit.
The rover GPS unit was then taken to each GCP/CP, and 30 readings at one-second intervals were taken at each point. The camera exposure and aperture were set, and the flights for the given days were undertaken with the corresponding flight plan for overlap and altitude. Once each flight was completed, the images were taken from the camera and loaded onto the computer, and the data collection for that flight was complete.
Figure 13: The look and shape of a good quality image GCP or CP
After each flight day, the data from the onboard navigation system (GPS/IMU), the camera, and the GPS were removed from their respective locations and loaded onto a computer. Then a csv file of the GPS/IMU data, called the IMU for this paper's purposes, was created using the .GPX file that the hexakopter creates and the MK GPXTool program that came with the hexakopter. Each IMU file was then changed so that the time format found in the csv file was set to HH:MM:SS AM/PM. With this format, the IMU and the photographs were matched using a python script created for this project. This script matched the time stamp of each photo with one of the values from the IMU and stored it in an excel file. The output, called a logfile for this purpose, was a csv file that emulated the EOPs of each photograph, recorded in its own row, and thus it created the EOPs for each photo taken during a flight.
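The matching script itself is not reproduced here; the snippet below is a minimal sketch of the same idea, assuming illustrative column names ('time', 'lat', 'lon', 'alt', 'roll', 'pitch', 'yaw') in the IMU csv, and taking each photo's timestamp from its file modification time rather than from the EXIF header that the project's script presumably used.

import csv
import os
from datetime import datetime

def seconds_of_day(t):
    # Flights are short, so the time of day is enough to pair photos with IMU records.
    return t.hour * 3600 + t.minute * 60 + t.second

def load_imu(csv_path):
    # Read the IMU csv into (seconds-of-day, row) pairs; 'time' is HH:MM:SS AM/PM.
    records = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            t = datetime.strptime(row["time"], "%I:%M:%S %p")
            records.append((seconds_of_day(t), row))
    return records

def build_logfile(imu_csv, photo_dir, out_csv):
    # Pair every photo with the IMU record closest in time and write one row of rough EOPs per photo.
    imu = load_imu(imu_csv)
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["photo", "lat", "lon", "alt", "roll", "pitch", "yaw"])
        for name in sorted(os.listdir(photo_dir)):
            if not name.lower().endswith(".jpg"):
                continue
            taken = datetime.fromtimestamp(os.path.getmtime(os.path.join(photo_dir, name)))
            sec = seconds_of_day(taken)
            nearest = min(imu, key=lambda rec: abs(rec[0] - sec))[1]
            writer.writerow([name] + [nearest[k] for k in ("lat", "lon", "alt", "roll", "pitch", "yaw")])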
The GPS data had to be post-processed with Trimble's proprietary software that came with the GeoXT GPS units to harness the differential GPS capabilities. This technique used the accurate location of the base station to increase the accuracy of the rover station's measurement of each GCP/CP. The post-processed points were exported out of the Trimble software into a csv format, and then changed into a txt format, which is what Agisoft's PhotoScan expected. To further complicate the issue, the hexakopter recorded altitude (or elevation above ground level (AGL)), not elevation, and the GCP/CPs were recorded in elevation above sea level (ASL). Because PhotoScan could only accept one of these 2 value sets, it was necessary to average the GCP/CPs' elevations and then add this average to the AGL points found in the logfile. This process was done in Excel, and the end product was the hexakopter's rough ASL elevation when each photograph was taken.
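This averaging step was done in Excel for the project; as a sketch, the same arithmetic in Python would be (the variable names are assumed):

def rough_asl(agl_values, gcp_elevations_asl):
    # Add the mean GCP/CP elevation (ASL) to every AGL value from the logfile,
    # giving the hexakopter's approximate ASL elevation for each photograph.
    mean_ground = sum(gcp_elevations_asl) / len(gcp_elevations_asl)
    return [agl + mean_ground for agl in agl_values]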
The photographs were then loaded into PhotoScan, followed by the logfile. It was vital that an image's name exactly matched the corresponding photo name in the logfile so that the software could appropriately match the location in the logfile to the image. Once the logfile was loaded, it was very important to set the known accuracy of each point's location so that the
software did not see the input from the logfile as more accurate than it was. Next the photos
were aligned as accurately as possible with an unlimited number of points in the created sparse
point cloud. The GCPs were then located inside the sparse point cloud to help give the data real
world coordinates and scale. Due to the inaccuracies in the logfile, some of the GCPs had to be
located in individual photos until the sparse point cloud was more appropriately placed in terms
of coordinates. Once all the GCPs were placed, the accuracy of the data was set again so that
the software did not rely too heavily on these points or the logfile. Then the sparse point cloud
was optimized using the GCPs and the camera settings to remove any sort of distortions and more accurately place the points.
The confidence in the photo locations, the camera settings, and the GCPs were fairly high, and
as such a dense point cloud was created with the highest quality possible and with an aggressive
filtering to make sure any faulty points were removed. With the dense point cloud it was
possible to create a polygon mesh, which allowed for the creation of a higher quality
orthorectified photomosaic. Once the polygon mesh was created the DEM, orthophoto and
project reports were exported. The project report has all the details of the processing, including
actual elevation of flights calculated by the photogrammetric process and accuracy of GCPs in
both pixel and centimetre values, as well as precision information about the model in general.
The DEMs in this project were exported into a tiff raster with a world file accompanying it (.tfw)
because a point cloud created too large a data set. The pixel sizes for the DEMs were based on
the PhotoScan default, but rounded up to the nearest centimetre or millimetre depending on
the project. The same process was used for creating the ortho mosaics. In all cases WGS 84 was used as the coordinate system. (An excellent tutorial is available from Agisoft's website, http://agisoft.ru/, that helps guide a user through the process of creating the orthos and DEMs.)
3.4.1 Accuracy
To assess accuracy, a point was marked as close as possible to the centre of each GCP based on the orthomosaic. This process was repeated for each orthomosaic. Then the X, Y,
and Z coordinates of each point in each flight were added to the attribute table. These point
values were then exported into a CSV for each flight. The values from the CSV were then
compared to their respective CP X, Y, Z values collected with the GPS units. The difference
between the 2 values, with GPS location minus GCP location as the equation, was the error at
that given point. With these values it was possible to get a RMSE for the X, Y, and Z values, as shown in Eq. 11.
Eq. 11: RMSE = sqrt( (sum of e_i^2) / n ), where e_i = the error at point i and n = the number of CPs.
The values from each flight were then subdivided into date taken, overlap of photographs, and
altitude of flight. As well, they were divided into altitude of flight at each given overlap,
planimetric and vertical values, and finally into a given overlap at each altitude. With these
divided values, an R2 value was calculated in Excel for all the values except for date taken (as there is no Y-axis number for a date, there would be no way to calculate R2). All of these values were also put into scatterplots to visualize any patterns that may have arisen.
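A minimal sketch of the error and RMSE calculation of Eq. 11 is shown below; it assumes the exported CSVs carry matching point identifiers and X, Y, Z columns, and the column names and file layout are illustrative rather than those of the project's actual exports.

import csv
import math

def load_points(path):
    # Read a csv of points into {id: (x, y, z)}; assumes 'id', 'x', 'y', 'z' columns.
    points = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            points[row["id"]] = (float(row["x"]), float(row["y"]), float(row["z"]))
    return points

def rmse_xyz(gps_csv, model_csv):
    # RMSE of X, Y, and Z over every CP present in both files (GPS value minus model value).
    gps, model = load_points(gps_csv), load_points(model_csv)
    shared = sorted(set(gps) & set(model))
    rmses = []
    for axis in range(3):
        errors = [gps[cp][axis] - model[cp][axis] for cp in shared]
        rmses.append(math.sqrt(sum(e * e for e in errors) / len(errors)))
    return tuple(rmses)  # (rmse_x, rmse_y, rmse_z)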
3.4.2 Precision
Finding precision in the models was very simple. As precision is the measurement of the internal accuracy of the created models, see section 2.3.5, GCPs cannot be used for this measurement. Fortunately, in the project report from PhotoScan there is a per pixel error value. For this project this value will be considered precision's equivalent to accuracy's RMSE. This data and the control points were loaded into an excel spreadsheet for each of the flights. The report also offered the project's GSD prior to export. This GSD was multiplied by the pixel error
to produce the internal error of the model in centimetres. With these values the R2 values of
pixel error and cm error, as the dependent variable, and altitude or overlap, as the independent
variable, were calculated. Also since GCP/CP precision was listed, it was investigated as a
possible explanation for changes in precision in the same manner that altitude and overlap were
investigated. These different independent and dependent variables were also put into scatter
plots to allow for better visualization of any possible patterns. With all these different ways to
look at data, a fairly clear picture should emerge from the investigation of precision.
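The conversion from pixel error to centimetre error and the R2 calculation can be sketched as follows; the flight tuples and names are assumptions for illustration, and the R2 function simply reproduces what Excel's RSQ() reports for a simple linear fit.

def cm_error(pixel_error, gsd_cm):
    # Internal (precision) error in centimetres: the per pixel error scaled by the GSD.
    return pixel_error * gsd_cm

def r_squared(x, y):
    # Coefficient of determination for a simple linear fit of y on x.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# e.g. flights = [(pixel_error, gsd_cm, altitude_m), ...]
# r_squared([alt for _, _, alt in flights], [cm_error(p, g) for p, g, _ in flights])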
3.4.3 Other Variables
To confirm that altitude and overlap, rather than other factors, were driving changes in the accuracy or precision of the flights, investigations into other possible explanations for the change in accuracy from flight to flight were undertaken. All of the flights were undertaken
between 11 in the morning and 2 in the afternoon as the time of day can have an impact on the
accuracy and precision of a model due to shade causing areas to appear black (Gómez-Gutiérrez
et al., 2014). The time of day was selected not to be the most accurate but instead the most
consistent from flight to flight to allow for evaluation of the impact of altitude and overlap
without interference from changes in shadow cover. Another possible explanation for changes
in accuracy was the distance from the exact centre of the model. This process was done by
finding the exact centre using ArcGIS and then creating buffers of 25, 50, 75, and 100 metres.
Then each CP for every flight was put into 1 of those 4 buffer categories, or a 5th category that
was outside of the 100 metre buffer. The error for each of the CPs was then calculated, and
then an RMSE for the x, y, and z values for each category was calculated, as well as a total RMSE.
Further, the outliers of each group were removed based on the 1.5 times of the inter quartile
range, eq. 12 and eq. 13. This removal made it possible to see if CPs at greater distance were vulnerable to greater image distortion. The absence of outliers also demonstrated whether there was a larger range of values as distances from the centre increased.
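The distance grouping and the 1.5 times inter quartile range screen can be sketched as below, assuming the distance of each CP to the model centre has already been measured in ArcGIS and that eq. 12 and eq. 13 are the usual upper and lower IQR fences; the crude quartile calculation is adequate for a sketch only.

import math

def distance_bucket(distance_m):
    # Place a CP into one of the 5 distance-from-nadir categories used in table 8.
    for limit, label in ((25, "<25m"), (50, "25-50m"), (75, "50-75m"), (100, "75-100m")):
        if distance_m < limit:
            return label
    return ">100m"

def without_outliers(errors):
    # Drop values outside the 1.5 x IQR fences (the assumed form of eq. 12 and eq. 13).
    ordered = sorted(errors)
    q1, q3 = ordered[len(ordered) // 4], ordered[(3 * len(ordered)) // 4]
    low, high = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return [e for e in errors if low <= e <= high]

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def rmse_by_distance(cps, drop_outliers=False):
    # cps is a list of (distance_m, error_m) tuples for every CP in every flight.
    groups = {}
    for distance_m, error_m in cps:
        groups.setdefault(distance_bucket(distance_m), []).append(error_m)
    return {label: rmse(without_outliers(errs) if drop_outliers else errs)
            for label, errs in groups.items()}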
A third variable that was considered as much as possible was the time of year. All the flights were done within a
month of each other to avoid too much change in the landscape and in the vegetation. Other
variables that were noted but difficult to control were cloud cover and wind speed. Both of
these were generally minimal but could not be controlled to match exactly from day to day and
flight to flight. The final variable was the image quality of each CP, irrespective of the flight it came from. Each CP was rated on a scale from 1 to 3, where 1 stood for excellent with clearly distinguishable borders and 3
represented a CP that was closer to round than to square due to blurring. Then each of the 3
image quality categories had the overall RMSE calculated to see the impact of image quality on
RMSE.
4. Results
4.1 Collected Data
This project ended up with a total of 16 flights, one from the arctic at 100 metres altitude and 60
percent overlap, and 15 from the horse ranch in Calgary. The Calgary flights had 3 flights at 50
metres, 6 at 75, and 6 at 100. These were equally distributed among the overlaps with 5 at each
of 60, 70, and 80 percent. There are 5 different sets of GCPs and CPs with different accuracies
and precisions (appendix 1) because the flights were undertaken on 5 different days with
different GCP/CP locations due to ownership restrictions. However, the Calgary flights' GCP/CPs
are all fairly similar. As previously stated, the GPS units claim an accuracy of 1cm if tracking for
over 45 minutes. With the GPS never reading a position dilution of precision (PDOP) exceeding 5 and a standard deviation never above 1*10^-4, the accuracy of the GCP/CPs is very high,
comfortably under 20cm. The pixels of every flight at a certain altitude have an expected GSD
based on previously explained equations and input numbers, table 3. However, because of
elevation changes over the 2 sites, especially in Calgary, the actual flying altitude as calculated
by PhotoScan as an average of the altitude of all images taken over a single flight is lower than
the expected values by between 10 and 30 percent, table 4. Furthermore, the ratio of set values
to actual observed values decreases as overlap decreases, and a similar trend is seen as altitude
decreases. GSD has a direct relationship to the altitude of a flight. The same type of relationship
can be seen in the ratio between expected GSD and actual GSD as calculated by PhotoScan,
figure 14.
Table 3: The necessary input numbers to calculate GSD for the 20mm lens
Sensor Width (MM) Focal Length (MM) Image Width (Pixels)
17.3 20 4000
Table 4: The pertinent values for each flight's GSD, including set altitude, the altitude calculated by Agisoft, and the ratio between them
Flight #    Overlap    Set Altitude (m)    Agisoft Altitude (m)    Ratio (Set Altitude to Actual Altitude)
140925_01 60% 50 40.03 0.8007
140925_02 70% 50 39.04 0.7809
140925_03 80% 50 43.65 0.8730
141001_00 60% 100 73.72 0.7372
141001_01 70% 100 75.23 0.7523
141001_02 80% 100 83.11 0.8311
141001_03 60% 75 56.40 0.7520
141001_04 70% 75 57.43 0.7657
141001_05 80% 75 61.30 0.8173
141016_01 60% 100 73.58 0.7358
141016_02 70% 100 76.97 0.7697
141016_03 80% 100 83.68 0.8368
141016_04 60% 75 54.58 0.7277
141023_01 70% 75 63.08 0.8410
141023_02 80% 75 66.49 0.8865
140701_02 60% 100 87.62 0.8762
Figure 14: The Ratio of GSD that should have occurred based on GSD equations and the actual GSD as calculated by
PhotoScan.
The data from each of the 16 flights was exported out of PhotoScan into a DEM and an ortho
with pixel sizes ranging from 8mm to 2cm for the orthos and 2cm to 4cm for the DEMs, table 5.
In all cases DEM resolution was about twice as large as ortho resolution export values.
Table 5: The export values for the DEM and Ortho for each flight
Flight # DEM Export Resolution (CM) Ortho Export Resolution (CM)
140925_01 2 0.8
140925_02 2 1
140925_03 2 1
141001_00 3.5 2
141001_01 3.5 2
141001_02 4 2
141001_03 2.5 1.5
141001_04 2.5 1.3
141001_05 3 1.4
141016_01 3.5 2
141016_02 3.5 1.75
141016_03 4 2
141016_04 2.5 1.25
141023_01 3 1.5
141023_02 3 1.5
140701_02 4 2
4.2 Accuracy
4.2.1 Overlap
As previously explained both the GCPs and the CPs were marked as closely to the centre as
possible in the models and then compared to the equivalent GPS location. These 2 positions
were then compared in each flight to get the RMSE values for the flight with a maximum RMSE
of 213.5cm on flight 141001_00 flown with 60 percent overlap, and a minimum of 7.7cm on
flight 141001_01, table 6. When these values are grouped into each different overlap, one
would expect the 80 percent values to have the greatest accuracy, if the idea that higher overlap
is more accurate is true. However, the group of 80 percent overlap is neither the most accurate
value nor is it on average the most accurate. Instead the 70 percent overlap has the smallest
overall RMSE values, as well as the smallest maximum value. When these values are viewed graphically, no pattern emerges; at most, a faint u-shape might be perceived, figure 15. Furthermore, the R2 value gives credence
to the lack of a pattern with a value of only 0.14, which does not indicate a good fit. With the
above numbers in mind, one can safely say that overlap on its own does not have much effect
on accuracy.
Figure 15: The RMSE compared to the overlap, where there does not seem to be a pattern
4.2.2 Altitude
When accuracy is viewed through the lens of altitude, the picture is no clearer. 75 metre
altitude is the most accurate with the lowest RMSE of the 3 altitudes and the lowest maximum
RMSE of the 3 altitudes, Table 6. The 100 metre group has a wide array of values with several
very accurate flights and several very inaccurate flights, table 6. While the 50 metre group is
rather paltry in its sample size, it is slightly more accurate than 100 metres. A visual
representation bears out the above observations and demonstrates that 100 metres has the
greatest deviation, figure 16. Unsurprisingly, the R2 value for the altitude is even smaller than
the overlap residing at 0.02. In essence, as the altitude increases, there is no correlated increase in error.
Figure 16: The RMSE compared to altitude, which proves to not have much of a pattern either
Breaking the values down into planimetric accuracy, XY, and vertical accuracy, Z, as well as into categories of both altitude and overlap
reveals different data, table 7. With both the planimetric and the vertical accuracies, as altitude
increased at 80 percent overlap, accuracy increased. The opposite is true for 60 percent
overlap, figure 17 and 18. Furthermore, these broken down values demonstrate some
correlation as the 80 percent overlap group has an R2 value of 0.59 when comparing RMSE and
altitude, and the 60 percent overlap group has an R2 of 0.28. However, the 70 percent overlap
group shows no relationship between altitude and RMSE, R2 of essentially 0 (0.0007). When
comparing overlap to RMSE with the planimetric and vertical values, the 100 metre group
increases in accuracy as the overlap increases, whereas the 50 metre group decreases in
accuracy as overlap increases, figure 19 and 20. These categories also demonstrate strong
correlations. The 50 metres group has an R2 of 0.71, and the 100 metres group has an R2 of
0.38. However, the 75 metre group has much less of an R2 value of 0.11. Also, as one would
expect, due to equations for deriving error, the planimetric and vertical accuracies from each
flight fit very closely together with a ratio of about 1 cm planimetric RMSE to 1.5cm vertical
RMSE. Breaking the data down into more incremental parts helps explain the impact of altitude
and overlap as well as demonstrating that the middle values, 75 metre altitude and 70 percent
overlap, are largely the reasons that the larger groups have no conclusive outcomes.
Figure 17: The planimetric RMSE grouped into overlap and compared to altitude
Figure 18: The vertical RMSE grouped into overlap and compared to altitude
Figure 19: The planimetric RMSE grouped into altitude and compared to overlap
Figure 20: The Vertical RMSE grouped into altitude and compared to overlap
4.2.3 Other Variables
As outlined in the methods, other variables were examined to test explanations for the accuracy, and in turn the precision, beyond altitude and overlap. In the case of the distance from the centre of the image, there is little change in accuracy through the different
distances. At the 2 extremes the total RMSE value for within 25 metres was 77.4cm and for
everything greater than 100 metres was 85.2cm. Furthermore, removing the outliers only made
the pattern less obvious. The RMSE decreases as distance from the nadir increases until the final
category where the value almost doubles from around 30 centimetres to 55 centimetres, table
8. A graphic representation of the total RMSE demonstrates that only in the over 100m from nadir category is there a noticeable rise in error, figure 21.
Table 8: The accuracy values for all CPs at different distances from the centre
Distance From Nadir <25m 25-50m 50-75m 75-100m >100m
RMSE (m) 0.774 0.657 0.736 0.745 0.852
RMSE w/o Outliers (m) 0.365 0.339 0.319 0.309 0.555
Figure 21: A scatterplot for RMSE values sees a slight uptick at the over 100m values, but overall no real increase over
distance
The second other variable tested for was how image quality affected the accuracy of the
models. The CPs were divided into 3 categories, 1 for the worst quality images, figure 22, 1 for
the average quality images, figure 23, and 1 for the best quality images, figure 24. This variable
demonstrated a change in RMSE depending on the image quality. The worst CPs, of which there
were 14, had an RMSE of 105.9cm, whereas the highest quality images, of which there were 15, had an RMSE of 52.6cm, with the intermediate quality having an RMSE of 82.8cm. A pattern
emerges from this analysis where higher quality CPs have a lower RMSE than lower quality CPs,
but due to the ordinal nature of the categories, it is not possible to offer further statistical
observations.
Figure 22: An example of a poor quality target, and how the square 1 by 1 shape is almost unrecognizable.
Figure 23: An example of a medium quality target. The shape is recognizable, but the edges are a little blurry
Figure 24: An example of a good target. Where the edge of the GCP/CP stops is very obvious, and blur is limited.
4.3 Precision
4.3.1 Base Values
Precision, or internal accuracy, is often an overlooked piece of information when we deal with
data of this type, but users who do not need georeferenced data should be looking at precision
not accuracy to fit their needs. The precision for this project was calculated in PhotoScan.
PhotoScan exported the data in pixel error, but centimetre error was also calculated by taking
the pixel error and multiplying it by the GSD that PhotoScan had calculated. Precision in this
experiment ranged from a pixel error of 0.57 pixels in flight 141023_02 to an error of 0.34 in
flight 141001_00, table 9. The largest centimetre error was 0.88 cm in flight 140701_02 and the smallest was 0.35 cm in flight 140925_02, table 9.
Table 9: Precision of each flight, in both CM and pixels, grouped into altitude
Flight    Overlap (%)    Altitude (m)    Pixel Error    Control Point Error (Pixel)    Pixel Size (cm)    Internal Error (cm)
140925_01 60 50 0.43 0.32 0.85 0.37
140925_02 70 50 0.41 0.35 0.85 0.35
140925_03 80 50 0.49 0.43 0.92 0.45
141001_00 60 100 0.34 0.50 1.57 0.55
141001_01 70 100 0.43 0.78 1.61 0.69
141001_02 80 100 0.47 0.34 1.77 0.83
141001_03 60 75 0.39 0.39 1.21 0.47
141001_04 70 75 0.42 0.42 1.22 0.52
141001_05 80 75 0.45 0.43 1.32 0.59
141016_01 60 100 0.37 0.30 1.58 0.58
141016_02 70 100 0.40 0.42 1.65 0.67
141016_03 80 100 0.47 0.38 1.79 0.86
141016_04 60 75 0.39 0.36 1.17 0.46
141023_01 70 75 0.53 0.61 1.34 0.72
141023_02 80 75 0.57 0.58 1.43 0.82
140701_02 60 100 0.46 0.44 1.90 0.88
Breaking these results down into their altitude and overlap component parts leads to several
findings. Altitude has little relationship to pixel error as an R2 value of 0.04 demonstrates, figure
25. However, due to GSD increasing with altitude, centimetre error is closely related to altitude, with an R2 value of 0.50, figure 26. Overlap's relationship with precision at the pixel level has an R2 value of 0.44, figure 27, while at the CM level little pattern emerges, figure 28. The GCP/CP precision was also compared to the model precision because PhotoScan offers it and, more importantly, the comparison is vital in understanding if these 2 values have a relationship. The control point precision's relationship to both pixel
error and CM error were small, with R2 values of 0.12 and 0.11 respectively, figure 29 and 30.
This fact allows the user to eliminate GCP/CP precision as a factor in the model's precision.
Figure 25: Altitude (X axis) and Pixel error (Y axis) do not see much of a pattern emerge
Figure 26: Altitude (X axis) compared to CM precision (Y axis) sees error increase as altitude increases
Figure 27: This graph demonstrates a fairly strong positive relationship between overlap (X axis) and pixel error (Y
axis)
Figure 28: Comparing overlap (X axis) and CM precision (Y axis) not much of a pattern emerges
Figure 29: GCP pixel error (X axis) compared to overall pixel error (Y axis) does not demonstrate a pattern
Figure 30: No real pattern emerges when comparing GCP pixel error (X axis) to overall CM error (Y axis)
4.3.2 Categorized Values
Further breaking down the values into categories based on either altitude or overlap offers
more insight into the data. In the case of using altitude as the explanatory variable but breaking
the data into each of the 3 overlap levels, the data has a similar outcome to when the data was
kept together. Each overlap has a very low R2 value when explaining pixel error and a fairly high
value when explaining CM error, appendix 2. In fact the pixel error graphs for both the 3
categories and the overall values demonstrate an almost flat relationship where changes in
altitude do not affect the precision of the model. On the other hand, grouping overlap into the
3 altitude categories does have an impact. The pixel precision versus overlap has strong
relationships at each altitude with R2 values all around 0.5, figure 31 to 33. The biggest change is
with the relationship between CM error and overlap, where one finds much higher R2 values
ranging from 0.31 to 0.63. Thus relationships between overlap and CM precision are much
stronger when broken down into different altitudes, removing the impact of GSD, figure 34 to
36. The results of examining precision error are that both altitude and overlap have an impact on the precision of a model.
Figure 31: Pixel error (Y axis) compared to overlap (X axis) at 50 metres demonstrates a positive relationship (R2 = 0.58) that sees a sharp increase in error as overlap increases
Figure 32: Pixel error (Y axis) compared to overlap (X axis) at 75 metres shows a positive relationship (R2 = 0.49) between error and overlap
Figure 33: Pixel error (Y axis) compared to overlap (X axis) at 100 metres shows a positive relationship (R2 = 0.50) between the axis values
Figure 34: As overlap (X axis) increases, CM error (Y axis) increases in the 50 metre category (R2 = 0.63)
Figure 35: Increasing overlap (X axis) sees sharply increasing CM error (Y axis) at 75 metres (R2 = 0.55)
Figure 36: Increasing overlap (X axis) sees increasing CM error (Y axis) at 100 metres (R2 = 0.31), but a much smaller relationship than the other 2 altitude categories
5. Discussion
5.1 Other Variables
As the results section outlined, a series of variables that were neither altitude nor overlap were
examined to see if there was any relationship between these other variables and accuracy or
precision. The 2 major alternate variables investigated were distance from model centre and CP
quality. The distance from model centre variable saw no meaningful increase in RMSE as values
increased even when outliers were excluded from the data. This outcome is somewhat
surprising because distance from the centre of an image causes greater blur and less accuracy in
the imagery, which one would expect to correspond with a mosaic to a certain extent (Cully,
2013). However, this idea is not borne out by the data. This variable had no impact, most likely because a single location in the mosaic does not rely on any one image's individuality. Thus, overlap removes much of the outer edge of each image, and only at the very farthest extents of each mosaic will there be a need to use the areas of any single image that will have high levels of distortion. In turn this means that there is no degradation in the accuracy as the distance from the model centre increases.
The second major alternate variable identified in measuring accuracy is the quality of the image.
This measurement was done by comparing CP image quality to its RMSE value. The flights
undertaken in this project found that the accuracy of individual CPs, irrespective of the flight
they came from, improved as the quality of the image improved. The worst looking CPs had the
worst accuracy and the best looking had the best accuracy. Admittedly the subjective and
ordinal nature of the data and the simple measuring system are problematic, but the outcomes
are compelling enough to suggest that accuracy is affected by motion blurring whatever the
reason for the blurring. This outcome makes sense in the context of computer vision and
photogrammetry because matching imagery of poor quality can be an issue. Unfortunately,
because the precision measurements used for this paper are black box, we cannot measure the
precision of each CP based on image quality. The relationship between image quality and
accuracy is intriguing and important but not particularly surprising. Furthermore this
relationship reinforces the need for users to understand how to use their cameras in UAS flights,
especially the shutter speeds. However, image quality does not remove the impact that altitude
and overlap have on accuracy. Apparently the image quality is closely related to altitude and
overlap in many ways. Poor image quality can arise from many things, including image tilting
and motion blur, as well as distance from the centre of a given image. All of these causes of
poor image quality are related to altitude and overlap change. As overlap increases there is less
need to use the outer edges of the images, thus reducing image blur and the impact of roll and
pitch. As altitude decreases motion blur will be exacerbated due to the principles of parallax,
and this idea can also be applied to the pixel deformation of objects from increased roll and
pitch. The simplest way to avoid image blurring is through increased shutter speed. This
technique has no relationship per se to altitude or overlap, but as previously explained, changes
in elevation and overlap impact the necessary shutter speed to avoid image blur. There is no
doubt that image quality does have an impact on the accuracy of a given CP, but because image
blur is largely related to altitude and overlap, it is not necessary to discard the impact of overlap and altitude on accuracy.
However, what research has been done on accuracy does not look at altitude or overlap, but
instead focuses on the variables needed to create the maximum accuracy possible. No matter
what they are looking at, these studies tend to have much better GPS units with sub 5
centimetre accuracies than this paper's GPS unit, which has GCP/CP accuracy of between 10 and
20 centimetres. On the whole, this led to data that was much less accurate than the literature
suggests is possible with UASs flying at between 100 and 30 metres. However, using a ratio of
GPS accuracy to RMSE, the values tend to line up with one another. The other papers have a
range of ratio values from 1.37, an RMSE of between 3.7 and 4.5 centimetres and a GPS
accuracy between 1 and 2 centimetres (Lucieer, Turner, King, & Robinson, 2014; Harwin &
Lucieer, 201 . This pape s data has a ratio range of between 14.2 in flight 141001_00 and 1.03
matches what other users have been able to obtain in terms of model accuracy to GPS accuracy
ratios. Model accuracy is in large part a function of the accuracy of the GPS, and the fact that
the numbers and the data from the literature reside in roughly the same area means that the
data for this paper falls roughly in line with what would be expected. When viewing the data
from an altitude standpoint, the 2 largest ratios were at 100 metres, 14.2, and 75 metres, 7.4,
whereas the largest ratio in the literature was found flying at between 30 and 50 metres
(Harwin & Lucieer, 2012). While the effect of flying height is quite obvious, the Harwin &
Lucieer paper researched the number of GCPs necessary to maximize accuracy and that
particular flight used only 6 GCPs, whereas the flights for this project had a minimum of 16
GCPs. This distinction makes it very difficult to compare the 2 datasets and derive any strong
conclusions. However, it does offer the opportunity to see if the altitude accuracies obtained in
this project were roughly the same values as those found in other research, which should assure
the reader that there is no serious error somewhere in the methodology or in the results of this
paper. Another issue with the literature is that there is little to no variability in the amount of
overlap used. All of the papers with a stated RMSE value used very high overlaps, compared to
traditional aerial photography and this project, of between 75 and 95 percent. This lack of
variability and the particular overlaps suggest that users of UASs tend to assume that higher
overlap is better. These high overlaps suggest that these users do not think the B/H ratio is an
important factor in UAS models, and perhaps they are not aware of the error equations involving overlap and altitude. As use of UAS photogrammetry continues to expand, more data about these variables should become available.
If there were a more robust literature on the precision of UAS data, the issues on UAS model
accuracy would not be as problematic. Finding literature that states the precision of that
paper's data is very difficult, and in this paper's literature review only 2 were found. One of these found a precision of 11 centimetres when flying at 50 metres, and the other found a precision of between 9 and 27 millimetres when flying at around 500 metres AGL (Wang et al., 2014; d'Oleire-Oltmanns
et al., 2013). These 2 contradictory findings are problematic, especially when considering this
paper's fairly consistent results regarding precision. Especially noteworthy is the strong
correlation between altitude and error as a centimetre value. This correlation makes sense
because, to a large extent, GSD is a product of altitude, which makes the other papers' findings
so surprising. Why would the UASs flying at lower altitudes have a much worse precision? The
answer likely lies in the way that the precision in each respective paper was measured relative to this paper's definitions. An outright error in either of the reviewed papers seems unlikely, yet neither of the values lines up with the values found in this research. The maximum error in CM was 1.9 cm flying at 100 metres, compared to 2.7 CM flying at 500 metres and 11 CM when flying at 50 metres.
Conversely, flying at 500 metres would lead to an error value of somewhere roughly between 7
and 10 CM, assuming a generally linear upward trend, which is a safe assumption based on the
data presented in the results. The explanation for the differences between what previous
researchers have provided about precision is difficult because there is very little data available
from papers, and the vast majority of papers have no precision data.
A further issue for this project was the change in elevation at the Calgary location. The greater the flight's altitude, and in turn the larger the area covered, the
larger the amount of the sloping northern section involved in a flight. Any flight flying over this
sloping section had a reduced overlap in this area due to a reduction in the altitude AGL because
the flight's AGL is based on the starting location of the flight. This decrease in altitude meant
that there was not enough data to create models that cover the entire flight. The 2 most glaring
examples, flights 141001_00 and 141001_03, have an overlap of 60 percent and a starting
altitude of 100 and 75 metres respectively, figure 37 & 38. These 2 flights had 2 of the highest
RMSE values, 2.13 and 1.12 metres. The change in topography combined with the lower overlap
meant that there was not as much of a buffer to allow for a loss of overlap. Furthermore, there
were fewer images to cover locations farther up the slope as seen in the figures. This lack of
data means that larger portions of photos have to be used, which in turn means that more
geometrically deformed pixels are used in models with smaller overlap. It also means that there is less redundant coverage of objects with large relief displacement, such as trees. These data issues lead to accuracies that are much lower in flights that didn't
have sufficient overlap caused by a decrease in AGL in certain higher elevation locations. To
avoid this situation, a researcher must understand the topography of an area and thus avoid
large swings in the overlap of a project. One particular way to avoid such a situation is the use
of a technique called laddering. Laddering involves changing the flight plan's altitude based on the ground elevation, such that the user ends up with the UAS always being a set altitude above the ground.
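Laddering was not used in this project; as a sketch of the idea, waypoint altitudes could be derived from a terrain source so that the UAS holds a constant height above the local ground rather than above the take-off point (all names here, and the terrain lookup itself, are assumed):

def laddered_altitudes_asl(waypoints, ground_elevation_asl, target_agl_m):
    # waypoints: list of (x, y) positions; ground_elevation_asl: any callable returning
    # the terrain elevation (ASL) at a position; target_agl_m: the desired constant AGL.
    return [ground_elevation_asl(x, y) + target_agl_m for x, y in waypoints]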
Figure 37: Number of photos covering each location of flight 141001_00. As the flight proceeds up the hill there are
far fewer photos to cover each location until there isn't enough coverage for PhotoScan to create a model.
Figure 38: Number of photos covering each location of flight 141001_03. As the flight proceeds up the hill there are
far fewer photos to cover each location.
A second issue is the difference in accuracy versus precision. The most accurate data had 70
percent overlap, but the most precise data, in CM, had 60 percent overlap. This seeming
contradiction is possible because 2 different forces are working against each other. Precision
and accuracy are affected by pixels that are as geometrically correct as possible and by the B/H
ratio. Thus, as overlap increases, the B/H ratio decreases, causing the data to be less precise.
This increase in overlap also means that the geometrically irregular or incorrect pixels nearer the
edges of the images are not necessary for use. Furthermore, fundamental errors in
orthorectification are more of an issue with objects of a higher topographic relief, such as
objects farther from the nadir of an image. In traditional aerial photogrammetry, there are no
opposing forces between the B/H ratio and topographic relief or, in other words, the distance
from the nadir because roll and pitch values tend to be very small. However, in UAS flights roll
and pitch can be quite extreme, which means that these values and the B/H ratio are in essence
working against each other. To further complicate the process, objects such as trees in a UAS
flight have a larger ratio of elevation to flight altitude, and in turn a larger parallax, compared to
traditional aerial flights, which causes greater opportunity for error in the modelling process.
For precision, where the model only has to have the correct locations relative to itself,
geometric errors and occlusion matter less because they do not affect the location of objects
relative to one another, and therefore the least precision error is found in flights where the B/H
ratio is the highest. This difference between precision and accuracy means that the overlap that
maximizes precision is 60 percent overlap. In fact, a smaller overlap could be used if a user were
confident that data wouldn't be missed, going as closely to 50 percent as possible. On the
other hand, accuracy needs the orthorectification process to be correct because while the
computer sees the model as being correct based simply on the data provided, the location of
the CPs end up being misplaced to a greater degree in situations with too little overlap. Too
much overlap in accuracy is no good either because the accuracy is affected by the B/H ratio.
Thus, when flying a UAS for photogrammetric purposes, to maximize accuracy, a balance must
be struck between a large overlap that will have a small B/H ratio, and a small overlap that may
miss data or use pixels where very poor geometry occurs. This idea is backed up by the models
produced for this paper, which were most accurate at 70 percent overlap irrespective of altitude.
These models make it very clear that users must understand the topography of the area, and
how that could possibly impact the accuracy of the data, as well as whether they are going to prioritize accuracy or precision.
Compared to altitude, overlap has a much more complex relationship with accuracy and
precision, and 2 very differing impacts. On the other hand, altitude has a fairly simple
relationship with accuracy and precision. In terms of precision, as altitude increases the precision error increases. In terms of accuracy, there is, in this research, a larger range of values at higher altitudes, such that a 100 metres
altitude had several large error flights but also several very small error flights. Further, 75 and
50 metre altitude flights have the same low error flights, but the higher error flights are not
nearly as high error as the 100 metre flight. The larger error flights are more dramatic at 100
metres, most likely because as altitude increases the impact of roll and pitch are amplified. This
amplification in turn causes greater error due to larger pixel deformity. Thus while this
amplification may have an impact, flights will not necessarily have high pitch and roll. Altitude
has a smaller impact on the data than overlap when only altitude is considered, but when
combined with overlap the issues become more complex, and users must understand both the
positive and negative possibilities of a UAS when they use different altitudes and overlaps. To
maximize precision, flying at a low altitude is undoubtedly the way to go, but to maximize
accuracy, altitude must be considered in conjunction with overlap. What these different values
prove is that users must be familiar with what they hope to extract from their data, and this
paper offers an introduction into what is possible at different altitudes and overlaps.
6. Conclusion
Using a UAS is advantageous for several reasons, including the opportunity to create high spatial
and temporal resolution data while maintaining control of both the collection and processing of
the data. However, this does not mean that creating the data is simply a process of taking
pictures with a UAS and then plugging this data into some sort of software. Users must consider
first whether they need high precision data, data that has internally correct measurements, or
high accuracy data, data that is correct when compared to its real world location. To make this
decision, the user must first understand the UASs, what they are used for and the regulations
that govern them. Also, if users are going to create their own data, they must understand the
basics of how to do so. This involves understanding the best camera to use. In the case of UASs,
a MILC is the best as it offers measurable interchangeable lenses without the excess weight of a
DSLR. Understanding the basics of photogrammetry is also a must, including the impact of roll,
pitch, and yaw, as well as parallax and topographic relief, also known as relief displacement.
Furthermore, the user needs to understand computer vision and other modern advances in
matching algorithms that make most UAS photogrammetric missions possible. A user, equipped
with an understanding of the UAS and how photogrammetric data is created, must now grasp
the data itself. To do so, a user must understand digital elevation models and orthorectified
mosaics, as well as the different types of resolution that a UAS can harness. Most important of
these are temporal, spatial, and pixel. With this background, a user can make informed
decisions about the altitude and overlap necessary for the given purpose.
Unfortunately for users of UAS photogrammetry, there has not been much research into the
impact of overlap and altitude on accuracy and precision. Thus, the thrust of this paper is to
explore the impact of these 2 variables, altitude and overlap. The research finds that overlap
has a more complex impact on the data than altitude does. The impact of altitude on accuracy
is very dependent on the level of overlap used. For precision, the greater the altitude the larger
the precision error. In the case of precision and overlap, the result is simple, as greater overlap
sees a decrease in the precision of the models due to the B/H ratio. However, for accuracy and
overlap, the results are not nearly as simple because the B/H ratio works against pixel geometry.
In UAS flights where roll and pitch can sometimes be rather high, pixel geometry can become
poor, especially at the edges of images. Therefore using higher overlap helps mitigate this issue,
but because of a UAS's small B/H ratio an increase in overlap also decreases the accuracy.
These 2 opposing factors, in combination with the field findings, mean that the 70 percent
overlap flights were the most accurate flights in this project. Furthermore, an increase in
elevation from north to south at the Calgary location caused flights with overlap starting at 60
percent to miss data in locations where the UAS's AGL decreased. This decrease in AGL in turn
caused the overlap to decrease below a useable point. All these different factors point toward
the need for the users to be conscious of both how they collect their data and the location
where they collect their data, especially if it involves substantial topographic change. This
research and the outcomes from it not only inform users of how accurate models can be but
also offer a rough approximation of what to expect from these models when changing a flight's altitude or overlap.
This is just the beginning of research surrounding UAS photogrammetry, accuracy and precision.
One particularly useful inquiry surrounds GPS and how it affects accuracy. In particular, using
only the accuracy of GPS units as a variable for multiple flights would be a useful line of
research. In doing this research, hopefully a pattern will emerge and this pattern will reflect the
interaction between the accuracy of a given GPS unit and the RMSE accuracy of the model.
Of particular interest would be whether the GPS accuracy needs to be around a quarter to a third of the model accuracy rather than much more accurate numbers. Perhaps an expansion of this research could include multiple altitudes or overlaps with multiple GPS accuracies to see how these variables interact.
Another way to expand the research of this paper would be to include many more flights at each
overlap and altitude combination. The research in this paper tracks general patterns, but in
terms of getting a very strong pattern, more flights would help create a more obvious pattern.
Another way to expand on this paper would be to include a greater diversity of altitudes with 5
or more altitudes. Such a test would expand the altitude patterns to the point where a clearer
pattern would show whether accuracy and precision in fact have a linear relationship to altitude or some other form of relationship. With only 3 elevations it is difficult to tell if the relationship between the 2 variables is linear or
otherwise.
Widespread UAS photogrammetry is fairly new and ever expanding. There is a multitude of
different directions for research surrounding accuracy and precision. This sort of research is
vital because many users entering this field are not familiar with photogrammetry. Nor are
many users familiar with aerial photography. Thus creating a framework for accuracy and
precision at different altitudes and overlaps will help demystify the process of photogrammetry
using a UAS. Such a framework will allow more users to harness the power of personal DEMs
and orthorectified mosaics. The ease of using a UAS and creating models from UAS data should
not lull users into a false understanding of their data needs and the geography of where they are
flying. Ultimately, this paper is an outline of what is possible with a UAS at certain altitudes and
overlaps. It demonstrates that users need to be conscious of their surroundings to carry out a successful survey.
Bibliography
3D Models. (n.d.). Retrieved July 8, 2015, from http://www.seos-project.eu/modules/3d-
models/3d-models-c02-p02-s01.html
Airworthiness Certification of Unmanned Aircraft Systems and Optionally Piloted Aircraft. (n.d.)
Retrieved July 8, 2015, from
http://rgl.faa.gov/Regulatory_and_Guidance_Library/rgOrders.nsf/0/10947cee0052205
886257bbe0057bd76/$FILE/8130.34C.pdf
Agisoft PhotoScan User Manual: Professional Edition, Version 1.0.0. (n.d.). Retrieved July 9,
2015, from http://downloads.agisoft.ru/pdf/photoscan-pro_1_0_0_en.pdf
Allen, E., & Triantaphillidou, S. (2012). The manual of photography (10th ed.). Oxford:
Elsevier/Focal Press.
Bendea, H., Chiabrando, F., Giulio Tonolo, F., & Marenchino, D. (2007, October). Mapping of
archaeological areas using a low-cost UAV. The Augusta Bagiennorum test site. In XXI
International CIPA Symposium (pp. 01-06).
CCD Spatial Resolution. (n.d.). Retrieved April 21, 2015, from http://www.andor.com/learning-
academy/ccd-spatial-resolution-understanding-spatial-resolution
Chao, H., Cao, Y., & Chen, Y. (2010). Autopilots for small unmanned aerial vehicles: a survey.
International Journal of Control, Automation and Systems, 8(1), 36-44.
Chao, H., & Chen, Y. (2012) Remote Sensing Using Single Unmanned Aerial Vehicle. Remote
Sensing and Actuation Using Unmanned Vehicles, 101-120.
Chiabrando, F., Nex, F., Piatti, D., & Rinaudo, F. (2011). UAV and RPV systems for
photogrammetric surveys in archaeological areas: two tests in the Piedmont region
(Italy). Journal of Archaeological Science, 38(3), 697-710.
Colomina, I., & Molina, P. (2014). Unmanned aerial systems for photogrammetry and remote
sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 92, 79-97.
Concepts of Aerial Photography. (2015, March 27). Retrieved April 28, 2015, from
http://www.nrcan.gc.ca/earth-sciences/geomatics/satellite-imagery-air-photos/air-
photos/about-aerial-photography/9687.
Cully, A. (2013). Camera Elevation for Use in Aerial Photography on Unmanned Aerial Vehicles
(Master's Thesis). The University of Calgary, Calgary, Alberta, Canada.
d'Oleire-Oltmanns, S., Marzolff, I., Peter, K. D., & Ries, J. B. (2012). Unmanned Aerial Vehicle
(UAV) for monitoring soil erosion in Morocco. Remote Sensing, 4(11), 3390-3416.
Erdos, D., & Watkins, S. E. (2008, April). UAV autopilot integration and testing. In Region 5
Conference, 2008 IEEE (pp. 1-6). IEEE.
EXEMPTION FROM SECTIONS 602.41 AND 603.66 OF THE CANADIAN AVIATION REGULATIONS.
(n.d.). Retrieved July 8, 2015, from
http://www.tc.gc.ca/civilaviation/regserv/affairs/exemptions/docs/en/2880.htm
Flying an unmanned aircraft for work or research. (n.d.). Retrieved July 8, 2015, from
http://www.tc.gc.ca/eng/civilaviation/standards/standards-4179.html#submission
France - UAV Law Expert. (n.d.). Retrieved July 8, 2015, from http://uavlaw.expert/uav-laws-by-
country/france/
Fundamentals of orthorectifying a raster dataset. (n.d.). Retrieved June 15, 2015, from
http://help.arcgis.com/en/arcgisdesktop/10.0/help/009t/009t000000ms000000.htm
GEOEXPLORER 3000 SERIES: GeoXT Handheld. (n.d.). Retrieved July 10, 2015, from
http://www.compasstoolsinc.com/files/GeoXT3000.pdf
Graham, R., & Koh, A. (2002). Digital Aerial Survey: Theory and Practice. CRC Press.
Haala, N., Cramer, M., & Rothermel, M. (2013). Quality of 3D point clouds from highly
overlapping UAV imagery. ISPRS–Int. Arch. Photogramm. Remote Sens. Spatial Inform.
Sci., XL-1 W, 2, 183-188.
Hardin, J.P., & Jensen, R.R. (2011). Small-Scale Unmanned Aerial Vehicles in Environmental
Remote Sensing: Challenges and Opportunities. GIScience & Remote Sensing, 48, 99-
111.
Harwin, S., & Lucieer, A. (2012). Assessing the accuracy of georeferenced point clouds produced
via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery. Remote Sensing,
4(6), 1573-1599.
Hasegawa, H., Matsuo, K., Koarai, M., Watanabe, N., Masaharu, H., & Fukushima, Y. (2000). DEM
accuracy and the base to height (B/H) ratio of stereo images. International Archives of
Photogrammetry and Remote Sensing, 33(B4/1; PART 4), 356-359.
Laliberte, A. S., Goforth, M. A., Steele, C. M., & Rango, A. (2011). Multispectral remote sensing
from unmanned aircraft: Image processing workflows and applications for rangeland
environments. Remote Sensing, 3(11), 2529-2551.
Laliberte, A. S., Winters, C., & Rango, A. (2011). UAS remote sensing missions for rangeland
applications. Geocarto International, 26(2), 141-156.
Li, Z., Zhu, C., & Gold, C. (2010). Digital terrain modeling: principles and methodology. CRC press.
Lucieer, A., Turner, D., King, D. H., & Robinson, S. A. (2014). Using an Unmanned Aerial Vehicle
(UAV) to capture micro-topography of Antarctic moss beds. International Journal of
Applied Earth Observation and Geoinformation, 27, 53-62.
Lumix® G 20mm / F1.7 ASPH Lens. (n.d.). Retrieved July 10, 2015, from
http://shop.panasonic.com/support-only/H-H020.html
LUMIX® GF1C 12.1 Megapixel Interchangeable Lens Camera Kit. (n.d.). Retrieved July 10, 2015,
from http://shop.panasonic.com/support-only/DMC-GF1C-
K.html?supportpage=true#supportpage=true&q=gf1&start=1
McGwire, K. C., Weltz, M. A., Finzel, J. A., Morris, C. E., Fenstermaker, L. F., & McGraw, D. S.
(2013). Multiscale assessment of green leaf cover in a semi-arid rangeland with a small
unmanned aerial vehicle. International Journal of Remote Sensing, 34(5), 1615-1632.
Mikhail, E. M., Bethel, J. S., & McGlone, J. C. (2001). Introduction to modern photogrammetry.
John Wiley & Sons Inc.
- 72 -
Mitishita, E., Côrtes, J., Centeno, J., Machado, A., & Martins, M. (2010). Study of stability analysis
of the interior orientation parameters from the small-format digital camera using on-
the-job calibration. In Canadian Geomatics Conference.
Morgan, D., & Falkner, E. (2010). Aerial Mapping: Methods and Applications. CRC Press.
Niethammer, U., James, M. R., Rothmund, S., Travelletti, J., & Joswig, M. (2012). UAV-based
remote sensing of the Super-Sauze landslide: Evaluation and results. Engineering
Geology, 128, 2-11.
Panasonic Lumix DMC-LX100 vs. Panasonic Lumix DMC-GF1. (n.d.). Retrieved July 10, 2015, from
http://www.digicamdb.com/compare/panasonic_lumix-dmc-lx100-vs-panasonic_lumix-
dmc-gf1/
Prats, X., Santamaria, E., Delgado, L., Trillo, N., & Pastor, E. (2013). Enabling leg-based guidance
on top of waypoint-based autopilots for UAS. Aerospace Science and Technology, 24(1),
95-100.
Rango, A., Laliberte, A., Herrick, J. E., Winters, C., Havstad, K., Steele, C., & Browning, D. (2009).
Unmanned aerial vehicle-based remote sensing for rangeland assessment, monitoring,
and management. Journal of Applied Remote Sensing, 3(1), 033542-033542.
Remotely Piloted Aircraft Systems (RPAS). (n.d.). Retrieved July 8, 2015, from
http://ec.europa.eu/enterprise/sectors/aerospace/uas/index_en.htm
Remotely Piloted Aircraft Systems Symposium. (n.d.). Retrieved April 28, 2015, from
http://www.icao.int/meetings/RPAS/Pages/default.aspx.
Rosnell, T., & Honkavaara, E. (2012). Point cloud generation from aerial image data acquired by
a quadrocopter type micro unmanned aerial vehicle and a digital still camera. Sensors,
12(1), 453-480.
Saint-Amour, Paul K. (2013). Photomosaics. In Adey, P., Whitehead, M., & Williams, A. J. (Eds.).
From above: war, violence, and verticality (pp 120 - 142). Oxford Scholarship Online.
Step 1. Before Starting a Project 1. Designing the Images Acquisition Plan a. Selecting the Images
Acquisition Plan Type. (2015, April 24). Retrieved April 28, 2015, from
https://support.pix4d.com/hc/en-us/articles/202557459-Step-1-Before-Starting-a-
Project-1-Designing-the-Images-Acquisition-Plan-a-Selecting-the-Images-Acquisition-
Plan-Type.
The quadcopter: Control the orientation. (n.d.). Retrieved July 8, 2015, from
http://theboredengineers.com/2012/05/the-quadcopter-basics/
Trinder, John. (n.d.) Current Trends in Photogrammetry and Imaging including Lidar. Retrieved
November 25, 2013, from
- 73 -
http://www.gmat.unsw.edu.au/workshops/2012ssis/Current%20Trends%20in%20Photo
grammetry%20and%20Imaging%20including%20Lidar.pdf
Turner, D., Lucieer, A., Malenovský, Z., King, D. H., & Robinson, S. A. (2014). Spatial co-
registration of ultra-high resolution visible, multispectral and thermal images acquired
with a micro-UAV over Antarctic Moss Beds. Remote Sensing, 6(5), 4003-4024.
Turner, D., Lucieer, A., & Wallace, L.O (2014). Direct georeferencing of ultrahigh-resolution UAV
imagery. Geoscience and Remote Sensing, IEEE Transactions on, 52(5), 2738-2745.
Turner, D., Lucieer, A., & Watson, C.S. (2012). An automated technique for generating
georectified mosaics from ultra-high resolution unmanned aerial vehicle (UAV) imagery,
based on structure from motion (SfM) point clouds. Remote Sensing, 4(5), 1392-1410.
Unmanned Aircraft Systems (UAS) Frequently Asked Questions. (n.d.). Retrieved July 8, 2015,
from https://www.faa.gov/uas/faq/#qn2
Wallace, L.O., Lucieer, A. & Watson, C.S. Assessing the Feasibility of UAV-Based LiDAR for High
Resolution Forest Change Detection. Arch. Photogramm. Remote Sens. Spatial Inf. Sci.,
XXXIX-B7, 499-504.
Wang, Q., Wu, L., Chen, S., Shu, D., Xu, Z., Li, F., & Wang, R. (2014). Accuracy Evaluation of 3D
Geometry from Low-Attitude UAV Images: A Case Study at Zijin Mine. In ISPRS Technical
Commission IV Symposium (pp. 297-300).
Watts AC, Ambrosia VG, & Hinkley EA. (2012). Unmanned Aircraft Systems in Remote Sensing
and Scientific Research: Classification and Considerations of Use. Remote Sensing. 4(6),
1671-1692.
Zarco-Tejada, P.J., González-Dugo, V., & Berni, J.A.J. (2012). Fluorescence, temperature and
narrow-band indices acquired from a UAV platform for water stress detection using a
micro-hyperspectral imager and a thermal camera. Remote Sensing of Environment,
117, 322-337.
Appendix A: GPS Data
GCP/CP GPS coordinates for the flights undertaken on September 25, 2014
POINT  POINT_X  POINT_Y  POINT_Z  PDOP  XY_PRECISION  Z_PRECISION  STD_DEVIATION
1 695005.7675 5656384.028 1166.547 2.5 1.2 0.8 0.000013
2 695038.746 5656376.239 1166.211 2.5 0.2 0.2 0.000037
3 695063.8796 5656370.368 1166.516 2.5 0.1 0.1 0.000027
4 695088.6184 5656363.795 1167.013 2.5 0.1 0 0.000014
5 695112.0022 5656356.105 1167.683 2.9 0.1 0.1 0.000015
6 695157.4404 5656379.256 1169.578 4.5 0.2 0.2 0.000009
7 695143.1133 5656385.293 1169.52 4.5 0.2 0.2 0.000026
8 695101.0692 5656399.054 1169.457 3.9 0.2 0.2 0.000029
9 695078.7806 5656405.079 1169.125 3.9 0.2 0.2 0.000042
10 695060.2465 5656408.367 1168.512 1.9 0.2 0.1 0.000011
11 695036.6521 5656414.577 1167.544 1.9 0.1 0.1 0.000008
12 694995.1537 5656436.215 1167.666 2 0 0 0.000012
13 695022.3632 5656475.201 1174.081 2.6 0 0 0.000017
14 695048.1668 5656457.896 1172.966 2.1 0.2 0 0.000009
15 695068.3279 5656452.05 1173.718 2.1 0.2 0.1 0.000021
16 695092.3758 5656440.153 1173.858 2.6 1 0.9 0.000016
17 695116.199 5656428.818 1173.168 2.6 0.8 0.8 0.000031
18 695140.7841 5656416.83 1172.629 2.6 0.7 0.6 0.000021
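To show one way the appendix values could be summarized, the following is a minimal Python sketch (not part of the original survey workflow). It embeds the first three rows of the table above and reports the mean PDOP and mean XY/Z precision; the assumption that the precision columns are expressed in metres is mine and is not stated in the table.

    from statistics import mean

    # Columns: point, x, y, z, pdop, xy_precision, z_precision, std_dev
    # Values copied from the first three rows of the appendix table.
    rows = [
        (1, 695005.7675, 5656384.028, 1166.547, 2.5, 1.2, 0.8, 0.000013),
        (2, 695038.7460, 5656376.239, 1166.211, 2.5, 0.2, 0.2, 0.000037),
        (3, 695063.8796, 5656370.368, 1166.516, 2.5, 0.1, 0.1, 0.000027),
    ]

    pdop = [r[4] for r in rows]
    xy_prec = [r[5] for r in rows]
    z_prec = [r[6] for r in rows]

    # Report simple summary statistics for the ground control / check points.
    print(f"Mean PDOP:         {mean(pdop):.2f}")
    print(f"Mean XY precision: {mean(xy_prec):.2f} m (assumed units)")
    print(f"Mean Z precision:  {mean(z_prec):.2f} m (assumed units)")

Extending the list to all eighteen points would give a quick overall picture of the quality of the control network used for the September 25 flights.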