DRONE IMAGE PROCESSING: ORTHOPHOTO, DEM AND OVERLAY USING PIX4D
Bipin Paudel
Tribhuvan University
ABSTRACT
The report on "Drone Image Processing Using Pix4D Software" discusses the
use of drone imagery for data collection and analysis. Pix4D is a popular
software package for processing drone imagery. The report covers the
technical aspects of using the software and highlights its benefits and
limitations. It concludes by emphasizing the importance of proper data
processing and interpretation to ensure accurate and reliable results. Overall,
the report provides a comprehensive overview of the use of drone imagery
and the software tools used for processing it. The drone images were taken
over Paschimanchal Campus, Pokhara, using a drone camera (camera model
FC300S_3.6_4000x3000, RGB); the imagery covers 0.08 sq. mi.
(54.3791 acres) with an average Ground Sampling Distance (GSD) of
1.88 cm (0.74 inch).
Keywords:
Orthomosaic, DTM, DSM, GSD
Contents
ABSTRACT .................................................................................................. vi
List of Figures .............................................................................................. viii
List of Tables ................................................................................................. ix
List of Abbreviations .......................................................................................x
1. INTRODUCTION .....................................................................................1
2. SOFTWARE USED FOR DRONE IMAGE ANALYSIS .........................1
3. METHODOLOGY AND STUDY AREA ................................................2
3.1 STUDY AREA ......................................................................................2
3.2 METHODOLOGY .....................................................................4
3.2.1 Initial Processing: ............................................................................4
3.2.2 Point Cloud and Mesh: ....................................................................5
3.2.3 DSM, Orthomosaic and Index: .......................................................6
4. RESULT ....................................................................................................7
4.1 ORTHOPHOTO ....................................................................................7
4.2 DIGITAL SURFACE MODEL.............................................................9
5. ANALYSIS AND DISCUSSION ...........................................................10
6. CONCLUSION .......................................................................................12
7. RECOMMENDATION ..........................................................................12
List of Figures
Fig 1 : Pix4D .................................................................................... 2
Fig 2 : Study Area ............................................................................ 3
Fig 3 : Flight Plan, Top view of the initial image position .............. 4
Fig 4 : Computed image positions, showing uncalibrated images ....... 8
Fig 5: Orthophoto ............................................................................. 8
Fig 6 : Orthophoto Overlay with OSM ............................................ 9
Fig 7 : Digital Surface Model......................................................... 10
Fig 8 : Number of overlapping images computed for each pixel of
the orthomosaic .............................................................................. 11
List of Tables
Table 1 : Initial processing Details ..................................................................5
Table 2 : Point Cloud Densification Details ....................................................6
Table 3 : DSM, Orthomosaic and Index Details .............................................7
Table 4 : Absolute Geolocation Variance .....................................................11
List of Abbreviations
1. INTRODUCTION
2. SOFTWARE USED FOR DRONE IMAGE ANALYSIS
Fig 1 : Pix4D
3. METHODOLOGY AND STUDY AREA
3.1 STUDY AREA
Fig 2 : Study Area
The initial positions of the images, i.e. the flight plan, are shown below. As
shown in the figure, different flight plans were taken. The green line follows
the position of the images in time, starting from the large blue dot.
Fig 3 : Flight Plan, Top view of the initial image position
3.2 METHODOLOGY
PIX4Dmapper divides the image processing into three major steps. They are:
3.2.1 Initial Processing:
PIX4Dmapper first computes key points on the images. It uses these
key points to find matches between the images and prepares the quality
report of this step. The initial processing step involves the following
sub-steps:
a) Image Import: The first step is to import the images into the software.
This can be done by selecting the folder containing the images.
b) Calibration: The software then analyzes the images to determine the
internal and external camera parameters, such as focal length, principal
point, and lens distortion.
c) Image Orientation: The software uses the camera parameters to
determine the position and orientation of each image.
d) Image Matching: The software then matches common features
between images to create tie points, which are used to align the images.
e) Point Cloud Generation: Finally, the software generates a 3D point
cloud from the aligned images.
Initial processing of the available images took about 43 minutes.
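To make the idea behind key-point extraction and image matching concrete, the short sketch below detects features in two overlapping images and matches them. This is only an illustration of the general technique: it uses OpenCV's ORB detector and placeholder file names, not Pix4D's proprietary algorithms.

import cv2

# Load two overlapping drone images as grayscale (file names are placeholders).
img1 = cv2.imread("DJI_0001.JPG", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("DJI_0002.JPG", cv2.IMREAD_GRAYSCALE)

# Detect key points and compute descriptors on each image.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two images (brute force, Hamming distance),
# keeping the strongest matches first; these play the role of tie points.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(len(kp1), "and", len(kp2), "key points,", len(matches), "matches")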
3.2.2 Point Cloud and Mesh:
a) Point Cloud Densification: The software densifies the sparse point
cloud obtained from the initial processing step.
b) Point Cloud Classification: The software classifies the points based
on their location and characteristics, such as ground, vegetation,
buildings, and so on.
c) Digital Surface Model (DSM) Generation: The software generates a
DSM from the classified points.
d) Mesh Generation: The software generates a mesh surface from the
point cloud and DSM.
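As a rough illustration of how a surface model can be derived from a classified point cloud (step c above), the sketch below grids a set of (x, y, z) points and keeps the highest elevation in each cell. It is a simplified stand-in using synthetic points, not Pix4D's actual interpolation.

import numpy as np

# Synthetic (x, y, z) points standing in for a densified point cloud
# covering a 100 m x 100 m area with elevations around 800-830 m.
rng = np.random.default_rng(0)
points = rng.uniform([0, 0, 800], [100, 100, 830], size=(10000, 3))

cell = 1.0  # grid resolution in metres
cols = rows = int(np.ceil(100 / cell))
dsm = np.full((rows, cols), -np.inf)

# Assign each point to a grid cell and keep the highest z value per cell
# (a surface model keeps roof and canopy tops rather than bare ground).
ix = np.clip((points[:, 0] / cell).astype(int), 0, cols - 1)
iy = np.clip((points[:, 1] / cell).astype(int), 0, rows - 1)
np.maximum.at(dsm, (iy, ix), points[:, 2])
dsm[np.isneginf(dsm)] = np.nan  # cells that received no points

print("DSM shape:", dsm.shape, "highest cell:", np.nanmax(dsm))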
3.2.3 DSM, Orthomosaic and Index:
b) Orthomosaic Generation: The software generates an orthomosaic by
projecting the images onto the DSM and blending them together.
c) Index Calculation: The software calculates various indices, such as
NDVI (Normalized Difference Vegetation Index), NDRE (Normalized
Difference Red Edge Index), and so on, using the orthomosaic.
Finally, the software allows users to export the results in various
formats, such as GeoTIFF, LAS, OBJ, and so on, for further analysis.
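As an example of what an index calculation involves, NDVI is defined as (NIR - Red) / (NIR + Red). The sketch below computes it from an exported GeoTIFF using the rasterio library. The file name and band order are assumptions for illustration only; the RGB camera used in this project does not record a near-infrared band, so no NDVI was produced here.

import numpy as np
import rasterio

# Hypothetical multispectral orthomosaic exported as GeoTIFF; the band
# order (1 = red, 4 = near-infrared) is assumed for this example.
with rasterio.open("orthomosaic_multispectral.tif") as src:
    red = src.read(1).astype("float32")
    nir = src.read(4).astype("float32")
    profile = src.profile

# NDVI = (NIR - Red) / (NIR + Red); guard against division by zero.
ndvi = np.where((nir + red) > 0, (nir - red) / (nir + red), np.nan)
ndvi = ndvi.astype("float32")

# Save the index as a single-band float GeoTIFF next to the input.
profile.update(count=1, dtype="float32", nodata=np.nan)
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)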
4. RESULT
4.1 ORTHOPHOTO
The orthophoto below is the processed output of 1089 drone images;
orthomosaic generation took around 6 hours. Of the 1089 images, 1080 (99%)
were calibrated and all 1089 were geolocated. There is a 1.61% relative
difference between the initial and optimized internal camera parameters, and
the median number of matches per calibrated image is 1957.15. The
unnecessary area was clipped out of the developed orthomosaic.
The image below represents the offset between the initial (blue dots) and
computed (green dots) image positions, as well as the offset between the
GCPs' initial positions (blue crosses) and their computed positions (green
crosses) in the top view (XY plane). Red dots indicate disabled or
uncalibrated images. Dark green ellipses indicate the absolute position
uncertainty of the bundle block adjustment result.
Fig 4 : Computed image positions, showing uncalibrated images
Fig 5 : Orthophoto
Fig 6 shows the orthophoto overlaid on the OpenStreetMap (OSM) map of
Paschimanchal Campus. However, slight deviations result, depending on the
accuracy and the number of reference points taken for georeferencing the
orthophoto.
Fig 6 : Orthophoto Overlay with OSM
4.2 DIGITAL SURFACE MODEL
Fig 7 : Digital Surface Model
5. ANALYSIS AND DISCUSSION
Fig 8 : Number of overlapping images computed for each pixel of the orthomosaic
Min Error and Max Error represent geolocation error intervals between -1.5
and 1.5 times the maximum accuracy of all the images. Columns X, Y and Z
show the percentage of images with geolocation errors within the predefined
error intervals. The geolocation error is the difference between the initial and
computed image positions; note that the image geolocation errors do not
correspond to the accuracy of the observed 3D points. Since 2D GCPs were
taken for georeferencing, the geolocation error in the Z axis shows greater
uncertainty. To reduce this error, it is recommended to use a drone with a
stable gimbal, acquire UAV images in optimal weather conditions, use GCPs
obtained from a DGPS survey rather than location data from uncertain
sources, and extend the DGPS logging time for better accuracy.
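To show how such per-axis geolocation statistics can be summarised, the sketch below compares hypothetical initial and computed image positions and reports the mean error, its standard deviation, and the share of images falling inside the ±1.5 × accuracy interval. The numbers are randomly generated for illustration; the real values come from the Pix4D quality report (Table 4).

import numpy as np

# Hypothetical initial (GNSS-tagged) and computed (bundle-adjusted) image
# positions in metres for 1089 images; Z is made noisier than X and Y.
rng = np.random.default_rng(1)
initial = rng.uniform(0, 500, size=(1089, 3))
computed = initial + rng.normal(0.0, [0.8, 0.8, 1.6], size=(1089, 3))

error = computed - initial      # geolocation error per image and axis
accuracy = 5.0                  # assumed geolocation accuracy in metres
low, high = -1.5 * accuracy, 1.5 * accuracy

for axis, name in enumerate("XYZ"):
    e = error[:, axis]
    inside = np.mean((e >= low) & (e <= high)) * 100
    print(f"{name}: mean {e.mean():+.2f} m, sigma {e.std():.2f} m, "
          f"{inside:.1f}% of images within [{low:.1f}, {high:.1f}] m")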
6. CONCLUSION
Using UAVs for data gathering enables efficient and accurate data collection,
resulting in the creation of high-resolution orthophotos of the Paschimanchal
campus. The orthophotos offer a comprehensive and precise representation of
the campus and are useful for designing, managing assets, and monitoring
changes over time. The effectiveness and efficiency of UAV technology for
mapping and surveying have been demonstrated by the successful generation
of the orthophoto of the campus.
7. RECOMMENDATION
The orthophoto production project that utilized UAV technology was
completed successfully and demonstrated the potential of this technology for
mapping and surveying purposes. For future projects that require efficient and
precise data collection, the use of UAVs is recommended, but the condition
of the devices and instruments used should be checked beforehand. To ensure
the orthophoto's accuracy and reliability, it is crucial to implement quality
control measures, such as image distortion checks and the use of multiple
ground control points. GCPs observed with longer logging times are
recommended. A sufficient number of GCPs is essential for achieving a
highly accurate orthomosaic, and better front and side overlap is
recommended. A computer with greater processing power is also
recommended so that processing does not terminate midway.