
ACCURACY ASSESSMENT OF UAV

PHOTOGRAMMETRIC MAPPING BASED ON GROUND


CONTROL POINTS USING PIX4D MAPPER AND
AGISOFT METASHAPE

Thesis submitted in

partial fulfilment of the requirements for the Degree of

Master of Technology

in

Geoinformatics Engineering

By

PRASHANT KUMAR

DEPARTMENT OF CIVIL ENGINEERING


INDIAN INSTITUTE OF TECHNOLOGY
(BANARAS HINDU UNIVERSITY)
VARANASI-221005
Roll No. 22062042                                        July, 2024

CERTIFICATE

It is certified that the work contained in the thesis titled “ACCURACY ASSESSMENT OF UAV PHOTOGRAMMETRIC MAPPING BASED ON GROUND CONTROL POINTS USING PIX4D MAPPER AND AGISOFT METASHAPE” by PRASHANT KUMAR has been carried out under my supervision and that this work has not been submitted elsewhere for a degree. It is further certified that the student has fulfilled all the requirements of the comprehensive examination, candidacy and SOTA for the award of the M.Tech. degree.

Dr. SHISHIR GAUR


(Supervisor)
Assistant Professor
Department of Civil Engineering
Indian Institute of Technology
(BHU) Varanasi-221005

DECLARATION BY CANDIDATE

I, PRASHANT KUMAR, certify that the work embodied in this thesis is my own bona fide work, carried out by me under the supervision of Dr. SHISHIR GAUR from June 2023 to July 2024 at the Department of Civil Engineering, Indian Institute of Technology (BHU), Varanasi. The matter embodied in this thesis has not been submitted for the award of any other degree or diploma. I declare that I have faithfully acknowledged and given credit to the research workers wherever their work has been cited in this thesis. I further declare that I have not wilfully copied others' work, paragraphs, text, data, results, etc. reported in journals, books, magazines, reports, dissertations, theses, etc., or available at websites, included them in this thesis and cited them as my own work.

Date:

Place: Varanasi (PRASHANT KUMAR)

CERTIFICATE BY THE SUPERVISOR(S)

It is certified that the above statement made by the student is correct to the best of my
knowledge.

Dr. Shishir Gaur
(Supervisor)
Assistant Professor
Department of Civil Engineering
Indian Institute of Technology (BHU)
Varanasi-221005

Prof. Sasankasekhar Mandal
Professor and HOD
Department of Civil Engineering
Indian Institute of Technology (BHU)
Varanasi-221005

COPYRIGHT TRANSFER CERTIFICATE

Title of the Thesis: Accuracy Assessment of UAV Photogrammetric Mapping Based on Ground Control Points Using Pix4D Mapper and Agisoft Metashape

Name of the Student: Prashant Kumar

Copyright Transfer

The undersigned hereby assigns to the Indian Institute of Technology (BHU)


Varanasi all rights under copyright that may exist in and for the above thesis
submitted for the award of the Master of Technology.

Date:

Place: Varanasi
Signature of student

Note: However, the author may reproduce or authorize others to reproduce material extracted verbatim from the thesis, or derivatives of the thesis, for the author's personal use, provided that the source and the Institute's copyright notice are indicated.

ACKNOWLEDGEMENT

I express my deep and heartfelt gratitude to my supervisor Dr. Shishir Gaur, Assistant Professor, IIT (BHU), Varanasi, for his valuable guidance and support throughout this project. I am also very thankful to Dr. Anurag Ohri, Professor, Department of Civil Engineering, IIT (BHU), Varanasi, for his knowledge, time, continuous encouragement and motivation, which enabled me to complete my project on time.

I extend my gratitude to Prof. Sasankasekhar Mandal, HOD, Department of Civil Engineering, IIT (BHU), Varanasi, for providing all the facilities required to carry out my project.

I thank all the faculty members of the Department of Civil Engineering, IIT (BHU), Varanasi, for their valuable knowledge and suggestions.

I would like to express my sincere gratitude to my seniors and friends for their help and cooperation throughout these two years.

Last but not least, I would like to take this opportunity to thank my parents and siblings for constantly believing in me and supporting me in following my dreams.

PRASHANT KUMAR

ROLL NO. 22062042

Table of Contents

S.No. Topic Page No.


1.0 Introduction 13
1.2 Component of UAV 13
1.2.1 Frame 16
1.2.2 Propellers 16
1.2.3 Flight Controller 17
1.2.4 Motor 17
1.2.5 Electronic Speed Controller 17
1.2.6 Battery 17
1.3 Types of UAV System 18
1.3.1 According to number of propellers 18
1.3.2 According to size 23
1.3.3 According to range 23
1.4 GIS in 3D Map 24
1.5 Uses of 3D GIS Map 25
1.6 Types of Map 26
1.7 Ground Control Points 29
1.8 Choosing Ground Control Points 29
1.9 How Do I Lay Control Points? 30
1.10 Common mistakes to avoid when laying 30
control points include the following
1.11 Why use Ground Control Points 31
2.0 Literature Review

7
2.1 Assessment of UAV Photogrammetric 32
Mapping accuracy based on variation of
GCP
2.2 Fusion of UAV based DEM for vertical 32
component accuracy improvement
2.3 The test field for UAV accuracy assessment 33
2.4 Modelling farmland topography for suitable site 34
selection of dam construction using unmanned
aerial vehicle (UAV) photogrammetry
2.5 Accuracy of unmanned aerial vehicle (UAV) 35
and photogrammetry survey as a function of
the number and location of ground control
points used.
2.6 Template for high resolution river landscape 36
mapping using UAV technology
2.7 Towards the automatic detection of geospatial 37
changes based on digital elevation models
produced by UAV imagery

3.0 Study Area


3.1 History of Varanasi 39
3.2 Description of JNV 40
4.0 Methodology
4.1 Planning of project 41
4.2 Field work 42
4.2.1 Ground Control Point Marking in 42
the field
4.3 Differential Global Positioning System 43
4.3.1 Base Station 44

8
4.3.2 Rover 44

4.4 Limitation 45
4.5 Flight Planning 45
4.6 Processing 46
4.7.1 Pix4D Mapper 46
4.7.2 Agisoft Metashape 47
4.8 Check point 48
4.9 Ground control points 48
4.10 Accuracy Assessment 48

5.0 Result and Discussion


5.1 Pix4D Results 50
5.2 Agisoft Results 51
5.3 Quality report of Pix4D software 54
5.4 Orthophoto generated by Pix4D 61
5.5 Quality Report of Agisoft Metashape software 64
5.6 Survey Data 65
5.7 Camera Calibration 66
5.8 Camera Calibration 67
5.9 Camera Location 68
5.10 Ground Control Points 69
5.11 Processing Parameters 72
6.0 Conclusion 74

LIST OF FIGURES

Fig. 1 Multi-rotor drone

Fig. 2 Single-rotor drone

Fig. 3 Tricopter drone

Fig. 4 Quadcopter drone

Fig. 5 Hexacopter drone

Fig. 6 Octocopter drone

Fig. 7 Fixed-wing drone

Fig. 8 Taking GCP coordinates by DGPS

Fig. 9 GCP marking on the ground

Fig. 10 Study area Varanasi on the map

Fig. 11 JNV study area

Fig. 12 CORS as base station

Fig. 13 ideaForge RYNO UAV

Fig. 14 Orthophoto of JNV area

Fig. 15 Quality report in Agisoft Metashape software

Fig. 16 Camera locations and image overlap

Fig. 17 Image residuals for FC2403 (4.5 mm)

Fig. 18 Image residuals for FC2403

Fig. 19 Camera locations and error estimates

Fig. 20 GCP locations and error estimates

ABBREVIATIONS

BIM- Building information modelling

CH- Check point

CL- Confidence level

CM- Centimeter

DEM- Digital elevation model

DSM- Digital surface model

DGPS- Differential Global Positioning System

ESC- Electronic speed controller

GCP- Ground control point

GPS- Global positioning system

GIS- Geographical information system

GNSS- Global Navigation Satellite system

KM- Kilometre

LIDAR- Light detection and ranging

SSO- Spectrum survey office

SFM- Structure from Motion

RMSE- Root mean square error

UAV- Unmanned aerial vehicle

3D- Three dimensional

ABSTRACT

UAVs are now widely used to collect high resolution imagery for documenting and mapping the environment and cultural heritage. Such data are currently processed using software based on the Structure from Motion (SfM) concept. This thesis examines the relationship between horizontal and vertical accuracy, and between the accuracy estimated on ground control points (GCPs) and on check points (CPs). 3D mapping is finding rapidly growing application around the globe because of the moderate cost advantage it offers in producing high resolution 3D topographic models compared with Light Detection and Ranging (LiDAR). Unmanned aerial vehicles are used across the world for civilian, commercial and military applications. UAVs are also used for 3D mapping of an area at very high resolution, with accuracy comparable to or better than alternatives such as LiDAR and conventional airborne surveys, although a limitation of UAVs is that they are suited to limited areas rather than very large ones. The accuracy of a 3D map depends on the ground control points (GCPs) and their location and distribution in the area of interest.

In this project the total study area is 123.9211 ha, located in the JNV area of Varanasi, Uttar Pradesh, India. A total of 14 ground control points were surveyed with Differential GPS in network real-time kinematic mode. Different distributions and combinations were used and processed in Pix4D Mapper and Agisoft Metashape software. The best result was found when 10 points were used as ground control points and 3 as check points.

CHAPTER 1
INTRODUCTION

1.1 Introduction to Photogrammetry


Photogrammetry is a straightforward method for mapping and surveying that makes use of photographs. By capturing, interpreting and measuring photographic images, it provides crucial information about physical objects and the environment. At its core, it is the science of making measurements from pictures.

Basics of Photogrammetry

If we break down the word photogrammetry, we can understand what it means: 'photo' means light, 'gram' means drawing and 'metry' means measurement. Photogrammetry can be described as a three-dimensional coordinate measuring technique that uses photographs as the fundamental medium for measurement. It revolves around the idea of extracting information about an object from a collection of photographs of it.

The fundamental concept of photogrammetry is triangulation, in which multiple photos (at least two) are taken to create lines of sight that point to the object. The photos are taken from different angles and locations, which makes accurate calculation possible and helps gather the data a person is looking for. The lines of sight derived from the collected data can then be mathematically intersected to produce the three-dimensional coordinates of the points of interest.
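As a minimal illustration of this intersection step, the following Python sketch (an illustrative assumption, not part of the thesis workflow) triangulates one 3D point from two camera projection matrices and the corresponding image measurements using the linear DLT method.

import numpy as np

def triangulate_point(P1, P2, x1, x2):
    # P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates of the same point.
    # Build the homogeneous system A X = 0 from the two lines of sight.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical cameras: the second is shifted one unit along the X axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
true_point = np.array([0.2, 0.3, 5.0, 1.0])
x1 = (P1 @ true_point)[:2] / (P1 @ true_point)[2]
x2 = (P2 @ true_point)[:2] / (P2 @ true_point)[2]
print(triangulate_point(P1, P2, x1, x2))   # recovers approximately [0.2, 0.3, 5.0]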

Photogrammetry helps in creating 3D models and maps of the real world. The use of photogrammetry increased during World War II, when special aircraft were built to carry powerful cameras designed for aerial photography and better camera positioning. Photogrammetry at that time was used extensively to monitor enemy countries' territories. During the Apollo missions, photogrammetry also helped in mapping the surface of the Moon.

Types of Photogrammetry

Photogrammetry can be classified into two types depending on where the camera is located during photography: aerial photogrammetry and close-range (terrestrial) photogrammetry.

Aerial Photogrammetry

The most commonly used photogrammetric method for mapping a particular area is aerial photogrammetry. In aerial photogrammetry the camera is mounted in an aircraft and pointed vertically towards the ground. As the aircraft follows a flight path, the vertically mounted camera takes multiple overlapping photos of the ground. Traditionally, these photos were examined with a stereo plotter, an instrument that lets the operator see two photos at once in stereo view; it also helps determine elevation by comparing the two photos and supports the necessary calculations. Stereo plotters were used extensively until about a decade ago, but now the photos taken during aerial photogrammetry are processed by automated desktop systems.

Terrestrial Photogrammetry

In terrestrial or close-range photogrammetry, the camera is located on the ground, either handheld or fixed, and images are taken from fixed positions with the camera axis roughly parallel to the Earth's surface. The coordinates and other data of the camera are recorded at the time each photo is taken. Theodolites are among the instruments used for terrestrial photogrammetry. Terrestrial photogrammetry is usually non-topographic, meaning it is not concerned with the arrangement of the physical features of an area. Its outputs are drawings, 3D models and measurements. Cameras can also be used to measure and model buildings, engineering structures, stockpiles, film sets, etc. Terrestrial photogrammetry is also called image-based modelling in the computer-vision community.

Stereo Photogrammetry

Stereo photogrammetry is a technique that estimates the 3D coordinates of points on an object from measurements made in two or more images taken from different positions. Common points identified in the images are used to compute x, y and z coordinates. Stereo photogrammetry is based on stereoscopic principles, which allow the user to create or enhance the illusion of depth by means of stereopsis in binocular vision. Binocular vision follows a simple principle: two slightly different images are presented separately to the right and left eyes, and the viewer's brain combines them to give a perception of 3D. There are many ways of presenting stereoscopic pictures, such as polarization, chroma depth and other stereoscopic techniques. Stereophotogrammetry is turning out to be one of the emerging non-contacting techniques for determining the characteristics and mode shapes of both rotating and non-rotating structures.
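A very small numerical illustration of this principle (using assumed camera values, not data from this thesis): for a rectified stereo pair, the depth Z of a point follows from its disparity d between the left and right images, the focal length f and the camera baseline B as Z = f * B / d.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Depth of a point from a rectified stereo pair: Z = f * B / d.
    return focal_px * baseline_m / disparity_px

# Assumed values: 3000 px focal length, 0.5 m baseline, 25 px disparity
print(depth_from_disparity(3000.0, 0.5, 25.0))   # -> 60.0 m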

1.1 Unmanned Aerial Vehicle (UAV)

An unmanned aerial vehicle (UAV), also called an uncrewed aerial vehicle or drone, is an aircraft without a human pilot on board. UAVs are part of an unmanned aircraft system (UAS), which includes the UAV, a ground-based controller and a communication link between the two. UAVs operate with different degrees of autonomy: either under remote control by a human operator or autonomously using an on-board computer and pre-programmed flight plans. UAVs are used for rescue operations, geographical observation and environmental monitoring, and the technology is now available to assist crews in disaster response areas. UAVs are categorised according to altitude, endurance and weight, and serve a wide variety of applications, including military and commercial ones.

The smallest category of UAV is accompanied by a ground control station consisting of a laptop and other components small enough to be carried in a small car or boat. A UAV system with a high precision camera can fly around the area of interest and take pictures for 3D mapping, which can then be used for monitoring or design purposes. UAVs are made from light composite materials to reduce weight; the strength of composite materials is also important because military drones fly at very high altitude. UAV platforms are fitted with various technology components such as infrared cameras, GPS and laser systems.

1.2 Components of UAV

Multicopters come with different power supply and durability modules, and the quality and performance of a multicopter are determined by its components. The critical components that make up an unmanned aircraft are listed below.

1.2.1 Frame

The frame is the basic framework on which the rest of the structure is built. The frame protects the motors and the other devices so that they preserve stability throughout the flight and keep the vehicle level. There are several types of frame, which define the multicopter; the most common types are tricopters, quadcopters, hexacopters, single-rotor copters and octocopters.

1.2.2 Propellers

Propellers are the propulsion units situated on the arms of a multicopter. Quadcopters have four propeller/motor units, tricopters have three, hexacopters have six, and so on. When buying a UAV or drone, the propellers of the particular model should be checked before investing in it; when building your own quadcopter, low-pitch propellers should be chosen for good stability and low vibration.

1.2.3 Flight controller

The flight controller is the brain of the aerial vehicle; it is used to control the whole flight of the UAV or drone.

1.2.4 Motors
Motors are no less critical than the components mentioned above. The thrust of a multicopter depends on its motors, so it is important to choose high-quality motors for a DIY build. The most widely used type of motor in multicopters is the brushless motor.

1.2.5 Electronic speed controller

An electronic speed controller (ESC) supplies electronically generated three-phase power to the brushless motors. For a home-built quadcopter, an ESC rated around 22 A is suitable; a 25 A unit can also be chosen, although for a quadcopter it may be more than necessary. Before purchasing an ESC, make sure it offers full programming options, including a battery mode.

1.2.6 Battery

Most multicopter builders recommend a lithium polymer battery. Lithium batteries are lighter to carry and provide more energy than other battery types; to increase the endurance of a quadcopter, a battery with a higher mAh rating can be purchased. With these requirements understood, the correct parts can be bought from reliable suppliers and, with a grasp of the essential technical details, assembled into a complete unmanned aerial vehicle.

1.3 Types of UAV System

There are many types of drone, used for different purposes according to their specifications, and they vary in shape and size. A major application of drones is military surveillance: an airborne unit can easily inspect places that people cannot readily enter and can also take pictures of ordinary places.

1.3.1 According to their number of propellers

Multi Rotor

Fig. 1 Multi-rotor drone

If a small camera needs to be put in the air for a short time, it is hard to beat a multi-rotor. They are the easiest and cheapest option for getting an 'eye in the sky', and because they give such precise control over position and framing they are ideal for aerial photography work.

Single Rotor Drone

Fig. 2 Single Rotor Drone

The most common rotor-type drones are multi-rotor designs, which use several rotors to hold their position, but a single-rotor model has only one main rotor; a second, tail rotor only provides heading control. If heavy payloads have to be lifted while a faster flight and longer endurance are required, single-rotor (helicopter-type) drones can be the best choice.

Tricopter

Fig. 3 Tricopter drone

A tricopter has three powerful motors, three speed controllers, four gyros and a single servo. The motors are mounted at the end of each of the three arms, and each arm carries an orientation sensor. Whenever the tricopter has to lift, the throttle is advanced; the gyro sensors immediately pick up the change and pass the signal directly to the controllers, which regulate motor rotation. A tricopter can remain stabilised on its path because it carries so many sensors and so much electronics on board; no manual correction is needed.

Quadcopter

Fig. 4 Quadcopter (DJI Phantom 4) drone

A multi-rotor designed with four rotors is a quadcopter. These devices are generally driven by specially designed brushless DC motors: two of the motors turn clockwise while the other two turn counter-clockwise, which helps the quadcopter land safely. The power source for such devices is usually a lithium polymer battery.

Hexacopter

Fig. 5 Hexacopter drone

A hexacopter serves many potential applications with its six-motor arrangement, in which three motors turn clockwise and the other three counter-clockwise. These devices can therefore generate higher lifting power than quadcopters. Their mechanism is also designed to allow an extremely safe landing.

Octocopter

Fig. 6 Octocopter drone

Octo means eight, so an octocopter has eight powerful motors sending power to eight propellers. This type generally has far greater flying capability than the units discussed above and is also exceptionally stable; steady video can be recorded with an octocopter at almost any altitude. These devices find application in the world of professional photography.

Fixed wing drone

Fig. 7 Fixed wing drone

Fixed-wing drones form a completely different category from all the units above. Their structure is very different from the commonly used multi-rotor types: they resemble conventional aeroplanes and have wings. These drones cannot hover in place, because they rely on forward motion over the wings rather than rotor thrust to stay airborne. They are used for longer recording and survey missions, flying as far as their built-in battery system allows.

Most drone builds, however, are quadcopters, since they can lift reasonable payloads without requiring further structural modification. For the vast majority of needs, the quadcopter is the most sensible choice.

1.3.2 According to the size

Very small drones


These can range in size from that of a large insect up to a unit about 50 cm long. Mini drones and nano/micro drones are the two most common classes in this category. Nano drones are frequently used as basic espionage tools because of their small size and lightweight construction.

Mini drones

Mini drones are somewhat larger than micro drones: more than 50 cm, with a maximum dimension of about 2 m. Most of these models are designed with fixed wings, while a few have rotary wings. Because of their small size, they have limited power.

Medium drones

Medium drones are larger again than mini drones. While a small number of these models have rotary wings, the majority are designed with fixed wings, and they require more power than the smaller classes.

Large drones

Large drones are roughly comparable in size to aircraft and are most commonly used for military applications. Areas that cannot be covered with ordinary aeroplanes are normally captured with these drones, and they are the main tool for observation applications. They can be grouped further into different classes based on their flying ability and range.

1.3.3 According to the range


Very close range drones
These act like a favourite toy for most children. They can fly up to 5 km with a flight time of 20 to 45 minutes when fitted with powerful batteries. The most widely used units in this class are the Raven and Dragon Eye.

Close range drones

Such drones can fly up to 50 km with a battery endurance of 1 to 6 hours. Because they can operate for longer durations and cover greater distances, they find their applications in reconnaissance missions.

Short range drones

Short range drones are slightly more capable than close range drones and are mostly used for military applications. They can travel up to a maximum distance of 150 km, which means their coverage is almost 100 km more than that of close range drones. The rated flight time for short range drones is 8 to 12 hours, so they are valuable for observation and reconnaissance applications.

Mid range drones

This class of drones is far more capable than all those discussed above. They are known as fast drones that can cover distances of up to 650 km. Mid range drones are usually used for observation applications, and some basic types in this class are used for meteorological data collection.

Endurance

This is the class of drones with the longest flight time, up to an impressive 36 hours, and they can easily reach a maximum height of 3,000 feet above sea level. These drones are famous for very high-end reconnaissance applications.

1.4 GIS IN 3D MAP

Like 2D maps, 3D GIS maps delineate objects in greater detail by adding another dimension (z). 3D technology in GIS produces logical depictions that convey the size of real-world objects, and 3D models improve visualisation and review across many different domains. For instance, 3D maps can display a building's or a mountain's height rather than just its surface area. The 3D tools must be used in conjunction with 2D GIS before being visualised in 3D. In the past, more than one software package was needed to view objects and areas within a city. The fields of geography and earth science have been transformed by modern geographic information systems (GIS): the modern GIS interface, made possible by the development of electronic media, enables its operators to create, evaluate and manage land statistical data.

GIS has significantly influenced how frequently mapping is used to solve problems over time. GIS data has historically relied on a two-dimensional representation, which obviously limited its use in many applications. Combining 3D technology with GIS transforms the overall experience, making it more intuitive and enabling precise depiction.

1.5 USES OF 3D GIS MAP

City planning

Today, a large proportion of urban communities face problems of flooding and living space, issues that lead to inappropriate distribution of resources. Incorporating 3D technology in GIS can help government agencies, town planners and others to make appropriate trade-offs and plans in the execution of public works, and to assess and analyse how certain changes in a city will look and how these developments can address future issues as the population grows. A typical 3D model would include information on buildings, satellite imagery and traffic, which urban planners can use to find alternative solutions effectively and handle emergency situations.

Building information modelling

Building Information Modelling (BIM) is a key technology that depicts structures in their real-world context. The blend of BIM and GIS provides the essential capability to assemble a robust model, and the combination of 3D GIS and BIM can aid in the creation of accurate executive designs for structures, ultimately allowing an increasingly detailed, point-by-point investigation of the information.

Coastal information and modelling

Coastal areas are important because they serve as a nation's trade point with the rest of the globe, and they face serious risks and challenges for development everywhere. It is important for planners to understand all the factors that affect the growth and protection of ports, fisheries and mining industries. Efficient and effective organisation of these assets in 3D GIS gives a degree of understanding of the economic and ecological developments along the coast.

Disaster response

3D GIS can help individuals and societies handle disaster events better. In the event of a catastrophe, detailed mapping can give disaster response teams a broad plan by making them aware of the environment they will be dealing with, and by providing supporting information about it.

1.6 TYPES OF MAP


Cadastral maps

Cadastral maps are much more detailed. Individual plans can be combined to create larger cadastral maps. The plans map out specific properties and provide details, such as boundary information, recorded when buildings or land are surveyed. The ancient Egyptians used cadastral mapping, one of the first forms of mapping, to identify land ownership soon after the Nile River flooded the area. The word "cadastral" derives from cadastre, which refers to a public record, survey, or map of the value, extent, and ownership of land for taxation purposes.

Topographic maps

Because it shows various physical terrain elements, a topographic map closely resembles a physical map. However, these maps differ from others in that they depict changes in the land using contour lines rather than colours. On topographic maps, contour lines show elevation changes at regular intervals; the steeper the terrain, the closer the contour lines are to one another. These maps can be used for a range of tasks, including surveying, urban planning, resource management, and outdoor activities like hiking, fishing, and earthmoving. The benefit of relief or topographical maps is that they depict the actual topography of the area, including rivers, mountains, hills, valleys and more. They also show significant highways and landmarks.

Political map

Political maps are made to show the administrative limits of countries, states, districts, cities and towns, and might include some physical features such as waterways, streams and lakes. A defining attribute of a political map is its simplicity: political maps do not show topographic features but concentrate on the national and state borders of a region. They also include the locations of key cities and usually the most noteworthy waterways, according to the level of detail of the map. Although some physical features appear on political maps, including major mountain ranges, their purpose is only to give geographic reference. The boundaries and locations shown are based on human divisions rather than the natural world.

Physical map

A physical map is defined by its depiction of a location's geographical features. The main use of physical maps is to show landforms such as deserts, mountains, lakes and plains, yet they can also include much of the same information found on a political map. They paint a detailed picture of the local terrain using geography, and a physical map displays a region's typical landscape characteristics.

Thematic map

A thematic map shows the spatial distribution of identifiable earth-surface features; it gives an informational depiction over a given area rather than a purely positional one. Image classification is the procedure used to create thematic maps from imagery. The themes can range, for instance, from broad categories such as soil, vegetation and surface water in a general depiction of a rural region, to different kinds of soil, vegetation, and water depth or clarity for a more detailed description. It is implied in producing a thematic map from remote-sensing imagery that the categories selected for the map are distinguishable in the image data. As described in previous chapters, various factors can create confusion among spectral signatures, including topography, shadowing, atmospheric variability, sensor calibration changes, and class mixing within the GIFOV. Although some of these effects can be modelled, others cannot (with any reasonable amount of effort) and must therefore be treated as statistical variability.

Climate map
A climate map depicts the climate of a region. Such maps can display information such as the precise climatic zones of an area based on temperature, how much snow an area receives, or the typical number of cloudy days. These maps typically use colours to distinguish different climatic regions.

Road map
A road map or route map is a type of navigational chart showing mainly roads and transport connections rather than natural geographic detail. One of the more common kinds of map, road maps display minor and major (detailed) highways and roads along with features such as airports, community centres, and points of interest such as beaches, campgrounds and monuments. Primary roadways on a road map are usually shown in red and drawn wider than other highways, whereas local roads may be shown in a lighter colour with a thinner line.

General reference map


Imagine an everyday map on which cities and towns are named, main transport routes are included along with natural features such as lakes and rivers, and you have a general reference map. These are simple maps designed to help you navigate to your destination; they can be understood quickly and show streets and points of interest for visitors. The maps display the borders, names and symbols of ordinary geographical regions, along with significant cultural and physical features such as highways, railways, coastlines, rivers and lakes.


1.7 Ground control points

A ground control point (GCP) is a landmark that can be clearly seen in the original image and whose ground coordinates are known. Ground control points are used to georeference the images. Ground coordinates can be derived using a variety of tools, including the Global Positioning System (GPS), aerial photography, geocoding, vectors, geographic information systems (GIS), topographic maps, chip databases and photogrammetric techniques. A GCP defines the relationship between the raw image and the ground by mapping the pixel (P) and line (L) coordinates of the image to the x, y and z coordinates of the ground.

Fig. 8 Taking GCP coordinates by DGPS
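As a simple illustration of this pixel-to-ground relationship, the following Python sketch (an illustrative assumption, not the procedure used in this thesis) fits a 2D affine transformation from image (P, L) coordinates to ground (X, Y) coordinates using surveyed GCPs and least squares.

import numpy as np

def fit_affine(pixel_pl, ground_xy):
    # Least-squares fit of X = a*P + b*L + c and Y = d*P + e*L + f.
    # pixel_pl : (n, 2) pixel/line image coordinates of the GCPs (n >= 3, non-collinear)
    # ground_xy: (n, 2) corresponding ground X, Y coordinates
    n = pixel_pl.shape[0]
    A = np.hstack([pixel_pl, np.ones((n, 1))])            # columns: P, L, 1
    coeffs, _, _, _ = np.linalg.lstsq(A, ground_xy, rcond=None)
    return coeffs                                          # shape (3, 2)

# Hypothetical GCPs: image position vs. ground position measured with DGPS
pix = np.array([[100.0, 200.0], [1500.0, 250.0], [800.0, 1900.0]])
gnd = np.array([[317201.5, 2802410.2], [317345.8, 2802404.9], [317270.1, 2802240.6]])
coeffs = fit_affine(pix, gnd)
predicted = np.hstack([pix, np.ones((3, 1))]) @ coeffs     # maps pixels back to ground X, Y
print(predicted)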

1.8 Choosing ground control points


The consistency of your ground control points (GCPs) explicitly impacts your area of interest

precision, so this, in effect, will decide your project result. After set of GCPs:

Select apps which you can reliably recognize when the raw picture is resolved.

Pick apps close to the field. Objects, such as buildings, that rise above the ground can tend to lean in

the picture. Thus, a point obtained from the top of the feature may deviate from the true ground

location.

Avoid utilizing the GCPs as backgrounds. While shadows in the picture may be simple to see, they

are not permanent and may change from one picture to the next.

Beware about preferring specific or redundant features such as GCPs, such as parking lots or

29
highway lines. When you seek to recognize the function in the picture, the right one can be difficult

to pick.

1.9 How Do I Lay Control Points?

Fig. 9 GCP marking on the ground

The method of laying both kinds of marked point (GCPs and CHPs) is the same. Take these steps to set down the points:

Determine your mission's altitude. When operating well above ground level (AGL), use large-format control point markers (for example 46" x 46"); smaller markers are sufficient when operating at or below about 40 m above ground level.

Place at least 4 ground control points around the perimeter of your site. You may need to position further GCPs inside the site itself, depending on its size and terrain. There is no fixed minimum spacing for GCPs, but make sure they are spread widely enough apart to prevent ambiguity.

Position the check points around your site; their exact placement is at your discretion.

1.10 Common mistakes to avoid when laying control points include the following

1. Excessive shade or glare will hinder the recognition of your control points on the screen.

2. Verify that the control points are visible by positioning them on flat surfaces and in open spaces. Do not place control points under trees.

3. If your area of interest varies considerably in elevation (hills, cliffs, rivers, caves, etc.), failing to put at least one control point at each of the different elevations will jeopardize your map's accuracy.

1.11 Why Use Ground Control Points?

A ground control point is a precisely defined location that is visibly marked on the ground. This fixed position has GPS coordinates of high precision corresponding to the GCP position. If your GPS device is precise enough, GCPs can improve your data's spatial accuracy. Using a standard DJI drone alone, you will get accuracy values of about +/- 1-5 metres, so the final orthomosaic would be tied to the earth within +/- 1-5 metres. By using well-placed ground control points, you can raise the accuracy to the standard of your high-precision GPS device, so a high-precision GPS collection system should be chosen for the ground control points.

Relative Accuracy

Relative accuracy describes how well the distances between points on a map correspond to the actual distances between those points in the real world. It does not matter where the map is placed on the Earth for it to have good relative accuracy, only that its scale and shape are fairly close to the true representation of your features. Ground control points are not needed to achieve relative accuracy; most maps are reasonably precise in relative terms. This allows precise measurements of distance, area and volume irrespective of the position of the survey.

Absolute accuracy

Absolute accuracy is measured by the distance of the features on the map from their true real-world positions. Just as with relative accuracy, the appropriate size and shape of these features still matter; the difference is that the map as a whole is correctly positioned with respect to the physical world, not just its elements relative to each other. If precise location accuracy is required in your final outputs, high absolute accuracy is a must.
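The difference between the two notions can be shown with a tiny numerical sketch (made-up coordinates, purely illustrative): shifting every point of a map by the same offset leaves the distances between points (relative accuracy) unchanged, while the absolute positional error of every point grows.

import numpy as np

true_pts = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 60.0]])   # true ground positions (m)
mapped   = true_pts + np.array([3.0, -2.0])                      # whole map shifted by one offset

def pairwise_dist(p):
    # Distance between every pair of points.
    return np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)

print(np.abs(pairwise_dist(mapped) - pairwise_dist(true_pts)).max())   # 0.0 -> relative accuracy preserved
print(np.linalg.norm(mapped - true_pts, axis=1))                       # ~3.6 m absolute error at every point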
CHAPTER 2

Literature Review

2.1 Assessment of UAV Photogrammetric Mapping accuracy based on variation


of GCP

Patricio Martínez-Carricondo, Francisco Agüera-Vega, Fernando Carvajal-Ramírez, Francisco-Javier Mesas-Carrascosa, Alfonso García-Ferrer and Fernando-Juan Pérez-Porras carried out a study on 3D mapping generated using a UAV, with accuracy assessed as a function of the number of ground control points and their distribution. The total study area was 17.64 ha and 72 ground control points were used; in this research, 5 types of distribution and 12 different combinations were tested. The study area is located at Campo de Níjar (Almería), southeast Spain. Images were taken with a rotary-wing UAV with eight rotors, and Agisoft PhotoScan Professional software was used for processing. The flight altitude was 120 m, and the horizontal and vertical errors were around 1 and 2 cm. Five distributions were used: edge, centre, corner, stratified and random. For vertical accuracy, the stratified distribution gave the best result of the five: vertical accuracy improved from 0.308 m to 0.043 m as the number of GCPs increased from 4 to 16. The edge distribution gave the best result when 36 GCPs were used, with horizontal and vertical accuracies of 0.035 m and 0.048 m. With the recommended configuration of 20 GCPs at the edge and 2 at the centre, the vertical accuracy was 0.054 m.

2.2 Fusion of UAV based DEM for vertical component accuracy improvement

Ajibola Ismaila Isola, Shattri Mansor, Biswajeet Pradhan and Helmi Zulhaidi Mohd. Shafri aimed at creating a model to improve the digital elevation model produced by a UAV. The study proposes a fusion solution combining weighted average and additive mean filtering algorithms to boost the precision of digital elevation models produced by fixed-wing UAVs, by fusing the poorer-performing DEM with a high-quality DEM made from a multi-rotor UAV. Assessment of the fused DEM gave a root mean square error of 1.14 cm and a vertical accuracy of 2.24 cm at the 95 per cent confidence level. This reflects a drop in the vertical standard error from 18.31 cm to 2.24 cm, an 88% improvement. Unmanned aerial vehicle systems (UAVs) are widely used for the production of precise digital elevation models (DEMs) over relatively large areas at very low cost. The most commonly used DEM fusion algorithms are average interpolation, kriging, regularized spline, inverse distance weighting (IDW) and weighted average. The analysis was conducted within the Universiti Putra Malaysia golf course, which is situated in the south of the campus. Two digital elevation models, named DEM1 and DEM2, are used as input data in this study. DEM1 and DEM2 were extracted from images of rotary-wing and fixed-wing UAVs, respectively, and their vertical accuracies are 1.95 cm and 18.31 cm. LS3D software was used to co-register the two DEMs. The images were first processed with the structure from motion (SfM) algorithm, and gridding was carried out with the deterministic IDW interpolator. Co-registration is the process of bringing two or more images into a common spatial reference to enable proper overlay; its main purpose is to make pixel-to-pixel image analysis easier. The paper then presents the process of improving the vertical precision of the DEM produced by the fixed-wing UAV using the fusion and filtering algorithms. The RMSE and vertical standard errors at the 95 per cent confidence level (in centimetres) are used as measures of model accuracy. The vertical standard error of the fused DEM, shown in their results, improved from 18.31 cm to 2.24 cm, an 88 per cent improvement in the model.
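A minimal sketch of the weighted-average fusion idea discussed above (an illustration under assumed inputs, not the authors' implementation): two co-registered DEM grids are combined with weights inversely proportional to their vertical error variances, so the more accurate DEM dominates the fused surface.

import numpy as np

def fuse_dems(dem1, dem2, sigma1, sigma2):
    # dem1, dem2: 2D elevation grids on the same georeference (metres)
    # sigma1, sigma2: vertical standard errors of each DEM (metres)
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    return (w1 * dem1 + w2 * dem2) / (w1 + w2)

# Hypothetical patch: a rotary-wing DEM (1.95 cm error) and a fixed-wing DEM (18.31 cm error)
dem_rotary = np.array([[52.10, 52.12, 52.15],
                       [52.09, 52.11, 52.14],
                       [52.08, 52.10, 52.13]])
dem_fixed = dem_rotary + np.random.normal(0.0, 0.18, dem_rotary.shape)
fused = fuse_dems(dem_rotary, dem_fixed, sigma1=0.0195, sigma2=0.1831)
print(fused)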

2.3 The test field for UAV accuracy assessment

Paweł Wiącek and Krystian Pyka carried out a study on a test field for UAV accuracy assessment. UAV photogrammetry is a growing mapping and surveying tool, and the growing breadth of work done with UAVs raises the importance of the accuracy of the finished products. The achievable precision depends on the bundle adjustment process, which can be influenced by multiple factors such as unreliable camera calibration, correlation between interior and exterior orientation, and inadequate georeferencing information. One of the project's goals was to prepare a terrestrial test field that helps obtain optimal decorrelation and enables the accuracy of the adjusted block in the UAV workflow to be objectively assessed. Two multi-variant flights were performed over the test area during the project. UAV surveys are becoming more and more varied, including environmental hazard inventories, landscape monitoring and visualization of cultural heritage, and many of these require a high level of detail and therefore quite high geometric fidelity and precision. There are no universal rules concerning the optimum number of GCPs and their spatial distribution. Direct georeferencing of UAV photos has become established in recent years, and most vendors already offer survey-grade RTK GNSS receivers mounted on board. The study area of this project covered 350 x 400 m and was located in the city of Bochnia. Around 150 control points were used in all experiments to assess final accuracy, divided into three groups:

1. Ground control points - 25 designated natural points used during the bundle adjustment phase.

2. Roof check points - 11 points located on building roofs.

3. Check points - approximately 150 identified natural points used to assess the accuracy of the final products in GIS software.

Two flights were flown at different heights: the first varied from 155 to 230 m and the second from 110 to 160 m. The horizontal and vertical accuracies obtained were about 2 and 3 cm respectively. The final 3D accuracy was less than three times the GSD and reached 7.6 cm in the worst case. From all of this it can be inferred that no systematic bending (doming) patterns occurred in these cases.

2.4 Modelling farmland topography for suitable site selection of dam

construction using unmanned aerial vehicle (UAV) photogrammetry

Oluibukun Gbenga Ajayi, Mark Palmer and Akporode Anthony Salubi are the authors of this paper. UAV photogrammetry and 3D mapping are gaining rapid and large-scale application worldwide, mainly because of the fairly low cost benefit they provide when creating high-resolution 3D topographic models compared with LiDAR. This research aims to show the applicability of UAV photogrammetry and geographic information systems (GIS) in modelling the topography of a farmland at Kwandere, Lafia, Nasarawa State, Nigeria, in order to choose an ideal location for constructing an earth-fill dam. A DJI Phantom 2 was used for image acquisition and the flying height was 120 m; Agisoft PhotoScan Professional and ArcGIS software were used for image processing.

The 3D map and digital elevation model were generated using 20 GCPs, with horizontal and vertical accuracies of 1.7308 and 1.96 respectively. After the acquired image data was analysed photogrammetrically, the following 3D products were generated: sparse point cloud, dense point cloud, digital surface model, orthophoto and digital elevation model, followed by contours, the DEM map and a vector map displaying flow magnitude and direction. The sparse point cloud can be regarded as a triangulation of the matched tie points; it has only limited use in 3D software on its own, and a dense point cloud is created from it by reconstruction of the surface and depth.

2.5 Accuracy of unmanned aerial vehicle (UAV) and photogrammetry survey as

a function of the number and location of ground control points used.

Enoc Sanz-Ablanedo, Jim H. Chandler, José Ramón Rodríguez-Pérez and Celestino Ordóñez carried out a study reflecting on the crucial role of the number and location of the ground control points (GCPs) used in the georeferencing process. In total, 1,200 ha, 102 GCPs and about 2,500 photos were used for the accuracy assessment. A total of 3,465 bundle adjustments were computed with various combinations of control points, while the accuracy of the model was tested using both the control points and independent check points. The longitudinal and side overlaps were 75% and 60% respectively, the software used for processing was Agisoft PhotoScan Professional, the flying height was 120 m and the ground sample distance was 6.86 cm. The 3,465 different combinations were used to find the best accuracy: the first run used 3 GCPs and 99 check points, and the last used 101 GCPs and 1 check point. For better accuracy, this research recommends using more than 3 GCPs per 100 photos. The study area is located to the south of the Cordillera Cantábrica, near the village of Santa Lucía in León, Spain. When 9 to 100 GCPs were used, the accuracy obtained was around 12 cm.

2.6 Template for high resolution river landscape mapping using UAV technology

Miloš Rusnák, Ján Sládek, Anna Kidová and Milan Lehotský carried out this study. Geomorphological mapping includes primary landform data collection and is based on the size and changes of the properties of the studied feature. Landforms are commonly identified using field work and remotely sensed data with object classification frameworks, offering accessibility, flexibility and both temporal and spatial accuracy. River morphology is among the most dynamic landscape properties, so precise generation of the mapping dataset and topography is important for linking process, pattern and spatio-temporal volumetric changes. The study area is located at a knickzone reach, 1.6 km long, of the braided Belá River in the north of Slovakia. A total of 38 ground points were used in this study, of which 20 were taken as GCPs and 18 as check points. The distribution and accuracy of ground control points (GCPs) are the main factors in accurate UAV mapping, as this methodology is 5 to 10 times more accurate than direct georeferencing with the UAV's internal GNSS. Three or more GCPs are required for georeferencing, and the PhotoScan software requires a minimum of 10 GCPs for model referencing. The resolutions adopted for the DEM and the floodplain orthophoto were 5 and 6.46 cm respectively. Six take-off and landing sites were used in the study area, and ArcGIS, Terrasolid and Agisoft software were used for image processing. The study highlights that the UAV orthophoto mosaic, combining high-resolution nadir, horizontal and oblique images, supported vectorisation of landforms and ground cover at dimensions larger than 10 cm. The average root mean square error (RMSE) for all GCPs after alignment was 0.01915 m (x = 0.01474 m; y = 0.02272 m; z = 0.10093 m). For the check points, the vertical RMSE was 0.02836 m and the horizontal RMSEs were 0.02459 m for X and 0.02812 m for Y, with an overall value of 0.02642 m.
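As a short illustration of how such RMSE figures are obtained (a generic sketch, not the authors' code), the snippet below compares surveyed check-point coordinates with the coordinates of the same points estimated from the photogrammetric model.

import numpy as np

def rmse_per_axis(measured, estimated):
    # measured : (n, 3) DGPS-surveyed check-point coordinates (X, Y, Z)
    # estimated: (n, 3) coordinates of the same points read from the model
    diff = estimated - measured
    rmse_xyz = np.sqrt(np.mean(diff**2, axis=0))                 # RMSE for X, Y, Z
    rmse_xy = np.sqrt(rmse_xyz[0]**2 + rmse_xyz[1]**2)           # combined horizontal RMSE
    return rmse_xyz, rmse_xy

# Hypothetical check points with residuals of a few centimetres
measured = np.array([[100.000, 200.000, 80.000],
                     [150.000, 240.000, 81.200],
                     [120.000, 260.000, 79.500]])
estimated = measured + np.array([[0.020, -0.010, 0.030],
                                 [-0.015, 0.020, -0.025],
                                 [0.010, 0.015, 0.020]])
print(rmse_per_axis(measured, estimated))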

2.7 Towards the automatic detection of geospatial changes based on digital
elevation models produced by UAV imagery
T. Bauman, O. Almog and S. Dalyot carried out a study in open urban and desert areas, where Pix4D was used to produce DEMs from images collected by a consumer-level, off-the-shelf UAV, a DJI Phantom 4 Advanced. To determine the robustness and potential of the approach, the performance of SIFT on a DEM data structure was first compared against the results obtained from the corresponding orthophoto produced by the software. For the urban environment, 300 images were taken, creating a database of around 0.05 sq km; the same region was surveyed twice at two different dates. An orthophoto and a DEM were created from the data, both with a resolution of 0.25 m. On the orthophoto only 26 homologous points were identified and matched, whereas 212 homologous points were identified and matched on the DEM generated for the same region. The tests show that while the orthophoto can be affected to a significant degree by current weather and development conditions, which influence point recognition, these factors have considerably less impact on the DEM. In this study the approach is based on three key stages: (1) identification and matching of reference points, (2) spatial transformation, and (3) change detection with noise filtering. Digital image feature-matching methodologies were applied to identify homologous reference points. After the homologous points were defined, several transformations were computed to evaluate the strategy: first, a global affine transformation between the two databases was estimated from all homologous points, one database was transformed onto the other, and the height differences were determined per DEM point. In the results of this method, height increases of more than half a metre are shown in green and height decreases of more than half a metre in red. For the desert region, 450 photographs were taken, creating a database covering around 0.02 sq km; the same region was surveyed twice, with a one-hour gap between flights, and a DEM with a resolution of 0.25 m was produced from the data.
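To make the transformation-and-differencing step concrete, the following sketch (illustrative only, with made-up elevation grids) flags DEM cells whose height change between two co-registered survey epochs exceeds the half-metre threshold mentioned above.

import numpy as np

def classify_height_changes(dem_old, dem_new, threshold=0.5):
    # dem_old, dem_new: 2D elevation grids on the same georeference (metres)
    # Returns +1 where the surface rose by more than the threshold,
    # -1 where it dropped by more than the threshold, and 0 elsewhere.
    diff = dem_new - dem_old
    change = np.zeros(diff.shape, dtype=int)
    change[diff > threshold] = 1
    change[diff < -threshold] = -1
    return change

# Hypothetical 0.25 m resolution DEM patches from two epochs
dem_t1 = np.full((4, 4), 120.0)
dem_t2 = dem_t1.copy()
dem_t2[1, 1] += 0.8   # e.g. a new stockpile
dem_t2[2, 3] -= 0.6   # e.g. an excavation
print(classify_height_changes(dem_t1, dem_t2))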

CHAPTER 3

Description of the study area

Varanasi, also known as Banaras or Kashi, is a city in the southeastern part of the Indian state of Uttar Pradesh. One of Hinduism's seven sacred cities, it is located on the left bank of the Ganges (Ganga) River. Its population was 1,091,918 in the 2001 census (urban agglomeration 1,203,961) and 1,198,491 in 2011 (urban agglomeration 1,432,280). Varanasi has one of the finest riverfronts in India, with miles of ghats (religious bathing steps) and a variety of shrines, temples and palaces rising tier by tier from the water's edge. The city's interior streets are too narrow, winding and congested for vehicle traffic. Hindu devotees hope to walk the Panchakosi road that surrounds the holy city once in a lifetime and, if possible, to pass away there in old age. The city receives more than a million visitors each year; it also attracts a large influx of domestic and international tourists, and the tourism industry plays a vital role in the local economy. The most revered of the many temples in the area are those of Vishvanatha, dedicated to Shiva; Sankatmochana, dedicated to Hanuman; and Durga. The big trees next to the Durga Temple are home to the swarms of monkeys for which it is well known. Another famous holy building is the Great Mosque of Aurangzeb. Two of the more prominent modern temples on the Banaras Hindu University campus are those of Tulasi Manas and Vishvanatha, and hundreds of other temples are found in the area. There are remnants of ancient Buddhist monasteries and temples at Sarnath, only a few miles north of Varanasi, as well as temples founded by the Maha Bodhi Society and by Chinese, Burmese and Tibetan Buddhists. The latitude and longitude coordinates of Varanasi are 25.321684, 82.987289.

Located in the Indo-Gangetic Plains of North India, the land is very productive, as low-level floods of the Ganges continuously replenish the soil. Varanasi lies between the confluences of the Ganges with two rivers, the Varuna and the Assi. The path between the two confluences is around 2 miles (4 km) and serves as a sacred travel route for Hindus, culminating in a visit to the Sakshi Vinayak Temple. Tourism is the second most significant sector of Varanasi's economy. Domestic tourists most commonly travel for religious purposes, while international tourists visit the ghats along the Ganges and Sarnath. Most domestic tourists come from Bihar, West Bengal, Madhya Pradesh and other parts of Uttar Pradesh, while Sri Lanka and Japan account for the bulk of international tourists. The peak tourist season runs from October to March. In total, there are about 12,000 hotel beds available in the city, around half of which are in cheap budget hotels and a third in dharamsalas. On the whole, the tourist infrastructure of Varanasi is not well developed.

3.1 History of Varanasi

Among the world's oldest continuously inhabited settlements, Varanasi's early history is that of the first Aryan settlement in the middle Ganges valley. By the late 2nd millennium BCE, Varanasi was a centre of Aryan philosophy and religion, as well as a commercial and manufacturing hub well known for its sculpture, ivory work, perfumes, and muslin and silk fabrics. As attested by the renowned Chinese traveller Hsüan-tsang, who visited it around 635 CE and reported that the town stretched along the western bank for about 5 km, the capital of the Kingdom of Kashi was already a hub of worship, education and creative endeavour in the time of the Buddha (6th century BCE), who delivered his first sermon at nearby Sarnath. From 1194 onwards, Varanasi entered a devastating period of about three centuries under Muslim rule; buildings were torn down and scholars had to flee. In the 16th century, with the accession of the tolerant emperor Akbar to the Mughal throne, the city was given a certain religious respite, but much of this vanished again when the tyrannical Mughal emperor Aurangzeb came to power in the late 17th century. Finally, the 18th century gave Varanasi back its lost glory. When the British proclaimed it a separate Indian state in 1910, it became an autonomous kingdom, with Ramnagar as its capital. After India's independence in 1947, Varanasi became part of the state of Uttar Pradesh.

39
Fig. 10 study area of Varanasi in map

3.2 DESCRIPTION OF JNV AREA

Jawahar Navodaya Vidyalaya (JNV) is located in Varanasi district of Uttar Pradesh, about 40 km from the Banaras Hindu University campus. The total area of JNV is 123.9211 ha. It is a flat, plain site near the Ganga river basin in Uttar Pradesh, with very little construction.

CHAPTER 4

METHODOLOGY

4.1 OBJECTIVES

1. To demonstrate the application of UAV photogrammetry in heritage mapping.

2. To demonstrate the use of a low-cost UAV as a mapping tool for small areas.

3. To carry out an accuracy assessment of the maps prepared using UAVs and photogrammetric software.

4. To estimate the number of GCPs needed to achieve the desired accuracy.

4.1 PLANNING OF PROJECT

Planning is the first and most important step of any project; done carefully, it saves both time and cost. The study area is JNV, a plain area near the Ganga river basin, covering a total of 123.9211 ha.

We followed these steps before going to the field:

1. We opened Google Earth, navigated to the JNV location, and zoomed in to delineate the study area of 123.9211 ha.

2. Ground control points should be spread uniformly over the whole area so that the desired accuracy is achieved. We marked 14 GCPs distributed across the area (a rough spacing estimate is given in the sketch below).
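As a rough indication of what "uniform distribution" means for this project, the average spacing implied by 14 GCPs over 123.9211 ha can be estimated as follows (illustrative arithmetic only, not a formal GCP design):

import math

# Rough check of how far apart 14 uniformly distributed GCPs end up
# in a 123.9211 ha study area.
area_ha = 123.9211
n_gcps = 14

area_m2 = area_ha * 10_000            # 1 ha = 10,000 m^2
area_per_gcp = area_m2 / n_gcps       # ~88,500 m^2 per GCP
spacing_m = math.sqrt(area_per_gcp)   # side of an equivalent square cell

print(f"~{area_per_gcp:.0f} m^2 per GCP, i.e. roughly one GCP every {spacing_m:.0f} m")
# -> roughly one GCP every ~298 m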
4.2 Field Work

4.2.1 Ground Control Point Marking in the field

An ideal GCP marker can be as simple as two intersecting lines or two squares of different colours; the objective is to provide a clearly distinguishable feature in the region being flown. GCPs should be placed in open areas so that the satellite signals are not disrupted by buildings or overhead cover, and each GCP should be visible in the imagery at the specified flight altitude. If a GCP is obscured, poorly defined, or cannot be plainly seen in the map images, its centre cannot be accurately identified during processing: no matter how precisely the GPS measures the point, the GCP is only as accurate as the ability to place the cursor on its centre in the photographs, close to where it was measured on the ground.

GCP marking was carried out at the locations pre-defined on the clipped Google Earth image. Each GCP mark measured 1 ft × 1 ft and consisted of two squares of different colours. White and black were used because these colours are clearly visible from high altitude, and good-quality paint was used. GCP readings were taken at the centre of each mark, at the corner where the two colours meet. In total, 14 GCPs were marked, of which 9 were placed near the corners of the area and the remaining 5 towards the centre, distributed uniformly. Each GCP was numbered beside its mark, which greatly helped during image processing.
4.3 Differential Global Positioning System

The Global Positioning System (GPS) is a satellite-based radio navigation system operated by the United States Department of Defense. The signals transmitted by the satellites carry the data that a receiver needs to pinpoint its location: signals from at least three satellites are needed to determine a 2D position, and from four satellites to determine a 3D position. Positional accuracy is typically within about 8 metres (95 per cent confidence).

GPS comprises three segments: space, control, and user. The space segment includes the 24 operational NAVSTAR satellites, which orbit the Earth at an altitude of about 20,200 kilometres with a period of about 12 hours. Each satellite carries several high-accuracy atomic clocks and continually broadcasts radio signals with a unique identification code. The computed position would be more accurate if the satellite signals were not affected by the ionosphere, the troposphere, satellite clock errors, and satellite ephemeris errors. For greater accuracy a DGPS receiver should be used; in practice the positional accuracy is then usually better than 1 metre. Differential GPS (DGPS) exploits the fact that two receivers located relatively close together experience similar atmospheric errors. To use DGPS, one GPS receiver must be installed at a precisely known position; this receiver is the base station or reference station. The base station computes its position from the satellite signals and compares it with its known position, and the difference is passed as a correction to the second GPS receiver, known as the rover. The correction can be applied to the rover data either in real time in the field via a radio link, or after data capture through post-processing with specialised software.
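As a minimal illustration of this differential principle, the sketch below applies a base-station correction to a rover position. The coordinates are hypothetical, and this is a simplification: real DGPS corrections are applied per satellite to pseudorange measurements, not to the final coordinates.

# Simplified illustration of the DGPS idea (hypothetical values):
# the base station knows its true position, so the difference between its
# known and GPS-computed coordinates approximates the common error shared
# by a nearby rover, and can be removed from the rover's raw position.

def differential_correction(base_known, base_measured, rover_measured):
    """Apply a base-station correction to a rover position (E, N, U in metres)."""
    correction = tuple(k - m for k, m in zip(base_known, base_measured))
    return tuple(r + c for r, c in zip(rover_measured, correction))

base_known    = (445000.000, 2800000.000, 80.000)   # surveyed CORS coordinates
base_measured = (445001.200, 2799999.100, 81.500)   # GPS solution at the base
rover_raw     = (445350.900, 2800420.400, 82.300)   # uncorrected rover position

print(differential_correction(base_known, base_measured, rover_raw))
# ≈ (445349.7, 2800421.3, 80.8): the shared atmospheric/clock error is removed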

4.3.1 BASE STATION

The base station is set up at a precisely known point, either a permanently monumented mark or a base DGPS receiver mounted on a tripod. Because its location is established, the base station can compute corrections: the satellite constellation places it at a slightly different position, and the difference is used to generate corrections that are returned to the rover in real time. Our base station is a CORS (Continuously Operating Reference Station). These are fixed GPS receivers that accurately measure their position and transmit correction data to DGPS receivers in the field; this correction data improves the accuracy of GPS positions by correcting errors caused by factors such as atmospheric disturbances and satellite orbit deviations.

4.3.2 ROVER
The rover is set up at each marked GCP point whose coordinates are to be determined. It is connected to the base station via Bluetooth and to the satellites via their signals, and its coordinates are corrected using the base station data. The rover moves around the study area to the GCP marks of interest, while the base remains fixed at a point with known coordinates. All 14 GCP marks were recorded with the rover, with an occupation time of about 4 minutes per point. A green satellite indicator on the DGPS means that more than four satellites are connected. Our rover is denoted as 34, which is its DGPS Bluetooth name.

Fig. 12 CORS as base station
4.4 Limitation
When we moved through covered areas, inside roofed buildings, or under trees, the satellite signals were lost; re-establishing the connection between the satellites and the DGPS and storing the points then took extra time, around 5 to 6 minutes.

4.5 FLIGHT PLANNING


We used a DJI Phantom 4 drone to capture images of the 123.9211 ha study area. The flight planning software was BlueFIRE Touch, installed on our laptop. A single-grid mission was used for image acquisition. The flying height was 80 m above ground level and the longitudinal overlap was 80%. A total of 1830 images were taken over the area of interest, and all 1830 were subsequently calibrated. All images were captured in good weather conditions so that good-quality images were obtained. The average ground sampling distance (GSD) of the images is 2.15 cm.
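As a rough cross-check of the flight parameters, the nominal GSD can be computed from the camera geometry reported later in the quality report (16 mm focal length, 23.333 mm sensor width, 6000 px image width) and the planned 80 m flying height. This is a sketch under those assumptions; the small difference from the reported average of 2.15 cm is expected because the actual height above the terrain varies along the flight.

# Nominal ground sampling distance (GSD) for a nadir image:
# GSD = (sensor_width * flying_height) / (focal_length * image_width)
sensor_width_mm = 23.333   # from the camera model in the quality report
focal_length_mm = 16.0
image_width_px  = 6000
flying_height_m = 80.0     # planned height above ground level

gsd_m = (sensor_width_mm * flying_height_m) / (focal_length_mm * image_width_px)
print(f"Nominal GSD: {gsd_m * 100:.2f} cm/pixel")   # ~1.94 cm; reported average is 2.15 cm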

Fig. 13 ideaForge RYNO UAV

4.6 Processing
All the data collected in the field were brought to the computer laboratory for processing. The images taken by the drone and the ground control point coordinates measured with the DGPS were processed in two different software environments: Pix4D Mapper and Agisoft Metashape were used for image processing, and Spectrum Survey Office was used for processing the ground control points.

4.7.1 PIX4D Mapper:


Pix4Dmapper is image-processing software that automatically transforms photographs captured by drone, by hand, or from a plane into precise, georeferenced 2D maps and 3D models such as orthomosaics, point clouds, and textured 3D models. It is a flexible, fast, and user-friendly package that supports RGB, thermal, and multispectral images from a wide range of cameras and is one of the leading photogrammetry tools for professional drone mapping.

First, a project was created and all 1830 images were imported into Pix4Dmapper. The geotag information was loaded automatically and the correct coordinate system was also set automatically; we used the WGS 84 coordinate system. Among the available processing templates, the 3D Maps template was chosen. Once the project was started, the images were positioned according to their geolocation. Processing in Pix4Dmapper occurs in three main steps: (1) initial processing, (2) point cloud and mesh generation, and (3) DSM and orthomosaic generation. Processing was carried out both with and without ground control points, using thirty different combinations of ground control point distribution, and all 1830 images were calibrated. An automatic tie point is a 3D point, together with its corresponding 2D keypoints, that is automatically detected in the images and used to compute its 3D position. A manual tie point is a point without 3D coordinates that is marked by the user in the images; it is used to improve the reconstruction accuracy.
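The reprojection error that Pix4D reports for tie points can be illustrated with a minimal pinhole-camera sketch. The camera pose, 3D point, and observed keypoint below are hypothetical, and the model omits the lens-distortion terms used by Pix4D; only the focal length and principal point are taken from the optimized values in the quality report.

import numpy as np

def reproject(point_3d, R, t, f, cx, cy):
    """Project a 3D point with a simple pinhole model (camera frame = R @ X + t)."""
    Xc = R @ point_3d + t                # world -> camera coordinates
    u = f * Xc[0] / Xc[2] + cx           # perspective division + principal point
    v = f * Xc[1] / Xc[2] + cy
    return np.array([u, v])

# Hypothetical camera and tie point
R = np.eye(3)                            # nadir-looking camera
t = np.array([0.0, 0.0, 80.0])           # about 80 m above the point, as in our flights
f, cx, cy = 4008.9, 2964.3, 1996.1       # optimized values from the quality report
X = np.array([0.40, -0.25, 0.0])         # triangulated 3D point (metres)

observed_keypoint = np.array([2984.2, 1983.7])      # where the keypoint was detected
predicted = reproject(X, R, t, f, cx, cy)
reprojection_error = np.linalg.norm(predicted - observed_keypoint)
print(f"Reprojection error: {reprojection_error:.3f} px")  # small values (<1 px) indicate a good block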


4.7.2 Agisoft Metashape

The first software package tested was PhotoScan (now Agisoft Metashape), developed by Agisoft LLC (St. Petersburg, Russia) [43]. After the photos have been loaded into the application, they must be aligned, which entails computing an approximation of the camera position and orientation for each image and extracting tie points in the form of a sparse point cloud. Since there was no a priori knowledge of the positions of the pictures, the "high accuracy" setting was selected for the photo alignment, in which tie points are derived from the full-resolution photographs, and the "pair preselection" option was set to "generic". If camera positions had been available, for example from an on-board GNSS receiver, they could have been used at this point; however, our dataset lacked this kind of data, so the first alignment was completed using the SfM algorithms alone. Then, GCPs were carefully measured on the images, marking only clearly visible points in each image. This means that markers that were severely distorted because of their proximity to the image borders, or that were partially obscured by obstructions, were discarded.

The third and most crucial phase concerns the bundle block adjustment (BBA) settings for camera self-calibration and the weighting scheme. Regarding the former, after some testing on Configuration 1 it was decided to use the parameter set suggested by the software as a default. It is made up of the focal length (f), the principal point position corrections (cx and cy), the first three radial distortion coefficients (K1, K2, and K3), and the first two tangential distortion coefficients (P1 and P2). We did not include any further terms in the camera model because they did not seem to help. It is worth noting that PhotoScan is very flexible in this respect, because it enables the selection of the individual parameters that need to be adjusted. The weighting scheme refers to the accuracy assigned to the observations included in the adjustment, which in our case fall into two categories: the object coordinates of the GCPs, and the image coordinates of the GCPs and tie points. The values obtained from the topographic adjustment were used to set the accuracy of the ground coordinates of the markers at 0.5 cm for the horizontal component and 1 cm for the vertical component. The accuracy of the automatically measured image coordinates of the tie points was set at 1.5 pixels, which is roughly three times the overall reprojection error, while the accuracy of the manually measured image coordinates of the markers was set at one pixel. Following the software developers' recommendations for the case of blurred photos, we adopted the configuration that produced the best residuals at the check points.
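The accuracy and self-calibration settings described above can be written down in the Agisoft Python scripting interface roughly as follows. This is an illustrative sketch only, not the exact workflow used in the thesis: argument names differ between PhotoScan/Metashape versions (the 1.6+ style is assumed here), and the image list and project path are placeholders.

import Metashape  # Agisoft Metashape Professional scripting API (1.6+ style assumed)

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["/path/to/images/IMG_0001.JPG"])  # placeholder image list

# "High accuracy" alignment with generic pair preselection, as described above
chunk.matchPhotos(downscale=1, generic_preselection=True, reference_preselection=False)
chunk.alignCameras()

# Weighting scheme used in the bundle block adjustment:
chunk.marker_location_accuracy = Metashape.Vector([0.005, 0.005, 0.01])  # GCP ground coords: 0.5 cm XY, 1 cm Z
chunk.marker_projection_accuracy = 1.0   # manually measured marker image coordinates (pixels)
chunk.tiepoint_accuracy = 1.5            # automatically measured tie points (pixels)

# Self-calibration restricted to f, cx, cy, K1-K3, P1, P2
chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                      fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=False,
                      fit_p1=True, fit_p2=True, fit_b1=False, fit_b2=False)

doc.save("/path/to/project.psx")  # placeholder output path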

4.8 Check Points


Some of the surveyed ground points were used as check points, because check points assess the accuracy of the result. Check point marks are used to estimate their 3D positions, as well as possible marking errors, and this can increase the relative precision in the region of the check points. The discrepancy between the surveyed and the computed positions of the check points is reported in the quality report and provides an estimate of the absolute accuracy of the model in that area.

4.9 Ground Control Points

The model is georeferenced using ground control points. A minimum of three GCPs is required to scale, rotate, and locate the model, but a total of 14 ground control points were used in this project.

4.10 Accuracy Assessment

The accuracy of all photogrammetric projects was assessed using the surveyed points that were not used for georeferencing, applying the traditional formulation of the root mean square error (RMSE). To that end, the points were identified in the orthoimages and their coordinates were compared with the surveyed GPS coordinates, which yielded RMSEx, RMSEy and RMSEz for the individual components and the total RMSE describing horizontal, vertical and overall accuracy.

\[
\mathrm{RMSE}_x = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(X_i - X_{GPS}\right)^2}
\]

\[
\mathrm{RMSE}_y = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Y_i - Y_{GPS}\right)^2}
\]

\[
\mathrm{RMSE}_z = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Z_i - Z_{GPS}\right)^2}
\]

\[
\mathrm{RMSE} = \sqrt{\mathrm{RMSE}_x^2 + \mathrm{RMSE}_y^2 + \mathrm{RMSE}_z^2}
\]

RMSEx = root mean square error in the X direction

RMSEy = root mean square error in the Y direction

RMSEz = root mean square error in the Z direction

RMSE = total root mean square error

n = number of GCPs used in the project

Xi, Yi and Zi are the X, Y and Z coordinates, respectively, measured in the orthophoto for the i-th GCP, and XGPS, YGPS and ZGPS are the corresponding DGPS-surveyed coordinates.
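A minimal Python sketch of this computation is given below; the coordinate arrays are hypothetical examples, not the project's actual check-point data.

import numpy as np

def rmse_components(measured_xyz, gps_xyz):
    """Return (RMSEx, RMSEy, RMSEz, total RMSE) for n compared points.

    measured_xyz: n x 3 array of coordinates read from the orthophoto/DSM
    gps_xyz:      n x 3 array of the corresponding DGPS-surveyed coordinates
    """
    diff = np.asarray(measured_xyz) - np.asarray(gps_xyz)
    rmse_xyz = np.sqrt(np.mean(diff ** 2, axis=0))   # per-axis RMSE
    rmse_total = np.sqrt(np.sum(rmse_xyz ** 2))      # combined RMSE
    return (*rmse_xyz, rmse_total)

# Hypothetical example with three check points (metres, UTM 44N)
measured = [[445120.031, 2800310.522, 81.240],
            [445410.118, 2800105.934, 80.915],
            [445250.447, 2800550.286, 82.070]]
surveyed = [[445120.010, 2800310.540, 81.215],
            [445410.140, 2800105.910, 80.950],
            [445250.430, 2800550.300, 82.040]]

rx, ry, rz, r = rmse_components(measured, surveyed)
print(f"RMSEx={rx:.3f} m, RMSEy={ry:.3f} m, RMSEz={rz:.3f} m, RMSE={r:.3f} m")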

CHAPTER 5

Results and Discussion

5.1 Comparison of results with different distributions and numbers of ground control points (Pix4D Mapper)

1. First, the images were processed using all surveyed points as ground control points only; the root mean square errors in X, Y and Z were 0.03368 m, 0.037206 m and 0.023157 m respectively.

2. When 6 ground control points and 3 check points were used, the total root mean square error was 0.034 m, with errors in X, Y and Z of 0.0345 m, 0.03758 m and 0.02023 m.

3. When 7 ground control points and 3 check points were used, the total root mean square error obtained was 0.032 m, with errors in X, Y and Z of 0.46101 m, 0.6505 m and 1.883 m.

4. With 8 ground control points and 3 check points, the total error was 0.033 m, with errors in X, Y and Z of 0.036004 m, 0.040290 m and 0.023495 m.

5. With 9 ground control points and 3 check points, the total error was 0.032 m, with corresponding errors in X, Y and Z of 0.033274 m, 0.040736 m and 0.023202 m.

6. With 10 ground control points and 3 check points, the total error was 0.030 m, with errors in X, Y and Z of 0.031971 m, 0.038535 m and 0.021993 m.

7. With 11 ground control points and 3 check points, the total error was 0.031 m, with errors in X, Y and Z of 0.031589 m, 0.037456 m and 0.02452 m.

Out of the 20 different runs processed with ground control points and check points, the best quality report was generated when 10 ground control points and 3 check points were used.
5.2 Comparison of results with different distributions and numbers of ground control points (Agisoft Metashape)

1. First, the images were processed using all surveyed points as ground control points only; the root mean square errors in X, Y and Z were 0.0390 m, 0.06811 m and 0.04951 m respectively.

2. When 6 ground control points and 3 check points were used, the total root mean square error was 0.0676 m, with errors in X, Y and Z of 0.0182 m, 0.0596 m and 0.0264 m.

3. When 7 ground control points and 3 check points were used, the total root mean square error obtained was 0.0632 m, with errors in X, Y and Z of 0.0243 m, 0.0596 m and 0.0267 m.

4. With 8 ground control points and 3 check points, the total error was 0.0596 m, with errors in X, Y and Z of 0.0253 m, 0.0476 m and 0.0252 m.

5. With 9 ground control points and 3 check points, the total error was 0.0563 m, with corresponding errors in X, Y and Z of 0.0239 m, 0.0449 m and 0.0238 m.

6. With 10 ground control points and 3 check points, the total error was 0.0536 m, with errors in X, Y and Z of 0.0235 m, 0.0425 m and 0.02275 m.

7. With 11 ground control points and 3 check points, the total error was 0.0619 m, with errors in X, Y and Z of 0.0239 m, 0.0515 m and 0.02453 m.

Out of the 20 different runs processed with ground control points and check points, the best quality report was again generated when 10 ground control points and 3 check points were used.

Table 7: Variation in the RMSE with different numbers of GCPs in the reference project using Pix4D Mapper

Number of GCPs    RMSE (m)
6                 0.034
7                 0.032
8                 0.033
9                 0.032
10                0.030
11                0.031

Chart: RMSE (m) versus number of GCPs for the Pix4D Mapper reference project.
Table 8: Variation in the RMSE with different numbers of GCPs in the reference project using Agisoft Metashape

Number of GCPs    RMSE (m)
6                 0.06765
7                 0.06315
8                 0.05955
9                 0.05626
10                0.05363
11                0.06192

Chart: RMSE (m) versus number of GCPs for the Agisoft Metashape reference project.
5.3 Quality Report on Pix4D Mapper Software
Quality Report
Generated with Pix4Dmapper version 4.4.10 Preview


Summary

Project pk
Processed 2024-06-27 13:36:24
Camera Model Name(s) ILCE-5100_E16mmF2.8_16.0_6000x4000 (RGB)
Average Ground Sampling Distance (GSD) 2.15 cm / 0.85 in
Area Covered 1.239 km2 / 123.9211 ha / 0.48 sq. mi. / 306.3743 acres

Quality Check

Images median of 86305 keypoints per image

Dataset 1830 out of 1830 images calibrated (100%), all images enabled

Camera Optimization 2.56% relative difference between initial and optimized internal camera parameters

Matching median of 27456.3 matches per calibrated image

Georeferencing yes, 10 GCPs (10 3D), mean RMS error = 0.03 m

Preview

Figure 1: Orthomosaic and the corresponding sparse Digital Surface Model (DSM) before
densification.

Calibration Details

Number of Calibrated Images 1830 out of 1830


Number of Geolocated Images 1830 out of 1830

Initial Image Positions

Figure 2: Top view of the initial image position. The green line follows the
position of the images in time starting from the large blue dot.

Computed Image/GCPs/Manual Tie Points Positions

Uncertainty ellipses 1000x magnified

Figure 3: Offset between initial (blue dots) and computed (green dots) image positions as well as the offset
between the GCPs initial positions (blue crosses) and their computed positions (green crosses) in the top-view
(XY plane), front-view (XZ plane), and side-view (YZ plane). Dark green ellipses indicate the absolute position
uncertainty of the bundle block adjustment result.

Absolute camera position and orientation uncertainties

X[m] Y[m] Z [m] Omega [degree] Phi [degree] Kappa [degree]


Mean 0.008 0.007 0.008 0.004 0.005 0.002
Sigma 0.002 0.001 0.001 0.001 0.001 0.000

Overlap

Number of overlapping images: 1 2 3 4 5+

Figure 4: Number of overlapping images computed for each pixel of the orthomosaic.
Red and yellow areas indicate low overlap for which poor results may be generated. Green areas indicate an
overlap of over 5 images for every pixel. Good quality results will be generated as long as the number of
keypoint matches is also sufficient for these areas (see Figure 5 for keypoint matches).

Bundle Block Adjustment Details
Number of 2D Keypoint Observations for Bundle Block Adjustment 48812001
Number of 3D Points for Bundle Block Adjustment 14940054
Mean Reprojection Error [pixels] 0.133

Internal Camera Parameters

ILCE-5100_E16mmF2.8_16.0_6000x4000 (RGB). Sensor Dimensions: 23.333 [mm] x 15.556 [mm]

EXIF ID: ILCE-5100_E16mmF2.8_16.0_6000x4000

Parameter               Initial Value                     Optimized Value                   Uncertainty (Sigma)
Focal Length            4114.286 [pixel] / 16.000 [mm]    4008.897 [pixel] / 15.590 [mm]    0.273 [pixel] / 0.001 [mm]
Principal Point x       3000.000 [pixel] / 11.667 [mm]    2964.294 [pixel] / 11.528 [mm]    0.038 [pixel] / 0.000 [mm]
Principal Point y       2000.000 [pixel] / 7.778 [mm]     1996.078 [pixel] / 7.763 [mm]     0.061 [pixel] / 0.000 [mm]
R1                      0.000                             -0.062                            0.000
R2                      0.000                             0.082                             0.000
R3                      0.000                             0.007                             0.000
T1                      0.000                             -0.001                            0.000
T2                      0.000                             -0.002                            0.000

2D Keypoints Table

Number of 2D Keypoints per Image Number of Matched 2D Keypoints per Image


Median 86305 27456
Min 51087 1866
Max 98190 51338
Mean 85616 26673

3D Points from 2D Keypoint Matches

Number of 3D Points Observed


In 2 Images 8495322
In 3 Images 2802657
In 4 Images 1308328
In 5 Images 724498
In 6 Images 430963
In 7 Images 289835
In 8 Images 207308
In 9 Images 154309
In 10 Images 117085
In 11 Images 89232
In 12 Images 71395
In 13 Images 58183
In 14 Images 48434
In 15 Images 39269
In 16 Images 29540
In 17 Images 21857
In 18 Images 16874
In 19 Images 13018
In 20 Images 9227
In 21 Images 5607
In 22 Images 2972
In 23 Images 1720
In 24 Images 1097
In 25 Images 646
In 26 Images 363
In 27 Images 216
In 28 Images 60
In 29 Images 29
In 30 Images 9
In 31 Images 1

2D Keypoint Matches

Colour scale for Figure 5: number of matches, ranging from 25 to 2000.

Figure 5: Computed image positions with links between matched images. The darkness of the links indicates the
number of matched 2D keypoints between the images. Bright links indicate weak links and require manual tie
points or more images.

Geolocation Details

Ground Control Points

GCP Name   Accuracy XY/Z [m]   Error X [m]   Error Y [m]   Error Z [m]   Projection Error [pixel]   Verified/Marked
GCP2 (3D) 0.020/ 0.020 -0.003 -0.011 0.004 0.475 11 / 11
GCP3 (3D) 0.020/ 0.020 0.033 -0.010 0.002 2.085 14 / 14
GCP4 (3D) 0.020/ 0.020 0.019 -0.002 0.025 0.403 15 / 15
GCP5 (3D) 0.020/ 0.020 0.017 -0.033 -0.050 0.354 11 / 11
GCP6 (3D) 0.020/ 0.020 0.026 0.032 0.004 0.391 10 / 10
GCP7 (3D) 0.020/ 0.020 0.027 0.072 0.023 0.362 12 / 12
GCP10 (3D) 0.020/ 0.020 -0.009 0.026 -0.021 0.579 10 / 10
GCP11 (3D) 0.020/ 0.020 -0.074 0.010 0.026 0.499 10 / 10
GCP13 (3D) 0.020/ 0.020 -0.037 -0.025 -0.007 0.681 11 / 11
GCP14 (3D) 0.020/ 0.020 0.009 -0.078 0.000 0.381 11 / 11
Mean [m] 0.000707 -0.001908 0.000612
Sigma [m] 0.031963 0.038487 0.021984
RMS Error [m] 0.031971 0.038535 0.021993
0 out of 3 check points have been labeled as inaccurate.
Check Point Name   Accuracy XY/Z [m]   Error X [m]   Error Y [m]   Error Z [m]   Projection Error [pixel]   Verified/Marked
GCP8 0.0548 0.0009 0.1486 0.5781 15 / 15
GCP9 -0.0477 0.0106 0.1201 0.3110 11 / 11
GCP15 -0.0019 -0.1704 -0.1038 0.5890 10 / 10
Mean [m] 0.001739 -0.052971 0.054982
Sigma [m] 0.041919 0.083115 0.112870
RMS Error [m] 0.041955 0.098560 0.125549

Localisation accuracy per GCP and mean errors in the three coordinate directions. The last column counts the
number of calibrated images where the GCP has been automatically verified vs. manually marked.

Absolute Geolocation Variance

Min Error [m] Max Error [m] Geolocation Error X[%] Geolocation Error Y[%] Geolocation Error Z [%]
- -15.00 0.11 0.00 0.00
-15.00 -12.00 0.00 0.00 0.00
-12.00 -9.00 0.33 0.11 0.00
-9.00 -6.00 10.39 1.15 0.16
-6.00 -3.00 17.17 1.09 3.34
-3.00 0.00 19.08 52.16 77.04
0.00 3.00 24.22 43.90 8.86
3.00 6.00 20.50 1.59 1.53
6.00 9.00 8.15 0.00 1.42
9.00 12.00 0.05 0.00 7.65
12.00 15.00 0.00 0.00 0.00
15.00 - 0.00 0.00 0.00
Mean [m] -0.754408 2.296481 -9.631252
Sigma [m] 4.371162 1.537286 3.444296
RMS Error [m] 4.435785 2.763526 10.228597

Min Error and Max Error represent geolocation error intervals between -1.5 and 1.5 times the maximum accuracy of
all the images. Columns X, Y, Z show the percentage of images with geolocation errors within the predefined error
intervals. The geolocation error is the difference between the initial and computed image positions. Note that the
image geolocation errors do not correspond to the accuracy of the observed 3D points.

Geolocation Bias X Y Z
Translation [m] -0.754421 2.296494 -9.631262

Bias between image initial and computed geolocation given in output coordinate system.

Relative Geolocation Variance

Relative Geolocation Error Images X[%] Images Y[%] Images Z [%]


[-1.00, 1.00] 68.23 98.47 92.78
[-2.00, 2.00] 99.84 100.00 100.00
[-3.00, 3.00] 99.89 100.00 100.00
Mean of Geolocation Accuracy [m] 5.000000 5.000000 10.000000
Sigma of Geolocation Accuracy [m] 0.000000 0.000000 0.000000

Images X, Y, Z represent the percentage of images with a relative geolocation error in X, Y, Z.

Initial Processing Details

System Information

Hardware: CPU: Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz; RAM: 128GB; GPU: NVIDIA Quadro P4000 (Driver: 31.0.15.5161)
Operating System: Windows 10 Pro for Workstations, 64-bit

Coordinate Systems

Image Coordinate System WGS 84 (EGM96 Geoid)


Ground Control Point (GCP) Coordinate System WGS 84 / UTMzone 44N (EGM96 Geoid)
Output Coordinate System WGS 84 / UTMzone 44N (EGM96 Geoid)

Processing Options

Detected Template 3D Maps


Keypoints Image Scale Full, Image Scale: 1
Advanced: Matching Image Pairs Aerial Grid or Corridor
Advanced: Matching Strategy Use Geometrically Verified Matching: no
Advanced: Keypoint Extraction Targeted Number of Keypoints: Automatic
Advanced: Calibration   Calibration Method: Standard; Internal Parameters Optimization: All; External Parameters Optimization: All; Rematch: Auto, no

5.5 Quality Report on Agisoft Metashape software

64
65
66
67
68
69
70
71
72
73
Chapter 6
Conclusion

From the results generated above, we can conclude that as the number of ground control points increases, the RMSE values generally decrease, and that Pix4D Mapper showed lower errors than Agisoft Metashape. The GCPs were distributed across the study area and marked with clearly visible black-and-white targets so that they could be easily identified by the user during processing. For UAV photogrammetry, the accuracy of the obtained spatial data can be greatly affected by many variables, such as the georeferencing method, the number of GCPs, and the type of software used. Different GCP distributions were tested using two processing software packages (Pix4Dmapper and Agisoft Metashape), and the vertical and horizontal RMSEs were computed for each GCP distribution. Both packages use structure-from-motion (SfM) algorithms, which differ from the traditional photogrammetric solution that requires camera calibration information. The SfM photogrammetric processes were conducted on the same set of drone-acquired images and manually tagged GCPs.

REFERENCES

Ajibola, Isola Ismaila, Shattri Mansor, Biswajeet Pradhan, and Helmi Zulhaidi Mohd Shafri.
"Fusion of UAV-based DEMs for vertical component accuracy improvement." Measurement 147
(2019): 106795.

Martínez-Carricondo, Patricio, Francisco Agüera-Vega, Fernando Carvajal-Ramírez, Francisco-


Javier Mesas-Carrascosa, Alfonso García-Ferrer, and Fernando-Juan Pérez-Porras. "Assessment of
UAV-photogrammetric mapping accuracy based on variation of ground control
points." International journal of applied earth observation and geoinformation 72 (2018): 1-10.

Ajayi, Oluibukun Gbenga, Mark Palmer, and Akporode Anthony Salubi. "Modelling farmland
topography for suitable site selection of dam construction using unmanned aerial vehicle (UAV)
photogrammetry." Remote Sensing Applications: Society and Environment 11 (2018): 220-230.

Sanz-Ablanedo, Enoc, Jim H. Chandler, José Ramón Rodríguez-Pérez, and Celestino Ordóñez.
"Accuracy of unmanned aerial vehicle (UAV) and SfM photogrammetry survey as a function of the
number and location of ground control points used." Remote Sensing 10, no. 10 (2018): 1606.

Rusnák, Miloš, Ján Sládek, Anna Kidová, and Milan Lehotský. "Template for high-resolution river
landscape mapping using UAV technology." Measurement 115 (2018): 139-151.

Bauman, T., O. Almog, and S. Dalyot. "Towards the automatic detection of geospatial changes based on digital elevation models produced by UAV imagery." International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences (2019).

Hastaoğlu, Kemal Özgür, Yavuz Gül, Fatih Poyraz, and Burak Can Kara. "Monitoring 3D areal
displacements by a new methodology and software using UAV photogrammetry." International
Journal of Applied Earth Observation and Geoinformation 83 (2019): 101916.

Bhatta, Basudeb. "Research Framework." In Research Methods in Remote Sensing, pp. 21-41.
Springer, Dordrecht, 2013.

Barati, Susan, Behzad Rayegani, Mehdi Saati, Alireza Sharifi, and Masoud Nasri. "Comparison the
accuracies of different spectral indices for estimation of vegetation cover fraction in sparse
vegetated areas." The Egyptian Journal of Remote Sensing and Space Science 14, no. 1 (2011): 49-
56.

Torres-Sánchez, Jorge, Ana I. de Castro, Jose M. Pena, Francisco M. Jiménez-Brenes, Octavio


Arquero, María Lovera, and Francisca López-Granados. "Mapping the 3D structure of almond trees
using UAV acquired photogrammetric point clouds and object-based image analysis." Biosystems
engineering 176 (2018): 172-184.
