GNTP
GIS is a set of tools for collecting, storing, retrieving at will, transforming, and displaying
spatial data from the real world for a particular set of purposes.
GIS is a computerized system that facilitates the phases of data entry, data analysis and data
presentation especially in cases when we are dealing with georeferenced data.
i. Hardware: It consists of the computer system on which the GIS software will run.
The choice of hardware system ranges from Personal Computers to multi user Super
Computers. In either case, the computer should have an efficient processor to run
the software and sufficient memory to store the data.
ii. Software: GIS software provides the functions and tools needed to store, analyze,
and display geographic information. The software available can be said to be
application specific. Most GIS software generally fits these requirements, but the
on-screen appearance (user interface) may differ.
iii. Data: Geographic data and related tabular data are the backbone of GIS. It can be
collected in-house or purchased from a commercial data provider. The digital map
forms the basic data input for GIS. Tabular data related to the map objects can also
be attached to the digital data. A GIS will integrate spatial data with other data
resources and can even use a DBMS.
iv. Method: A successful GIS operates according to a well-designed plan, that is,
the models and operating practices unique to each task. There are various techniques
used for map creation and its further use in any project. Map creation can be
automated (raster-to-vector conversion) or done by manually vectorizing scanned
images. The source of these digital maps can be either maps prepared by a survey
agency or satellite imagery.
v. People: GIS users range from technical specialists who design and maintain the
system to those who use it to help them perform their everyday work. GIS operators
solve real time spatial problems. They plan, implement and operate to draw
conclusions for decision making.
vi. Network: With rapid development of IT, today the most fundamental of these is
probably the network, without which no rapid communication or sharing of digital
information could occur. GIS today relies heavily on the Internet, acquiring and
sharing large geographic data sets.
• Attribute Queries – Retrieve features based on attribute values (e.g., "Find all
agricultural fields where the soil type is clay loam").
• Spatial Queries – Involve spatial relationships like adjacency, containment, and
proximity; often use geometric operations such as intersect, contains, touches, and buffer.
• Selection Queries – Retrieve features based on conditions (e.g., "Select all forests in
Bihar").
• Proximity Queries – Identify features within a certain distance (e.g., "Find hospitals
within 10 km of an earthquake epicenter").
• Overlay Queries – Compare layers to extract insights (e.g., "Find agricultural land
under flood risk using land use and flood hazard maps").
• Buffer Queries – Create buffer zones around features (e.g., "Identify settlements
within 2 km of a major road").
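As a minimal sketch of a proximity query, the plain-Python snippet below (with hypothetical hospital coordinates and an assumed epicenter, distances in km) selects all features within a given radius:

```python
import math

# Hypothetical point features: hospitals (x, y in km) and an epicenter.
hospitals = {"H1": (2.0, 3.0), "H2": (8.0, 9.0), "H3": (15.0, 1.0)}
epicenter = (4.0, 4.0)

def proximity_query(features, origin, radius_km):
    """Return the names of features lying within radius_km of origin."""
    return [name for name, xy in features.items()
            if math.dist(origin, xy) <= radius_km]

# "Find hospitals within 10 km of an earthquake epicenter"
print(proximity_query(hospitals, epicenter, 10.0))  # ['H1', 'H2']
```

A real GIS would perform the same test with geodesic distances and spatial indexes, but the Euclidean check captures the logic of the query.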
Lesson-2
Two types of data are used in a GIS platform: spatial data and non-spatial data.
Spatial data have coordinates (latitude and longitude) that give the position of a
feature. They represent the locations of geographical entities as well as their spatial
dimensions, which are represented with the help of points, lines and polygons (areas).
Non-spatial data represent a set of information that is systematically organised and
linked to each spatial feature. These are also known as attribute data. For instance,
if the spatial data contain a polygon representing a state, the attribute data hold
information about it, such as administrative divisions, area and population. Non-spatial
data can be of two types: statistical, which have numerical values, and descriptive,
which are stored in the form of words or text.
Spatial data are further divided into two types: raster data and vector data. Raster
data consist of individual pixels, each of which has a spatial location referenced to
the real Earth. When the data are ortho-rectified or georeferenced, every pixel carries
its own locational information. The attribute is then represented as a single value per
pixel or cell, called the DN (Digital Number) value. In vector data, on the other hand,
spatial information is recorded as x, y coordinates: point features as a single x, y
coordinate pair, and line and polygon features as a series of x, y coordinates. Vector
attributes are recorded against feature ID numbers assigned by the system itself.
2.2 The Spatial data structure:
The two types of spatial data, raster and vector, each have their own importance and
characteristics regarding their application in GIS. The structures of the two data sets
differ from each other, but both preserve the same x and y coordinate values of a
particular feature referenced to the real Earth surface.
2.3.1 Regarding satellite images, the basic characteristics of the raster data structure are:
The area is covered by a grid of (usually) equal-sized cells, extending in rows and columns.
Every pixel is assumed to have only one DN value. This becomes inaccurate when, for
example, the boundary between two different soil types crosses a pixel; in such a case
the pixel is assigned the value of the largest fraction of the cell.
The x and y values of a pixel represent the pixel size, which is its spatial resolution.
Therefore, an area is calculated from pixels as Area = Count × (x × y).
Every grid has its origin in the upper left, but coordinates are computed at the center
of each cell.
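The area formula above can be illustrated with a short sketch; the pixel size and pixel count below are illustrative values, not taken from the text:

```python
# Area from a raster: Area = Count × (x × y), where x and y are the pixel
# dimensions (the spatial resolution). Values below are illustrative.
pixel_x = 30.0   # pixel width in metres (a Landsat-class resolution)
pixel_y = 30.0   # pixel height in metres
count = 1500     # number of pixels classified as the feature of interest

area_m2 = count * (pixel_x * pixel_y)
area_ha = area_m2 / 10_000  # 1 hectare = 10,000 m²
print(area_m2, area_ha)     # 1350000.0 135.0
```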
Raster images are normally acquired by raster imaging devices where the spatial resolution is
determined by the resolution of the acquisition device and the quality of the original data
source. A raster image must store a pixel for every spatial location, so its size is
strictly tied to the area it represents. Doubling the spatial resolution increases the
total size of a two-dimensional raster image four times, because the number of pixels
doubles in both the X and Y dimensions. The same holds when a larger area is covered
at the same spatial resolution.
Point feature:
i. It has 0 dimensions (can represent neither length nor width)
ii. Represented by a single x, y coordinate pair
iii. It has zero area
iv. Mostly used to denote a single discrete feature
Line feature:
i. It has 1 dimension (can represent length)
ii. Represented by connecting two or more pairs of x, y coordinates
iii. It has a length value
iv. Commonly used to demarcate roads, rivers, streams and so on
Polygon feature:
i. It is 2-dimensional (can represent length as well as width)
ii. Represented by connecting four or more pairs of x, y coordinates
iii. The starting point must be the same as the ending point
iv. Encloses an area
v. Commonly used to demarcate features having a closed boundary
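The line and polygon properties above can be sketched in a few lines of Python; the coordinates are hypothetical, and the polygon area uses the standard shoelace formula:

```python
import math

def line_length(coords):
    """Length of a line feature recorded as a series of x, y pairs."""
    return sum(math.dist(a, b) for a, b in zip(coords, coords[1:]))

def polygon_area(coords):
    """Shoelace formula; the coordinate list must close on itself
    (first pair equal to last pair), as vector polygons do."""
    assert coords[0] == coords[-1], "starting point must equal ending point"
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(coords, coords[1:]))
    return abs(s) / 2

print(line_length([(0, 0), (3, 4)]))                           # 5.0
print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3), (0, 0)]))  # 6.0 units? no: 12.0
```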
Cost effectiveness: Though the establishment cost is high, the maintenance cost is
very low compared with other means
Timeliness: Real-time information collection and processing is possible in remote
sensing; this is faster than any other system
Unbiased information generation: As the information is collected and processed
mechanically, human bias is avoided
Monitoring inaccessible areas: Difficult or inaccessible areas such as deep forests,
oceans and mountain peaks can be monitored by remote sensing techniques
Sensing typical features: Human vision (like our other sensing organs) is sensitive
only to visible radiation, but much information can be generated from infrared,
microwave and radio-wave signals that is not available to visual observation
Image data are stored in a regular grid format (rows and columns).
The single elements are called pixels (picture elements). For each pixel, the measurements
are stored as Digital Number values or DN-values.
Spectral resolution: refers to the part of the Electro Magnetic spectrum measured – How
many bands are observed
Radiometric resolution: Refers to the smallest differences in energy that can be
distinguished, i.e., the number of grey levels (bits) recorded per measurement.
Spatial resolution: Refers to the smallest unit-area measured, it indicates the minimum size
of objects that can be detected.
Temporal resolution (Revisit time): Refers to the time between two successive image
acquisitions over the same location on Earth.
Gamma-ray spectrometer
Aerial camera
• Nowadays, analogue photos are scanned and converted to digital form.
Multispectral scanner
Thermal scanner
ACTIVE SENSORS
Laser scanner
• Use laser beam (IR light) to measure distance from the aircraft to ground points
• Used to calculate terrain elevation; applied for high-resolution Digital Terrain
Models (DTMs) in topographic mapping, 3-D models of city buildings, trees,
etc.
Radar altimeter
• Determine height with a precision of 2–4 cm. Measures the topographic profile
parallel to the satellite orbit. (single lines of measurements)
• Useful for measuring relatively smooth surfaces such as oceans and for ‘small scale’
mapping of continental terrain models.
Imaging radar
• Combining two radar images acquired at different moments can be used to precisely
assess changes in height or vertical deformations (SAR Interferometry)
Satellite Orbit
• Sun-synchronous orbit: In sun-synchronous orbit the satellite always passes overhead
at the same local solar time. Most sun-synchronous orbits cross the equator at mid-
morning (around 10:30 h). Sun-synchronous orbits allow a satellite to record images at
two fixed times during one 24-hour period: one during the day and one at night.
Examples - Landsat, SPOT and IRS.
• Geostationary orbit. This refers to orbits in which the satellite is placed above the
equator (inclination angle is 0) at a distance of some 36,000 km. At this distance, the
period of the satellite equals the rotation period of the Earth. The result is that the satellite
is at a fixed position relative to the Earth. Geostationary orbits are used for
meteorological and telecommunication satellites. Example – INSAT, Kalpana,
Meteosat
• Shuttle orbit: Inclination angle 30–60 degrees. The satellite is typically placed in
orbit at 200–300 km altitude. Used for specific research purposes. Example – Skylab
Lesson-4
For the selection of the appropriate data type it is necessary to fully understand the information
requirements for a specific application. The Spatio-temporal characteristics of radiation as
selection criteria for remote sensing studies are described below-
Additional Considerations:
1. Spatial Resolution:
o Higher resolution is important for detailed studies; lower resolution suffices
for broader regional assessments.
2. Radiometric Resolution:
o The ability to distinguish slight differences in reflectance is critical for
detecting subtle changes.
3. Data Continuity:
o For change detection, consistent data acquisition schedules (e.g., weekly,
monthly) help monitor trends effectively.
4. Surface Conditions:
o Moisture content, snow cover, or surface roughness may influence image
interpretation and must be factored in during selection.
Colour composites:
• True (natural) colour composite
• False colour Red-Green-Blue composite (FCC)
Contrast Enhancement
Stretching:
A technique that adjusts pixel intensity values to improve image contrast by spreading pixel
brightness over a wider range. Examples include linear stretching, histogram equalization,
and logarithmic stretching.
Filtering:
A method that modifies pixel values based on their neighborhood to enhance or suppress
specific features. It includes techniques like low-pass (smoothing), high-pass (edge
enhancement), and noise reduction filters such as median filtering.
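As a minimal illustration of the noise-reduction filtering mentioned above, the sketch below applies a 3×3 median filter to a small hypothetical grid of DN values (border pixels are left unchanged for simplicity):

```python
from statistics import median

def median_filter(img):
    """3x3 median filter (noise reduction) over interior pixels;
    border pixels are copied unchanged. img is a list of rows."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = median(window)
    return out

# A single noisy spike (99) in otherwise uniform data is suppressed:
noisy = [[10, 10, 10],
         [10, 99, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # 10
```

Low-pass and high-pass filters work the same way but replace the median with a weighted average of the window.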
Image Interpretation
Image interpretation is the process of examining remote sensing images to identify, analyze,
and extract meaningful information about surface features, patterns, or conditions on Earth.
This involves recognizing objects and phenomena based on their shapes, sizes, tones, textures,
shadows, patterns, and associations. There are two types: visual image interpretation
and digital image interpretation.
Interpretation elements
A. Tone/Hue
• Tone/hue is directly related to the amount of light (energy) reflected from the surface.
• Different types of rock, soil or vegetation most likely have different tones.
• Increasing moisture content gives darker grey tones
B. Shape
• Characterizes many terrain objects visible in the image, e.g., in geomorphological mapping.
• Characterizes man-made objects (built-up areas, roads and railroads, agricultural fields,
fishery ponds, etc.).
C. Size
• Farm size and water-body size are important for agricultural studies
• Width determines the road type, e.g., primary road, secondary road, et cetera.
D. Pattern
• Spatial arrangement of objects (such as concentric, radial, checkerboard, etc.)
• Used for landform, land use and erosion studies
E. Texture
• Frequency of tonal change (e.g., as coarse/fine, smooth/rough, even/uneven, mottled,
speckled, granular, linear, woolly)
• Often be related to terrain roughness
F. Site
• Topographic or geographic location (where the feature is located)
• ‘Backswamps’ in a flood plain; mangroves in a coastal zone
G. Association
• Combination of objects makes it possible to infer about its meaning or function.
• e.g., an industry with a transport system, or salinity features in a coastal zone
The Global Positioning System (GPS) is a satellite-based navigation system that provides
accurate positioning, navigation, and timing (PNT) services to users worldwide. Developed by
the United States Department of Defense (DoD), GPS has become an essential tool for a wide
range of applications, including navigation, mapping, surveying, and scientific research. GPS
operates through a network of satellites, ground control stations, and user receivers, ensuring
global coverage and high precision in determining location and time.
Components of GPS
GPS consists of three main segments: Space Segment, Control Segment and User Segment
A. Space Segment
B. Control Segment
The control segment is responsible for monitoring and maintaining the GPS system. It consists
of ground control stations that track and update the satellites to ensure accurate positioning
data. The primary control components include:
Master Control Station (MCS): Located in the United States, it manages satellite
operations and updates their orbital data.
Monitor Stations: Spread across various locations worldwide, these stations track
satellite signals and send data to the MCS.
Ground Antennas: Used to transmit updates and corrections to satellites.
C. User Segment
The user segment comprises GPS receivers that interpret satellite signals to determine the
user’s position, velocity, and time. GPS receivers vary in complexity and applications, ranging
from handheld navigation devices to high-precision surveying instruments. They are widely
used in smartphones, vehicles, aviation, military operations, and scientific research.
A GPS receiver's job is to locate four or more of these satellites, figure out the distance to each,
and use this information to deduce its own location. This operation is based on a simple
mathematical principle called trilateration. Hence, the entire mechanism of working of GPS
can be summed up under the following steps:
I. Each GPS satellite transmits a radio signal at a precisely known time.
II. These radio signals travel at the speed of light.
III. The receiver measures the time delay between signal transmission and signal
reception.
IV. This time difference, multiplied by the speed of light, gives the distance to the
satellite.
V. The same process is repeated for each visible satellite in the sky to determine
the position of, and distance to, at least three satellites.
VI. The receiver computes its position using the mathematical principle of
trilateration.
VII. A GPS receiver requires at least 3 satellites to calculate a 2-D position (latitude
and longitude) and at least 4 satellites to determine a 3-D position (latitude,
longitude and altitude). The more satellites visible, the higher the accuracy of the
location determined by the GPS receiver.
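The trilateration step can be sketched in two dimensions. The beacon positions and ranges below are hypothetical; in a real receiver each range would be the speed of light multiplied by the measured time delay:

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) from three known positions and ranges.
    Subtracting the circle equations pairwise removes the quadratic
    terms and leaves a 2x2 linear system, solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receiver truly at (3, 4); ranges "measured" to three beacons:
pos = trilaterate((0, 0), 5.0, (10, 0), math.sqrt(65), (0, 10), math.sqrt(45))
print(pos)  # (3.0, 4.0)
```

A real 3-D GPS fix adds a fourth satellite and solves for the receiver clock error as an extra unknown, which is why 4 satellites are needed for a 3-D position.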
Besides the United States, several countries have launched their own navigation
systems, which are discussed in this section:
The Indian Space Research Organisation (ISRO) has developed an autonomous regional
satellite navigation system called the Indian Regional Navigation Satellite System (IRNSS).
It is a constellation of 7 satellites, three of which are in geostationary orbit over the
Indian Ocean. Besides military applications, the system helps in navigation, disaster
management and vehicle tracking. It is designed to provide an accurate position
information service to users in the country as well as in the region extending up to
1,500 km from its boundary, which is its primary service area.
i. IRNSS-1A: IRNSS-1A is the first of the 7 satellites developed by India. It has a
mission life of 10 years and was launched successfully at 23:41 hrs on 1st July 2013
from the SDSC Centre, Sriharikota, India, using the PSLV-C22 vehicle.
ii. IRNSS-1B: This satellite was successfully launched on Apr 04, 2014 and is placed at
55 deg East longitude, collocated with IRNSS-1A and GSAT-8 satellites. The
navigational system would provide two types of services -- Standard Positioning
Service, which is provided to all the users and Restricted Service, which is an encrypted
service provided only to the authorised users.
iii. IRNSS-1C: It was launched successfully on 16 October 2014 at 1:32 am IST from
Satish Dhawan Space Centre in Sriharikota and placed in geostationary orbit. It has a
lifespan of 10 years.
vii. IRNSS-1G: It was launched on April 28, 2016 with a lifespan of 10 years.
Applications of GPS
GPS has numerous applications across different sectors, enhancing efficiency, safety, and
accuracy in various fields.
a. Navigation
Used in vehicles, ships, and aircraft for real-time navigation and route optimization.
Provides turn-by-turn directions in GPS-enabled smartphones and car navigation
systems.
Enhances maritime and aviation safety by guiding vessels and aircraft with precise
positioning data.
e. Disaster Management
Used in search and rescue operations to locate victims and coordinate relief efforts.
Helps in mapping disaster-prone areas and monitoring environmental changes.
Assists in tracking the movement of hurricanes and floods and in monitoring
earthquake-affected areas.
f. Scientific Research
GPS data is used for studying plate tectonics and monitoring seismic activities.
Helps in climate research by tracking atmospheric conditions and sea-level changes.
Enables space research and satellite tracking for astronomical studies.
Lesson-6
Precision agriculture (PA) is an advanced farming practice that utilizes modern technology to
optimize agricultural productivity while ensuring minimal resource utilization. It involves the
application of data-driven techniques, Geographic Information Systems (GIS), remote sensing,
Variable Rate Technology (VRT), and the Internet of Things (IoT) to enhance decision-making
and farming operations.
1. Remote Sensing and GIS: These technologies provide spatial information about soil
conditions, crop health, and climatic variations, enabling farmers to take precise
actions.
2. Variable Rate Technology (VRT): Facilitates the application of inputs like fertilizers,
pesticides, and irrigation water at variable rates based on real-time field data.
3. Soil and Plant Sensors: Help in monitoring soil moisture, nutrient levels, and plant
growth, ensuring site-specific management.
4. Global Positioning System (GPS) and Automated Machinery: Allow precise field
operations, reducing overlaps and wastage of resources.
5. Artificial Intelligence (AI) and Big Data Analytics: Helps in predicting crop diseases,
yield forecasting, and optimizing resource allocation.
Despite its potential, the adoption of precision agriculture in India faces several challenges:
Soil Test Crop Response (STCR) is a scientific methodology aimed at improving crop
productivity by applying fertilizers based on soil test results and crop response analysis. It helps
in the judicious use of fertilizers, reducing input costs, and enhancing soil health.
1. Soil Testing: Analyzing soil nutrient content to determine deficiencies and excesses.
2. Crop Response Studies: Understanding how different crops respond to specific
nutrient applications.
3. Site-Specific Nutrient Management (SSNM): Applying fertilizers in the right
quantity and at the right time to meet crop demands.
4. Yield Targeting: Setting realistic yield goals and formulating fertilizer
recommendations accordingly.
5. Sustainability Considerations: Balancing short-term productivity with long-term
soil health.
Grid Soil Sampling builds upon standard soil sampling but involves:
o Higher sampling intensity
o Dividing fields into smaller grids (e.g., 0.5 to 2.5 acres per grid)
o Each grid is sampled individually for more localized soil data
This method ensures detailed nutrient mapping and spatial variability analysis.
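The grid-division idea can be sketched as a simple bucketing of sample coordinates into square cells; the sample coordinates and the 100 m cell size below are assumptions for illustration (a 100 m cell is about 1 ha, roughly 2.5 acres):

```python
from collections import defaultdict

def assign_to_grid(samples, cell_size_m):
    """Map each (x, y) sample location (metres from the field origin)
    to its (row, col) grid cell, so each cell can be analysed
    individually for localized nutrient mapping."""
    cells = defaultdict(list)
    for x, y in samples:
        cells[(int(y // cell_size_m), int(x // cell_size_m))].append((x, y))
    return dict(cells)

# Hypothetical soil-sample locations in a field, bucketed into 100 m cells:
samples = [(15, 20), (80, 95), (150, 40), (160, 55)]
grid = assign_to_grid(samples, 100)
print(sorted(grid))  # [(0, 0), (0, 1)]
```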
1. Crop Monitoring and Scouting: Drones equipped with multispectral cameras help
detect stress in crops early.
2. Soil and Field Analysis: UAVs assess soil conditions, enabling better land preparation
and irrigation planning.
3. Pesticide and Fertilizer Spraying: Drones facilitate precision spraying, reducing
chemical usage and environmental impact.
4. Irrigation Management: Thermal imaging helps in identifying water stress and
optimizing irrigation schedules.
5. Yield Estimation and Forecasting: Data collected by drones aid in predicting crop
yield and planning harvest strategies.
6. Disaster Management: Helps in assessing crop damage due to natural calamities and
assists in insurance claims.
1. Time and Labor Efficiency: Reduces the need for manual labor, saving time and costs.
2. Enhanced Precision: Provides accurate and real-time data, improving decision-
making.
3. Sustainability: Reduces excessive use of water, fertilizers, and pesticides.
4. Increased Productivity: Helps in maximizing yield through better crop management.
1. High Cost of Drones: Initial investment and maintenance costs can be prohibitive.
2. Regulatory Restrictions: Strict drone regulations may limit widespread usage.
3. Lack of Skilled Operators: Training is required for effective operation and data
interpretation.
4. Connectivity Issues: Limited internet access in rural areas affects real-time data
transmission.
5. Farmer Awareness and Acceptance: Traditional farming communities may be
hesitant to adopt drone technology.
Introduction to Nanotechnology
Nano-science
Nano-science is the study of phenomena and manipulation of materials at the atomic,
molecular, and macro-molecular scale. At these scales, material properties differ
significantly from those observed at a larger scale.
Nano-technologies
Nano-technologies involve the design, characterization, production, and application of
structures, devices, and systems by controlling their shape and size at the nanometer
scale.
Nanotechnology
Nanotechnology refers to the processes that construct, control, and restructure materials
and systems at the scale of atoms and molecules.
Nano-particle
A nano-particle is a small object that functions as a whole unit in terms of its transport
and physical or chemical properties.
Properties of Nano-particles
Approaches in Nanotechnology
1. Bottom-up Approach
In this method, materials and devices are built from molecular components.
The components assemble themselves chemically using molecular recognition.
Offers precise control at the atomic level and is inspired by natural biological
processes.
2. Top-down Approach
Nano-pesticides refer to any pesticide formulation that includes elements within the
nanometer (nm) size range (typically 1–100 nm). These formulations may exhibit novel
properties due to their small size.
Modern agriculture is increasingly turning to nanotechnology for more efficient and targeted
pesticide delivery. Nano-formulations enhance the effectiveness of pesticides, reduce
environmental impact, and allow for controlled and sustained release.
1. Nano-emulsions
2. Nano-suspensions (Nano-dispersions)
3. Polymer-Based Nanoparticles
4. Nano-encapsulation
5. Nanospheres
7. Nano-fibres
4. Improved Mobility
Smaller particles can move more efficiently within plant tissues and soil.
Enhances systemic action and better pest targeting.
9. Eco-Friendly Approach
Nano-fertilizers are fertilizers made with tiny particles (nanoparticles) that act as
carriers of nutrients.
These carriers have a very large surface area, which allows them to hold more
nutrient ions than traditional fertilizers.
They release nutrients slowly and steadily, based on the plant's needs.
Nano-sensors are tiny devices that detect and respond to biological, chemical, or
physical signals at the nanoscale.
They provide real-time monitoring by converting this information into measurable
signals.
Used in agriculture, medicine, environmental monitoring, and food safety.
Definition
“Nano-sensors are biological, chemical, or physical sensory devices designed to detect the
presence and behavior of nanoparticles or specific biological/chemical substances at
nanoscale levels.”
Key Features
Applications of Nano-Sensors
Agriculture: Monitor soil health, nutrient levels, and plant disease early detection.
Environmental Monitoring: Detect air and water pollutants at low concentrations.
Medical Diagnostics: Identify pathogens, cancer cells, and biomarkers.
Food Safety: Detect microbial contamination and chemical residues.
Advantages of Nano-Sensors
Nanozeolites: These are nanoporous materials that help retain nutrients in the soil,
preventing leaching and increasing nutrient availability for plants over a longer
period.
Hydrogels: Nanostructured hydrogels are water-retaining materials that help improve
soil moisture levels. They absorb and release water as needed, making them highly
useful for drought-prone areas.
Nanosensors: These tiny sensors can detect soil conditions, nutrient levels, pest
infestations, and environmental changes in real time.
Wireless Communication Devices: When combined with nanosensors, wireless
devices can relay crucial agricultural data to farmers, allowing them to make informed
decisions on irrigation, fertilization, and pest control.
5. Nanotechnology for Mechanical Tillage and Soil Improvement
The application of nanoparticles can modify soil pH, making it more suitable for
specific crops.
Soil structure improvement leads to better root penetration, improved microbial
activity, and optimized nutrient absorption by plants.
Heavy metals like lead (Pb), cadmium (Cd), and arsenic (As) are harmful to crops
and human health. Nanotechnology helps in:
o Reducing their mobility (so they don’t spread uncontrollably in the soil).
o Lowering their toxicity by converting them into less harmful forms.
Soil erosion control: Nanomaterials bind soil particles together, making them less
prone to wind or water erosion.
Zeolite filtration membranes – These membranes contain nanosized pores that allow
only clean water to pass while trapping contaminants.
Nanocatalysts – These are used to break down harmful pollutants and organic waste
in water.
Magnetic nanoparticles – These particles attract heavy metals and toxins, which can
then be removed easily through a magnetic separation process.
Carbon nanotube membranes and nanofibrous alumina filters have the ability to
remove:
o Turbidity – Cloudiness in water caused by suspended particles.
o Oil and grease – Helps in treating wastewater from industries.
o Bacteria and viruses – Ensures pathogen-free drinking water.
o Organic contaminants – Removes harmful organic pollutants like pesticides
and pharmaceuticals.
Detecting the pollen load that may cause contamination is essential for genetic
purity.
Unwanted cross-pollination can lead to genetic variations, affecting seed quality and
yield.
By monitoring pollen contamination, farmers can take preventive measures to
isolate fields or adjust planting schedules.
Crop Discrimination and Spectral Features
Computers are currently being used to automate and expand Decision Support Systems (DSS)
in agricultural research. Recently, Geographic Information Systems (GIS) and Remote Sensing
(RS) technologies have become valuable tools in this field, particularly for crop yield
prediction, crop suitability studies, and site-specific resource allocation.
Remote sensing is an efficient technology that provides valuable information about the Earth’s
surface. It can capture images of large areas, enabling researchers to analyze agricultural fields
from a macro perspective. Imaging and non-imaging data from remote sensing help identify
and characterize different plant species based on their phenological traits.
Different crops display distinct phenological characteristics and timings based on stages such
as germination, tillering, flowering, boll formation (in cotton), and ripening. Even within the
same crop and growing season, different varieties exhibit variability in the duration and
intensity of these stages, which introduces complexity in crop type discrimination using
imaging systems. Hyperspectral data enables better characterization, classification, modeling,
and mapping of agricultural crops due to its high spectral resolution.
Feature Extraction
Feature extraction is the process of defining and isolating important image characteristics or
features that provide meaningful information for interpreting or classifying the image data.
For crop type discrimination, spatial features are particularly useful. Crops are often planted in
rows (single or multiple), which create unique spatial patterns that can be captured using high-
resolution satellite imagery. Spatial image classification combines spatial elements with
spectral properties to reach more accurate classification decisions. Common spatial elements
used in classification include texture, contexture, and geometry (shape).
Texture plays an essential role in image classification as it enables the identification of regular
patterns, often found in man-made agricultural arrangements. This texture characteristic is
valuable for distinguishing between different land uses and crop types.
Texture analysis often involves segmenting images based on grey value relationships. Two
common methods include:
The Local Binary Pattern (LBP) is a simple yet effective texture operator that assigns a binary
value to each pixel by comparing its intensity to its surrounding neighborhood. The result is a
binary number that represents texture patterns.
Spatial feature extraction using LBP works effectively with high-resolution satellite imagery.
Additionally, these spatial features are helpful in visual interpretation during supervised
classification.
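A minimal sketch of the LBP operator described above, applied to a hypothetical 3×3 patch; the clockwise-from-top-left bit ordering is an assumption for illustration, as implementations vary:

```python
def lbp_code(img, i, j):
    """Local Binary Pattern for pixel (i, j): compare the 8 neighbours
    (taken clockwise from the top-left) with the centre pixel;
    neighbours >= centre contribute a 1 bit. Returns the 8-bit code."""
    centre = img[i][j]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for di, dj in offsets:
        code = (code << 1) | (1 if img[i + di][j + dj] >= centre else 0)
    return code

patch = [[5, 4, 3],
         [4, 4, 2],
         [3, 1, 2]]
print(lbp_code(patch, 1, 1))  # bits 11000001 -> 193
```

Histograms of these per-pixel codes over an image window form the texture feature used for classification.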
Spectral Features for Crop Classification
Spectral features are derived from the reflectance behavior of crops. They are crucial for
differentiating crop types based on their interaction with various wavelengths of light.
Band Selection
Band selection is a critical step in hyperspectral remote sensing. Hyperspectral sensors collect
data across hundreds of narrow spectral bands, but many of these bands may contain redundant
information.
HVIs are capable of describing biochemical and biophysical interactions between light
and vegetation.
They can detect specific absorption features related to plant properties.
1. Structural Properties
2. Biochemical Properties
3. Plant Physiology and Stress Levels
Examples of narrow band indices include the Simple Ratio (SR) and other custom indices
tailored to hyperspectral systems. These indices help map plant health, structure, and
biochemical content more accurately.
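The Simple Ratio named above can be computed directly; the reflectance values below are hypothetical, and NDVI is included only as a widely used companion index not named in the text:

```python
def simple_ratio(nir, red):
    """Simple Ratio (SR): near-infrared reflectance divided by red."""
    return nir / red

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, a common companion
    index (an addition for comparison, not from the text)."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances for a healthy canopy: high NIR, low red.
nir, red = 0.50, 0.10
print(simple_ratio(nir, red))     # 5.0
print(round(ndvi(nir, red), 3))   # 0.667
```

Narrow-band hyperspectral versions of such indices substitute specific 1–10 nm bands for the broad NIR and red channels.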
Importance of Hyperspectral Remote Sensing
The spectrum provides the ability to study specific crop characteristics in detail.
Non-imaging sensors with high spectral resolution (1-10 nm sampling interval) offer
precise feature identification.
HVIs are essential for mapping and monitoring plant biophysical and biochemical
properties.
Accurate band selection is vital for capturing relevant crop traits.
Yield Monitoring
Yield monitoring refers to the process of collecting data on crop production at various stages
of growth to estimate the final yield. This is essential for decision-makers and planners to
forecast crop production accurately and make informed decisions regarding import and export
requirements well before harvest.
Traditional methods of yield estimation are often expensive, time-consuming, and prone to
large errors due to incomplete and inaccurate ground-based observations. In contrast, remote
sensing data provides a reliable, efficient, and spatially comprehensive method for monitoring
crop yields. It captures imagery and information at various scales and frequencies, allowing
continuous and widespread observation.
Aerial Photography
Aerial photography is used to optimize the use of agricultural resources and maintain
crop inventory.
Black and white aerial photographs have traditionally been used to identify crops by
comparing the appearance of crops on the ground with their photographic
representation.
Photographs are taken at various intervals during the growing season to monitor crop
changes.
Radar Sensors
Radar sensors are effective for yield monitoring, particularly by detecting seasonal
changes in crops.
Radar imagery considers various parameters such as moisture content, crop height,
and structure, which are useful for yield estimation.
Satellite Data
Traditional models for predicting crop yield have become less reliable; remote
sensing offers a more accurate alternative.
Satellite remote sensing provides consistent, large-scale spatial coverage and timely
data.
India's remote sensing program began with the launch of the Indian Remote Sensing
(IRS) satellite in 1988.
Yield estimates derived from remote sensing are indirect but increasingly precise due
to advancements in data resolution.
Coarse resolution satellite data is currently being used as a sampling tool to improve
yield prediction models.
Soil Mapping
Soil mapping is the process of creating maps that describe the properties and distribution of
soil in a specific area. These maps are crucial for agricultural planning and land resource
management.
Soil maps are required at different scales depending on the planning level:
1:1 million scale maps are used for macro-level national planning.
1:250,000 scale maps are useful at the regional or state level and provide generalized
interpretations for determining agricultural suitability.
1:50,000 scale maps, which show associations of soil series, are suitable for district-
level planning.
1:8,000 or 1:4,000 scale maps are high-resolution and require intensive field
observations. They are often based on aerial photographs or high-resolution satellite
data.
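The scale figures above translate directly into ground distances; for instance, 1 cm on a 1:50,000 map represents 500 m on the ground. A quick sketch of the arithmetic:

```python
def ground_metres(map_cm, scale_denominator):
    """Ground distance represented by a map distance, in metres.
    E.g. 1 cm on a 1:50,000 map covers 50,000 cm = 500 m."""
    return map_cm * scale_denominator / 100.0

# 1 cm of map distance at each of the scales mentioned above:
examples = {s: ground_metres(1, s)
            for s in (1_000_000, 250_000, 50_000, 8_000, 4_000)}
```

This is why the 1:8,000 and 1:4,000 products demand intensive field observation: each map centimetre covers only 80 m or 40 m of ground.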
These maps are also used to identify and manage degraded lands, such as salt-affected soils,
eroded areas, waterlogged lands, and shifting cultivation zones.
Remote sensing has significantly improved soil mapping by speeding up traditional surveys
and enhancing accuracy.
Topographic variations are used as the basis for depicting soil variability.
Multispectral satellite data supports soil mapping up to the family association level
(1:50,000 scale).
Soil features are identified using elements such as shape, size, tone, shadow, texture,
pattern, site, and association.
This method is cost-effective and relatively easy to implement.
Studies have shown that remote sensing technology is particularly efficient for soil
mapping at scales of 1:50,000 and 1:10,000.
Computer Aided Approach
With the large volume of remote sensing data, computer-aided methods are essential
for quick analysis.
These techniques utilize spectral variations for accurate classification.
Pattern recognition helps identify homogeneous soil areas for detailed investigation.
One challenge in traditional soil cartography is the precise delineation of soil
boundaries, which remote sensing can improve.
By combining remote sensing with ancillary data, better soil mapping units can be
delineated.
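As a sketch of such a computer-aided approach, a minimal k-means clustering of pixel spectra can group spectrally homogeneous areas; k-means is only one of many pattern-recognition options, and the two "soil types" below are synthetic data:

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    """Minimal k-means on pixel spectra (rows = pixels, columns =
    spectral bands). Centres start from evenly spaced pixels; real
    implementations use better initialization."""
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centres = pixels[idx].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster centre.
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    return labels, centres

# Two synthetic "soil types" with distinct 3-band reflectance spectra.
rng = np.random.default_rng(1)
bright = rng.normal([0.40, 0.45, 0.50], 0.01, size=(50, 3))
dark = rng.normal([0.10, 0.12, 0.15], 0.01, size=(50, 3))
labels, centres = kmeans(np.vstack([bright, dark]), k=2)
```

The resulting cluster map flags candidate homogeneous units, which are then checked and refined with ancillary data and field investigation.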
Conclusion
Remote sensing offers a powerful set of tools for both yield monitoring and soil mapping. It
addresses the limitations of traditional methods, enhances accuracy, and supports effective
agricultural planning at various levels. The integration of aerial photography, radar,
multispectral and hyperspectral satellite data, along with modern computational techniques, is
paving the way for smarter, data-driven agriculture.
Simulation and Crop Modelling
Models are of different types depending on the purpose for which they are designed or used
and on the supporting system developed for them.
1. Statistical-empirical models: These models express the relationship between yield
or a yield component and weather parameters. However, they do not explain the
mechanism by which the weather parameters influence yield. Such models are simple
in nature, crop- and location-specific, and do not take soil variability, genetic
potential, or management into account.
2. Mechanistic models: These models explain not only the relationship between weather
parameters and yield but also the mechanism by which the independent variables
(weather parameters such as radiation and temperature) influence the dependent
variables (e.g. photosynthesis and leaf area development).
3. Deterministic models: These models estimate or predict an exact value of the yield or
other dependent variable. They are usually developed with mathematical techniques
and have well-defined coefficients.
4. Stochastic models: These models take the value of a weather parameter at some
probability level. The output, i.e. yield or yield components, is therefore also
estimated within a range that depends on the probability level of the input variable.
5. Static models: These models do not account for the time factor; both the dependent
and independent variables have values that remain constant over a given period of time.
6. Dynamic models: These models include time explicitly and usually deal with rate
variables such as evapotranspiration, the rate of photosynthesis, and respiration.
They are complex in nature and define the yield, or the state of the dependent
variable, at a given time from the rates of the independent variables.
7. Decision Support Systems (DSS): Decision support systems are integrated
software packages comprising tools for processing both numerical and qualitative
information. They offer the ability to deliver the best available information quickly,
reliably, and efficiently. The choices of planting time, varietal selection, grazing
strategies, and fertilizer, irrigation, and spray applications are complex decisions to be
made at the farm level. These are important and decisive because they cannot be
postponed, are irreversible, represent a substantial allocation of resources, and have a
wide range of outcomes, with consequences that impact the farm business for years to
come. A successful decision support system focuses on such decisions. A key element
in the success of a DSS is the development of trust in its reliability and the willingness
and ability of the targeted users to utilize the system.
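The contrast between the deterministic and stochastic use of a simple empirical model (types 1, 3, and 4 above) can be sketched as follows; the rainfall-yield relationship and all numbers are assumptions for illustration, not from the text:

```python
import random

random.seed(42)

def empirical_yield(rainfall_mm):
    """Hypothetical statistical-empirical model: yield (t/ha) as a
    linear function of seasonal rainfall (mm). Coefficients are
    assumed; no mechanism is represented."""
    return 0.0045 * rainfall_mm - 0.2

# Deterministic use: one input value gives one exact yield estimate.
point_estimate = empirical_yield(650)

# Stochastic use: rainfall is drawn from an assumed distribution, so
# the predicted yield comes out as a range rather than a single value.
draws = sorted(empirical_yield(random.gauss(650, 80)) for _ in range(10_000))
lo, hi = draws[250], draws[-251]   # approximate 95% interval
```

The same equation serves both purposes; only the treatment of the input variable changes.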
1. Simulation: The process of operationalizing the model, or solving it to mimic a
system's behaviour, is known as simulation. Developing the computer logic and flow
diagram, writing the computer code, and implementing the code on a computer to
produce the desired outputs for analyzing the system are the necessary tasks in the
simulation process.
2. Inputs: Inputs to the system are those factors in the environment that influence the
behaviour of the system but which are not influenced by the system, such as
meteorological variables. Inputs are also referred to as driving variables or forcing
functions. The choice of components and inputs for various models may differ
depending on the objectives and availability of data.
3. Output: Outputs from the system represent the characteristic behaviour of the system
that is of interest to the modeller.
4. State variables: State variables are quantities that describe the conditions of the
components of the system. They may change with time as the system components
interact with the environment. In dynamic models, state variables change with time.
Soil water content and crop biomass are two state variables that change with time in
most crop models. State variables of crop models are of critical importance because
these are the dynamic characteristics of a crop that are of interest to the modellers.
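A minimal dynamic model with the two state variables just mentioned, soil water content and crop biomass, updated daily from rate variables, might look like this; every coefficient and input value is an illustrative assumption:

```python
def simulate(rain, et):
    """Daily update of two state variables: soil water (mm) and
    biomass (g/m^2). Growth is scaled by water availability; the
    coefficients are assumed, for illustration only."""
    soil_water, biomass = 100.0, 1.0   # initial states
    capacity, growth = 150.0, 0.05     # assumed parameters
    history = []
    for r, e in zip(rain, et):
        # Rate variables evaluated at this time step:
        water_stress = soil_water / capacity          # 0..1 factor
        biomass += growth * biomass * water_stress    # daily growth
        soil_water = min(max(soil_water + r - e, 0.0), capacity)
        history.append((soil_water, biomass))
    return history

# One assumed week of rainfall and evapotranspiration (mm/day):
hist = simulate(rain=[0, 12, 0, 0, 25, 0, 0],
                et=[4, 4, 5, 5, 4, 5, 5])
```

Each day's output becomes the next day's initial condition, which is exactly what distinguishes these state variables from the fixed inputs.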
Accuracy may be defined in terms of three progressive stages: verification, validation, and
calibration.
The modeling process is cyclic and closely parallels the scientific method and the software
life cycle for the development of a major software project. The process is cyclic because at
any step we might return to an earlier stage to make revisions and continue the process from
that point. The steps of the modeling process are as follows:
Analyze the problem: We must first study the situation sufficiently to identify the
problem precisely and understand its fundamental questions clearly. At this stage, we
determine the problem’s objective and decide on the problem’s classification, such as
deterministic or stochastic. Only with a clear, precise problem identification can we
translate the problem into mathematical symbols and develop and solve the model.
Formulate a model: In this stage, we design the model, forming an abstraction of the
system we are modeling. Some of the tasks of this step are as follows:
Gather data: We collect relevant data to gain information about the system’s
behavior.
Make simplifying assumptions and document them: In formulating a model
we should attempt to be as simple as reasonably possible. Thus, frequently we
decide to simplify some of the factors and to ignore other factors that do not
seem as important. Most problems are entirely too complex to consider every
detail, and doing so would only make the model impossible to solve or to run in
a reasonable amount of time on a computer. Moreover, factors often exist that
do not appreciably affect outcomes. Besides simplifying factors, we may decide
to return to Step 1 to restrict further the problem under investigation.
Determine variables and units: We must determine and name the variables. An
independent variable is the variable on which others depend. In many
applications, time is an independent variable. The model will try to explain the
dependent variables. For example, in simulating the trajectory of a ball, time is
an independent variable; and the height and the horizontal distance from the
initial position are dependent variables whose values depend on the time. To
simplify the model, we may decide to neglect some variables (such as air
resistance), treat certain variables as constants, or aggregate several variables
into one. While deciding on the variables, we must also establish their units,
such as days as the unit for time.
Establish relationships among variables and submodels: If possible, we
should draw a diagram of the model, breaking it into submodels and indicating
relationships among variables. To simplify the model, we may assume that
some of the relationships are simpler than they really are. For example, we
might assume that two variables are related in a linear manner instead of in a
more complex way.
Determine equations and functions: While establishing relationships between
variables, we determine equations and functions for these variables. For
example, we might decide that two variables are proportional to each other, or
we might establish that a known scientific formula or equation applies to the
model. Many computational science models involve differential equations, or
equations involving a derivative, which we introduce in Module 2.3 on “Rate
of Change.”
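The ball-trajectory example can be written out directly, with time as the independent variable and the standard constant-gravity equations; the launch speed and angle are assumed values:

```python
import math

G = 9.81                  # gravitational acceleration, m/s^2
V0 = 20.0                 # assumed launch speed, m/s
ANGLE = math.radians(45)  # assumed launch angle

def position(t):
    """Dependent variables at time t (the independent variable):
    horizontal distance x and height y, neglecting air resistance
    as the simplifying assumption in the text suggests."""
    x = V0 * math.cos(ANGLE) * t
    y = V0 * math.sin(ANGLE) * t - 0.5 * G * t * t
    return x, y

x1, y1 = position(1.0)
```

Units are fixed up front (seconds, metres), matching the advice above to establish units while naming the variables.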
Solve the model: This stage implements the model. It is important not to jump to this
step before thoroughly understanding the problem and designing the model. Otherwise,
we might waste much time, which can be most frustrating. Some of the techniques and
tools that the solution might employ are algebra, calculus, graphs, computer programs,
and computer packages. Our solution might produce an exact answer or might simulate
the situation. If the model is too complex to solve, we must return to Step 2 to make
additional simplifying assumptions or to Step 1 to reformulate the problem.
Verify and interpret the model’s solution: Once we have a solution, we should
carefully examine the results to make sure that they make sense (verification) and that
the solution solves the original problem (validation) and is usable. The process of
verification determines if the solution works correctly, while the process of validation
establishes if the system satisfies the problem’s requirements. Thus, verification
concerns “solving the problem right,” and validation concerns “solving the right
problem.” Testing the solution to see if predictions agree with real data is important for
verification. We must be careful to apply our model only in the appropriate ranges for
the independent data. For example, our model might be accurate for time periods of a
few days but grossly inaccurate when applied to time periods of several years. We
should analyze the model’s solution to determine its implications. If the model solution
shows weaknesses, we should return to Step 1 or 2 to determine if it is feasible to refine
the model. If so, we cycle back through the process. Hence, the cyclic modelling
process is a trade-off between simplification and refinement. For refinement, we may
need to extend the scope of the problem in Step 1. In Step 2, while refining, we often
need to reconsider our simplifying assumptions, include more variables, assume more
complex relationships among the variables and submodels, and use more sophisticated
techniques.
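Testing whether predictions agree with real data is often summarized by an error statistic such as the root-mean-square error; the yields below are invented for illustration:

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between model predictions and real
    observations: one common check in the verification stage."""
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

# Hypothetical yields (t/ha): model output vs field measurements.
pred = [2.1, 2.6, 3.0, 3.3]
obs = [2.0, 2.7, 2.9, 3.5]
error = rmse(pred, obs)
```

A large error signals a return to Steps 1 or 2 for refinement, while a small one within the model's valid input range supports its use.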
Report on the model: Reporting on a model is important for its utility. Perhaps the
scientific report will be written for colleagues at a laboratory or will be presented at a
scientific conference. A report contains the following components, which parallel the
steps of the modeling process:
Analysis of the problem: Usually, assuming that the audience is intelligent
but not aware of the situation, we need to describe the circumstances
in which the problem arises. Then, we must clearly explain the problem and the
objectives of the study.
Model design: The amount of detail with which we explain the model
depends on the situation. In a comprehensive technical report, we can
incorporate much more detail than in a conference talk. For example, in the
former case, we often include the source code for our programs. In either case,
we should state the simplifying assumptions and the rationale for employing
them. Usually, we will present some of the data in tables or graphs. Such figures
should contain titles, sources, and labels for columns and axes. Clearly labeled
diagrams of the relationships among variables and submodels are usually very
helpful in understanding the model.
Model solution: In this section, we describe the techniques for solving
the problem and the solution. We should give as much detail as necessary for
the audience to understand the material without becoming mired in technical
minutia. For a written report, appendices may contain more detail, such as
source code of programs and additional information about the solutions of
equations.
Results and conclusions: Our report should include results,
interpretations, implications, recommendations, and conclusions of the model’s
solution. We may also include suggestions for future work.
Maintain the model: As the model’s solution is used, it may be
necessary or desirable to make corrections, improvements, or enhancements. In
this case, the modeler again cycles through the modeling process to develop a
revised solution.
Agricultural systems are characterized by having many organizational levels. From the
individual components within a single plant, through constituent plants, to farms or a
whole agricultural region or nation, lies a whole range of agricultural systems. Since
the core of agriculture is concerned with plants, the level that is of main interest to the
agricultural modeller is the plant. Reactions and interactions at the level of tissues and
organs are combined to form a picture of the plant that is then extrapolated to the crop
and its output.
Guidelines for Model Development
Simplicity
Learn from the past
Create a conceptual model
Build a prototype
Push the user’s desire
Model to data available
Separate data from software
Trust your creative juices
Fit universal constraints
Distil your own principles
Model Uses
Model Limitations
Models and simulations cannot completely re-create real-life situations
Not every possible situation has been included in the model
The equipment and software are expensive to purchase
The result depends on how good the model is and how much data was used to
create it in the first place