Hydrometeorological Instruments
Barometer - It consists of a glass tube filled with mercury and a brass scale. The glass tube is kept in a metallic tube.
Ceilometer - a device that uses a laser or other light source to determine the
height of a cloud ceiling or cloud base.
Dry and Wet Bulb Hygrometer - used to determine the state (humidity) of the air.
Dark Adaptor Goggles - made with red-tinted plastic lenses. Such goggles or glasses are often used by pilots and weather observers to preserve their natural night vision.
Measuring Scales - a spring scale measures weight, the product of mass and gravitational acceleration (9.807 m/s²), from the force exerted on a spring, whereas a balance or pair of scales using a balance beam compares masses by balancing the weight of an object against the weight of one or more known masses.
Navigational Compass - an instrument used for navigation and orientation that shows direction relative to the geographic cardinal directions; north, south, east, and west appear on the compass face as abbreviated initials.
Pipette - a small piece of apparatus consisting of a narrow tube into which fluid is drawn by suction (as for dispensing or measurement) and retained by closing the upper end.
Wind Vane - consists essentially of a broad arrowhead mounted on ball bearings so that the arrow can move freely in the horizontal plane. The arrow indicates the direction of the wind.
Hail Pad - a standard hail pad consists of florist's foam and aluminum foil. Falling hail strikes the foil and creates dimples for the observer to measure after the storm.
Reference
https://www.scribd.com/document/394446342/HYDROMETEOROLOGICAL-INSTRUMENTS
STATISTICAL TREATMENT FOR HYDROLOGIC DATA
Introduction
Many hydrological processes exhibit substantial variability that cannot be explained by the laws of physics, chemistry, biology, or climatology alone; they are subject to chance, hence the importance of statistics in hydrology. Hydrologic variables such as precipitation are difficult to explain because of their inherent randomness and because of randomness in the hydrologic system in which the variable operates, such as the watershed. A second source of variability is sampling error: hydrologists must often predict from small samples of the population because data are available for only a short period. Likewise, precipitation, soil, and infiltration data are collected at only a few points in the watershed, yet these limited data sets are used to describe the desired characteristics of the entire watershed. As the number of observations increases over the years, the accuracy of prediction also improves. However, statistics must go hand in hand with an understanding of the hydrological processes; only then is the study robust. Although much literature on statistics and statistical methods exists, the general literature does not explain their need and application in hydrology.
Basic concepts of statistics and probability in hydrology
In hydrology most data are observations rather than experiments: once an event such as rainfall has occurred, the same event does not occur again, so an extreme event such as a heavy rainfall or flood never recurs in exactly the same form. Statistics and probability therefore offer insights into the expected magnitude and variability of future observations. Statistics is a tool for inferring the properties of a population from the properties of a sample, while probability gives the likelihood of an event occurring when the population characteristics are known.
Hypothesis testing
Once the hydrologist has established the characteristics of a sample of hydrologic data such as stream flow (annual flood flow), aquifer flow, or rainfall, other causative relationships need to be established. Certain questions must be answered: have the annual flood peaks increased over time due to anthropogenic changes, does the groundwater in the aquifer meet drinking-water quality standards, has the concentration of a pollutant in the river water increased over time, and does it increase only during a particular season? Such practical questions involve a causative agent that the hydrologist must account for. The causative agent can be river-basin development or a change in land-use cover in the case of increased annual flood peaks, agricultural return flow in the case of groundwater contamination, or the presence of an industry that operates only in certain seasons in the case of river water quality. These questions can be translated into statistical hypotheses, like the following:
Null hypothesis H0 - usually a hypothesis of no change. Instances of no change in water or hydrology are as follows:
The distribution of aquifer hydraulic conductivity is identical at two far-apart points in the same aquifer.
The concentration of a pollutant in the river does not depend on flow.
Alternative hypothesis H1 - the hypothesis of change; some departure is expected. The distribution of hydraulic conductivity at two points in the same aquifer may differ due to certain causative factors, or the concentration of a pollutant may be related to river flow during the season under consideration (as against the null hypothesis explained above).
One-sided test - a hypothesis test in which H1 is a departure from H0 in one direction only; for example, the hydraulic conductivity in the aquifer changes only from a point of higher gradient to one of lower gradient, or the concentration of a pollutant increases only with increasing flow.
Two-sided test - a test in which H1 is a departure from H0 in either direction; the hydraulic conductivity can change in either direction, or the concentration changes with both increases and decreases in river flow.
Hypothesis testing measures the strength of the statistical evidence: does the evidence provide sufficient reason to conclude in favour of the null hypothesis H0 or the alternative hypothesis H1?
A test of hypothesis is carried out on the sample mean, the null hypothesis being that the population mean is equal to the sample mean, while the alternative hypothesis is that the sample mean is not equal to the population mean.
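As a sketch of such a test of the mean, the t statistic for a one-sample test can be computed as below. The flow values and the hypothesized mean of 600 m³/s are invented for illustration, not data from the text.

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """t statistic for H0: the population mean equals mu0."""
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)          # sample standard deviation
    return (xbar - mu0) / (s / math.sqrt(n))

# Hypothetical annual peak flows (m^3/s); values are illustrative only.
flows = [580, 610, 595, 640, 605, 590, 615, 600]
t_stat = one_sample_t(flows, mu0=600)
```

A small |t| (here about 0.68) gives no reason to reject H0; in practice the statistic is compared against a t-table critical value for n − 1 degrees of freedom.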
Extreme rainfall events and floods are very complicated natural events. They occur when many parameters and variables combine, so analyzing them with a conventional model, such as the rational method for peak runoff in a catchment or the unit-hydrograph method, does not yield good results. Variables that are always involved include the catchment characteristics, rainfall intensity and duration, and antecedent conditions; each of these factors in turn depends on a host of other parameters. The statistical approach to hydrology is therefore used for the prediction of flood flows and rainfall events.
However, the use of statistics in hydrology is bound by the limitation that many hydrological processes cannot be reduced to a formula because of the variability between events. There are three major deterrents to statistics in hydrology: an inherent randomness in water-related events and hence in the variables, substantial sampling errors, and an incomplete understanding of the processes involved.
Furthermore, many hydrological data already collected show anomalies when put to statistical enquiry, often in the form of skewed distribution functions, lack of independence among variables, censoring due to natural events, or seasonal patterns. This can be attributed to the fact that while statistics is based on set formulae derived from repeated, similar results across experiments, in hydrology the method can really only be used to define expected outcomes, not for modeling, because of the nature of water events.
A defining characteristic of statistics is that it considers the characteristics of a sample taken from a population, often the median of the population under observation, where a population is defined as a collection of objects whose measurable properties are of interest.
In defining the population, however, another problem often arises: that of sampling. While a population can sometimes be finite, so that the individual characteristics of that population can be discerned, the researcher is usually limited to a mere sample of the total population. It is then important to understand the individual characteristics of the sample first and then relate them to the properties of the population.
This process can be simplified, however, as sampling is of four basic types.
The first is the idea of Random Sampling, where each part of the population
under study has equal chances of being selected.
This random sampling can also be used by dividing the population into
groups, and applying the method to each group thus formed; this is called
Stratified Random Sampling.
Converse to the random method, the Uniform Sampling method allows for a
strict rule to prevail on the sampling points, making them equally distant from
each other.
Fourth and finally, there is the Convenience Sampling method, where data are collected at the convenience of the experimenter. Usually the two forms of random sampling or a uniform sample are considered ideal; uniform sampling has the logistical advantage of minimizing serial dependence, while stratified sampling is used when the groups thus formed show substantial variability.
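The sampling schemes above can be sketched for a notional population of 100 gauging points; the population, sample sizes, and seed are arbitrary assumptions for illustration.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable
population = list(range(100))          # e.g. 100 candidate gauging points

# Simple random sampling: every point has an equal chance of selection.
simple = random.sample(population, 10)

# Uniform (systematic) sampling: points equally spaced along the list.
uniform = population[::10]

# Stratified random sampling: divide into groups, sample within each group.
strata = [population[i:i + 25] for i in range(0, 100, 25)]
stratified = [x for s in strata for x in random.sample(s, 2)]
```

Convenience sampling has no algorithm: it simply takes whatever data happen to be available.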
The statistical approach for flood frequency analysis estimates the design
flood by using past stream flow data of maximum annual flood flow which may
be taken from direct observations or estimated by using a suitable method.
Frequency analysis is conducted using the available record of maximum annual rainfall events of the region. The probability of occurrence of an event, in this case a flood event (the maximum flood discharge likely to occur in a year at a location) whose magnitude is equal to or greater than a certain magnitude X, is denoted by P. The return period T is defined as the inverse of P.
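The relation T = 1/P can be sketched using the Weibull plotting position, one common way of assigning P to ranked annual maxima; the flood values below are invented for illustration.

```python
def return_periods(annual_maxima):
    """Pair each annual maximum with its return period T = 1/P, where
    P = m / (N + 1) is the Weibull plotting position and m is the
    rank of the value in descending order."""
    ranked = sorted(annual_maxima, reverse=True)
    n = len(ranked)
    return [(x, (n + 1) / m) for m, x in enumerate(ranked, start=1)]

# Hypothetical annual maximum floods (m^3/s); illustrative values only.
floods = [820, 450, 610, 990, 530, 700, 480, 560, 640, 760]
estimates = return_periods(floods)   # largest flood gets T = 11 years
```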
In hydrology, estimating the magnitude of an event (storm or flood) corresponding to a given return period is of utmost importance. This is done through statistical analysis of past records of flood and rainfall events to predict events of the future. Such studies use records of daily, monthly, and annual rainfall and stream flow to estimate large storm events and flood flows. For extremely large events the past records often do not cover the required range, so extrapolation techniques are used; however, the sample size may not be large enough to allow extrapolation and accurate prediction. Statistical tools and methods then help make such predictions with reasonable accuracy. In most situations the data are inadequate to determine the risk due to large flood peaks, rainfall events, pollutant loadings, and low flows.
Normal Distribution
The normal distribution is one of the most commonly used distributions. It has a bell shape (Figure 1) and is symmetrical, with a coefficient of skewness equal to zero. It is used to study, for example, the average annual stream flow or the average annual pollutant loading in a stream. The natural parameters of the normal distribution are µ and σ².
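A minimal sketch of how the normal distribution is used: the probability that an annual flow exceeds a given value follows from the normal CDF, here computed via the complementary error function. The mean and standard deviation are assumed values for illustration.

```python
import math

def normal_exceedance(x, mu, sigma):
    """P(X > x) for X ~ N(mu, sigma^2), via the complementary error function."""
    z = (x - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

# Illustrative: mean annual flow 600 m^3/s, sigma 150 m^3/s (assumed values).
p_exceed = normal_exceedance(900, mu=600, sigma=150)   # z = 2, about 2.3%
```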
Log normal distribution (two parameters)
This statistical distribution also arises in hydrology, when the hydrological variables act multiplicatively rather than additively. The frequency distribution of such variables is skewed, so the logarithm of the variables is considered, and this follows a normal distribution. Here µ, σ, and x0 are called the scale, shape, and location parameters; x0 is usually equal to zero.
The distribution most commonly used for flood-frequency events is the Gumbel distribution, the extreme-value Type I (EV I) distribution.
2. The mean annual flood of a river is 600 m³/s and the standard deviation of the annual flood time series is 150 m³/s. Determine the return period of a flood of magnitude 1000 m³/s occurring in the river. Use Gumbel's method and assume the sample size to be very large (taken from Engineering Hydrology by K. Subramanya, 2008).
Answer
Using x_T = x̄ + Kσ, with x̄ = 600 m³/s, σ = 150 m³/s, and x_T = 1000 m³/s, substituting in the above equation gives the frequency factor K = 2.667.
But K = (y_T − ȳ_n)/S_n, and for large N, ȳ_n and S_n are 0.577 and 1.2825 respectively.
Therefore 2.667 = (y_T − 0.577)/1.2825, so y_T = 3.997.
Since y_T = −ln[ln(T/(T − 1))], solving for T gives T = 54.9 years.
So, the return period of a flood of magnitude 1000 m³/s is about 55 years.
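The worked example can be checked numerically; the sketch below inverts y_T = −ln[ln(T/(T − 1))] to recover T, using the large-sample constants quoted above.

```python
import math

def gumbel_return_period(xT, mean, sd, yn=0.577, Sn=1.2825):
    """Return period T for a flood of magnitude xT by Gumbel's method
    (large-sample values of yn and Sn, as in the example above)."""
    K = (xT - mean) / sd                  # frequency factor
    yT = yn + K * Sn                      # reduced variate
    # y_T = -ln(ln(T/(T-1)))  =>  T = 1 / (1 - exp(-exp(-y_T)))
    return 1.0 / (1.0 - math.exp(-math.exp(-yT)))

T = gumbel_return_period(1000, mean=600, sd=150)   # about 54.9 years
```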
REFERENCE
Ranjana Ray Chaudhuri (2019)
https://ebooks.inflibnet.ac.in/esp05/chapter/statistical-analysis-of-hydrologic-data-hydrology-frequency-analysis/
CONCEPTS OF PROBABILITY
Quantile Plots
Quantile plots visually portray the quantiles, or percentiles (which equal the
quantiles times 100) of the distribution of sample data.
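A quantile plot can be sketched by pairing each sorted observation with a cumulative frequency; the Weibull position i/(n + 1) used here is one common convention, and the data values are invented for illustration.

```python
def quantile_plot_points(data):
    """(cumulative frequency, value) pairs for a quantile plot,
    using the Weibull plotting position i / (n + 1)."""
    ordered = sorted(data)
    n = len(ordered)
    return [((i + 1) / (n + 1), x) for i, x in enumerate(ordered)]

# Illustrative daily rainfall depths (mm); invented values.
points = quantile_plot_points([12, 7, 3, 18, 9, 14, 5, 11, 8])
```

Multiplying the first coordinate of each pair by 100 gives the percentile, as noted above.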
REFERENCE
https://www.scribd.com/presentation/435528470/CONCEPTS-OF-
PROBABILITY-AND-STATISTICS-HYDROLOGY-pptx
v = (ρs − ρw) D² / (18 η)
Where:
v = settling velocity
ρs = density of soil
ρw = density of water
η = viscosity of water
D = diameter of soil particle

D (mm) = K √( L (cm) / t (min) )
Where:
K = a constant that depends on temperature and the specific gravity of the particles
L = depth
t = time
K = √( 30 η / (Gs − 1) )
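In SI units, the two formulas above can be sketched as follows; the densities, viscosity, and the K value used in the example calls are assumed, illustrative numbers, not values from the text.

```python
import math

def settling_velocity(rho_s, rho_w, eta, D):
    """Stokes' law: v = (rho_s - rho_w) * D^2 / (18 * eta), in SI units."""
    return (rho_s - rho_w) * D ** 2 / (18.0 * eta)

def particle_diameter(K, L_cm, t_min):
    """D (mm) = K * sqrt(L (cm) / t (min)), as in the hydrometer formula."""
    return K * math.sqrt(L_cm / t_min)

# Illustrative: a 10-micron quartz grain (2650 kg/m^3) settling in water.
v = settling_velocity(2650.0, 1000.0, 1.0e-3, 1.0e-5)   # m/s
d = particle_diameter(0.01344, L_cm=9.0, t_min=4.0)     # K value assumed
```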
Table 2.1
PERCENT FINER:
%finer = (a × Rcp / Ws) × 100
Where:
Rcp = corrected hydrometer reading, Rcp = R + Fr + Fz
a = correction for specific gravity, a = Gs(1.65) / ((Gs − 1)(2.65))
Ws = dry weight of soil
PARTICLE SIZE/DIAMETER:
D (mm) = K √( L (cm) / t (min) ), with Rcl = R + Fm
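The percent-finer computation can be sketched directly from the formulas above; the readings and corrections in the example call are invented for illustration.

```python
def percent_finer(R, Fr, Fz, Gs, Ws):
    """Percent finer from one hydrometer reading, per the formulas above:
    Rcp = R + Fr + Fz, a = Gs(1.65) / ((Gs - 1)(2.65))."""
    Rcp = R + Fr + Fz                       # corrected hydrometer reading
    a = Gs * 1.65 / ((Gs - 1.0) * 2.65)     # specific-gravity correction
    return a * Rcp / Ws * 100.0

# Illustrative call: R = 40, Fr = 1, Fz = -2, Gs = 2.65, Ws = 50 g.
pf = percent_finer(40.0, 1.0, -2.0, 2.65, 50.0)   # a = 1 when Gs = 2.65
```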
References:
Geotechnical Eng'g Hydrometer Analysis (youtube.com)
Frequency Analysis
6.1 Introduction
The above procedure was applied to the daily rainfalls given in Table 6.1. The results are shown in Table 6.2, in Columns (1), (2), (3), (4), and (5). The data are the same data found in the previous edition of this book.
Column (5) gives the frequency distribution of the intervals. The bulk of the
rainfall values is either 0 or some value in the 0-25 mm interval. Greater
values, which are more relevant for the design capacity of drainage canals,
were recorded on only a few days.
From the definition of frequency (Equation 6.1), it follows that the sum of all
frequencies equals unity
In hydrology, we are often interested in the frequency with which data exceed
a certain, usually high design value. We can obtain the frequency of
exceedance F(x > ai) of the lower limit ai of a depth interval i by counting the
number Mi of all rainfall values x exceeding ai, and by dividing this number by
the total number N of rainfall data. This is shown in Table 6.2, Column (6). In equation form, this appears as F(x > ai) = Mi / N.
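The counting rule just described can be sketched in a few lines of code; the rainfall values below are invented for illustration.

```python
def exceedance_frequency(data, a):
    """F(x > a): the count of observations exceeding a, divided by the
    total number of observations."""
    M = sum(1 for x in data if x > a)
    return M / len(data)

# Illustrative daily rainfalls (mm): mostly 0 or small, a few large values.
rain = [0, 0, 0, 5, 12, 0, 30, 60, 0, 8, 120, 0]
F_over_25 = exceedance_frequency(rain, 25)   # 3 of 12 values exceed 25 mm
```

The cumulative frequency of non-exceedance follows as 1 minus this value.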
Frequency distributions are often presented as the frequency of non-
exceedance and not as the frequency of occurrence or of exceedance. The
frequency of non-exceedance is also referred to as the cumulative frequency.
We can obtain the frequency of non-exceedance F(x < ai) of the lower limit ai by calculating the sum of the frequencies over the intervals below ai.
Because the sum of the frequencies over all intervals equals unity, the cumulative frequency (shown in Column (7) of Table 6.2) can therefore be derived directly from the frequency of exceedance as F(x < ai) = 1 − F(x > ai).
Columns (8) and (9) of Table 6.2 show return periods. The calculation of
these periods will be discussed later, in Section 6.2.4.
The remaining frequencies presented in Table 6.3 differ from those in Table
6.2 in that they are conditional frequencies (the condition in this case being
that the rainfall is higher than 25 mm). To convert conditional frequencies to
unconditional frequencies, the following relation is used
F = (1 − F*) × F′
Where:
F = unconditional frequency (as in Table 6.2)
F′ = conditional frequency (as in Table 6.3)
F* = frequency of occurrence of the excluded events (as in Table 6.2)
As an example, we find in Column (7) of Table 6.3 that F′(x ≤ 50) = 0.641. Further, the cumulative frequency of the excluded data equals F*(x ≤ 25) = 0.932 (see Column (7) of Table 6.2). Hence, the unconditional frequency obtained from Equation 6.6 is
F(x ≤ 50) = (1 − 0.932) × 0.641 = 0.0436
This is exactly the value found in Column (5) of Table 6.2.
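Equation 6.6 is one line of arithmetic; the sketch below reproduces the worked example, where 0.932 and 0.641 are the frequencies quoted above.

```python
def unconditional_frequency(F_star, F_prime):
    """Equation 6.6: F = (1 - F*) x F', converting a conditional
    frequency F' to an unconditional frequency F."""
    return (1.0 - F_star) * F_prime

F = unconditional_frequency(0.932, 0.641)   # about 0.0436
```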
6.2.3 Frequency Analysis by Ranking of Data
The smaller the frequency of occurrence of an event, the larger the sample will have to be in order to make a prediction with a specified accuracy. For example, the observed frequency of dry days given in Table 6.2 (0.5, or 50%) will deviate only slightly from the frequency observed during a later period of at least equal length. The frequency of daily rainfalls of 75-100 mm (0.005, or 0.5%), however, can easily be doubled (or halved) in the next period of record.
A quantitative evaluation of the reliability of frequency predictions follows
in the next section.
This means that, on average, there will be a November day with rainfall
exceeding 100 mm once in 6.33 years.
If a censored frequency distribution is used (as it was in Table 6.3), it will also be necessary to use the factor 1 − F* (as shown in Equation 6.6) to adjust Equation 6.10.
This produces
Figure 6.2 also shows that, to give an impression of the error in the prediction of future frequencies, frequency estimates based on one sample of limited size should be accompanied by confidence statements. Such an impression can be obtained from Figure 6.3, which is based on the binomial distribution. The figure illustrates the principle of the nomograph. Using N = 50 years of observation, we can see that the 90% confidence interval of a predicted 5-year return period is 3.2 to 9 years. These values are obtained by the following procedure:
- Enter the graph on the vertical axis with a return period of Tr = 5 (point A), and move horizontally to intersect the baseline, with N = ∞, at point B;
- Move vertically from the intersection point (B) and intersect the curves for N = 50 to obtain points C and D;
- Move back horizontally from points C and D to the axis with the return periods and read points E and F;
- The interval from E to F is the 90% confidence interval of A; hence it can be predicted with 90% confidence that Tr is between 3.2 and 9 years.
Nomographs for confidence intervals other than 90% can be found in literature (e.g. in Oosterbaan 1988).
The confidence belts in Figure 6.3 show the predicted intervals for the
frequencies that can be expected during a very long future period with no
systematic changes in hydrologic conditions. For shorter future periods, the
confidence intervals are wider than indicated in the graphs. The same is true
when hydrologic conditions change.
Reference
Oosterbaan (n.d.). Frequency and Regression Analysis, pp. 175-186.