
HYDROMETEOROLOGICAL INSTRUMENT

Hydrometeorology, an essential branch of meteorology, focuses on aspects such as the hydrologic cycle, the water balance, and the statistical analysis of rainfall patterns. It also encompasses methods for observing weather patterns concerning water, which play a crucial role in understanding weather phenomena, since excessive and insufficient rainfall can both pose significant risks. Various instruments are used in hydrometeorology to facilitate these observations and analyses, such as:

Anemometer - a device used for measuring the speed of wind.

Atmometer - a device for measuring the rate of evaporation. Used alongside a potometer, it makes it possible to compare the rate of transpiration from a plant with evaporation from a purely physical system.

Altimeter - an instrument for measuring altitude. The measurement of altitude is called altimetry, which is related to the term bathymetry, the measurement of depth under water.

Barometer - an instrument for measuring atmospheric pressure. It consists of a glass tube filled with mercury and a brass scale; the glass tube is kept in a metallic tube.


Barograph - an instrument that makes a continuous recording of atmospheric pressure.

Centrifuge - a machine with a rapidly rotating container that applies centrifugal force to its contents; any of various rotating machines that separate liquids from solids, or dispersions of one liquid in another, by the action of centrifugal force.

Ceilometer - a device that uses a laser or other light source to determine the
height of a cloud ceiling or cloud base.

Caliper - a caliper can be as simple as a compass with inward- or outward-facing points.

Dry and Wet Bulb Hygrometer - important for determining the state of humid air.
Dark Adaptor Goggles - goggles made with red-tinted plastic lenses, often used by pilots and weather observers to preserve their natural night vision.

Dropsonde - an expendable weather reconnaissance device.

Dewcell - consists of a small heating element surrounded by a solution of lithium chloride; used to measure the dew point.

Disdrometer - an instrument that measures the size distribution of falling precipitation. Some disdrometers can distinguish between rain, graupel, and hail; their uses are numerous.
Evaporation Pan - a tank filled with water to within two and a half inches of the top. Evaporation is determined by noting the decrease in water level with the help of a vernier scale.

Sieve - a device with meshes or perforations through which finer particles of a mixture (such as ashes, flour, or sand) may be passed to separate them from coarser ones, through which liquid may be drained from liquid-containing material, or through which soft materials may be forced for reduction to fine particles.

Field Mill - an instrument that measures electric fields in the atmosphere, particularly near thunderstorm clouds.


Flux Meter - an instrument used for measuring the intensity of sunshine / solar radiation.

Graduated Cylinder - has a narrow cylindrical shape. Each marked line on the graduated cylinder represents the amount of liquid that has been measured.

Maximum and Minimum Thermometer (Six's Thermometer) - used to measure the maximum and minimum air temperature over a day.

Measuring Scales - a spring scale measures weight, the product of mass and gravitational acceleration (9.807 m/s²), from the force on a spring, whereas a balance or pair of scales using a balance beam compares masses by balancing the weight due to the mass of an object against the weight of one or more known masses.

Nephelometer - measures suspended particulates by employing a light beam and a light detector set to one side of the source beam.

Nephoscope - an instrument for measuring the altitude, direction, and velocity of clouds.

Navigational Compass - an instrument used for navigation and orientation that shows direction relative to the geographic cardinal directions; the directions north, south, east, and west appear on the compass face as abbreviated initials.

Pipette - a small piece of apparatus consisting of a narrow tube into which fluid is drawn by suction (as for dispensing or measurement) and retained by closing the upper end.

pH Meter - measures the hydrogen-ion activity in water-based solutions, indicating acidity or alkalinity expressed as pH.

Rain Gauge - a device used for measuring the amount of rainfall. It comprises a funnel with a five-inch diameter at the mouth, a container into which the funnel is fitted, and a metallic cylinder that holds both funnel and container.
Sound Meter - The diaphragm of the microphone responds to changes in air
pressure caused by sound waves. That is why the instrument is sometimes
referred to as a Sound Pressure Level Meter.

Sunshine Recorder - consists of a spherical glass mounted on a stand. When the sun's rays fall on the glass, a strip of prepared paper held in a groove at the focal length starts burning; when the sun does not shine, the paper is not burned, so from the burn marks the duration of sunshine can be recorded.

Seismometer - an instrument that measures motion of the ground caused by, for example, an earthquake, a volcanic eruption, or the use of explosives. Records of seismic waves allow seismologists to map the interior of the Earth and to locate and measure the size of such events.

Snow Gauge - an instrument used by meteorologists and hydrologists to gather and measure the amount of solid precipitation over a set period of time.

Wind Vane - consists essentially of a broad arrowhead mounted on ball bearings so that the arrow can move freely in the horizontal plane. The arrow indicates the direction of the wind.

Stream Gauge - stream gauges help predict flooding in addition to assisting water transportation.

Hail Pad - a standard hail pad consists of florist's foam and aluminum foil. Falling hail strikes the foil and creates dimples for the observer to measure after the storm.

Satellite - satellites with infrared cameras can measure water temperature to help determine the source of the water, as well as possible future weather patterns. Satellites that use microwaves can collect information about changes in ground height. Hydrologists can then deduce information about the moisture content of the soil, helping to predict drought patterns and thereby assist crop growth.

Rockets - the rocket was developed in Germany as a weapon of war; rockets later offered scientists the opportunity to carry scientific instrumentation on board.

Reference
https://www.scribd.com/document/394446342/HYDROMETEOROLOGICAL-INSTRUMENTS
STATISTICAL TREATMENT FOR HYDROLOGIC DATA

Introduction
Many hydrological processes exhibit substantial variability that cannot be explained by the laws of physics, chemistry, biology, or climatology alone, which means they are subject to chance; hence the importance of statistics in hydrology. There is substantial difficulty in explaining hydrologic variables like precipitation because of their inherent randomness, and because of the randomness of the hydrologic system in which the variable operates, such as the watershed. A second source of variability is sampling error: hydrologists must often predict from small samples of the population, because the data set is available for only a short period. Also, precipitation data, soil data, and infiltration values are collected from only a few points in the entire watershed, while these limited data sets are used to describe the desired characteristics of the whole watershed. As the number of data sets increases over the years, the accuracy of prediction also improves. However, statistics must go hand in hand with an understanding of the hydrological processes; only then will the study be robust. Though a great deal of literature on statistics and statistical methods exists, standard statistical texts do not explain their need and application in hydrology.
Basic concepts of statistics and probability in hydrology
In hydrology most data are observations rather than experiments, so once an event such as rainfall has occurred, the same event does not occur again; an extreme event like heavy rainfall or a flood does not recur in the same form. Thus, statistics and probability offer insights into the expected magnitude and variability of future observations. Statistics is a tool used to infer the properties of a population from the properties of a sample, while probability provides answers to the likelihood of an event's occurrence, provided the population characteristics are known.
Hypothesis testing
Once the hydrologist has established the characteristics of sample of
hydrologic data like stream flow (annual flood flow), aquifer flow or rainfall,
there are other causative relationships which need to be established. Certain
questions need to be answered like have the annual flood peaks increased
over time due to anthropogenic changes, does the groundwater in aquifer
meet drinking water quality standards, has the concentration of pollutant in
river water increased over time, does it increase only during a season? Such
practical questions that we face daily involve a causative agent which has to
be taken care of by the hydrologist. The causative agent can be river basin
development, change in land use cover in case of increase in annual flood
peaks, contamination of ground water due to agriculture return flow in case of
ground water contamination or presence of an industry which operates only in
certain seasons in case of the river water quality. These questions can be
translated into statistical hypotheses, like the following:
Null hypothesis H0, which is usually a hypothesis of no change. Instances of no change in water or hydrology are as follows:
The distribution of aquifer hydraulic conductivity is identical at two distant points in the same aquifer.
The concentration of pollutant in the river does not depend on flow.
Alternative hypothesis H1, which is the hypothesis of change: some departure is expected. For example, the distribution of hydraulic conductivity across two points in the same aquifer may differ due to certain causative factors, or the concentration of pollutant may be related to river flow during the season under consideration (as against the null hypothesis explained above).
One-sided test: a hypothesis test in which H1 is a departure from H0 in one direction only; for example, the hydraulic conductivity in the aquifer changes only from a point of higher gradient to one of lower gradient, or the concentration of pollutant increases only with an increase in flow.
Two-sided test: H1 is a departure from H0 in either direction, meaning that the hydraulic conductivity can change in either direction, or the concentration changes with both increases and decreases in river flow.
Hypothesis testing measures the strength of statistical evidence: does the evidence provide sufficient reason to retain the null hypothesis H0 or to accept the alternative hypothesis H1?
A test of hypothesis is carried out about the sample mean, the null hypothesis being that the population mean is equal to the sample mean, and the alternative hypothesis being that it is not.
Extreme rainfall events and floods are very complicated natural hydrologic processes. They occur when many parameters and variables combine, so analyzing them with a conventional model, such as the rational method for peak runoff in a catchment or the unit hydrograph method, does not yield good results. Some variables that are always involved are the catchment characteristics, rainfall intensity and duration, and antecedent conditions; each of these factors in turn depends on a host of other parameters. The statistical approach to hydrology is therefore used for the prediction of flood flows and rainfall events.
However, the use of statistics in hydrology is bound by the limitation that many hydrological processes cannot be reduced to formulae because of the variability between events. There are three major deterrents to statistics in hydrology: an inherent randomness in water-related events and thus in the variables, substantial sampling errors, and an incomplete understanding of the processes involved.
Furthermore, many hydrological data already collected show anomalies when put to statistical enquiry, often in the form of skewed distribution functions, lack of independence among variables, censoring due to natural events, or seasonal patterns. This can be attributed to the fact that while statistics is based on set formulae derived from repeated similarity in results from different experiments, in hydrology the method can only really be used to define expected outcomes, not for modeling, because of the nature of water events.
A defining feature of statistics is that it considers the characteristics of a sample taken from the population, often the median of the population under observation, where a population can be defined as a collection of objects whose measurable properties are of interest.
In defining the population, another problem often arises: that of sampling. While a population can sometimes be finite, so that the individual characteristics of that population can be discerned, usually the researcher is limited to a mere sample of the total population. It is then important to understand the individual characteristics of the sample first and then track their relation to the properties of the population.
This process can be simplified, however, as sampling is of four basic types.
The first is the idea of Random Sampling, where each part of the population
under study has equal chances of being selected.
This random sampling can also be used by dividing the population into
groups, and applying the method to each group thus formed; this is called
Stratified Random Sampling.
Converse to the random method, the Uniform Sampling method allows for a
strict rule to prevail on the sampling points, making them equally distant from
each other.
Fourth and finally, there is the Convenience Sampling method, where data are collected at the convenience of the experimenter. Usually the two forms of random sampling or a uniform sample are considered ideal, uniform sampling having the logistical advantage of minimizing serial dependence; stratified sampling is at the other extreme, used only when the groups thus formed show substantial variability.
The statistical approach to flood frequency analysis estimates the design flood from past stream flow data of maximum annual flood flow, which may be taken from direct observations or estimated by a suitable method. Frequency analysis is conducted using the available record of the maximum annual rainfall events of the region. The probability of occurrence of an event, in this case a flood event (the maximum flood discharge likely to occur in a year at a location), whose magnitude is equal to or greater than a certain magnitude X, is denoted by P. The return period T is defined as the inverse of P, i.e. T = 1/P.
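The reciprocal relationship between exceedance probability and return period can be sketched as a pair of one-line helpers (the 2% / 50-year figures below are illustrative, not from the text):

```python
# Return period T is the inverse of the annual exceedance probability P.
def return_period(p):
    """Return period (years) for annual exceedance probability p."""
    return 1.0 / p

def exceedance_probability(t):
    """Annual exceedance probability for a return period of t years."""
    return 1.0 / t

# A flood with a 2% chance of being equalled or exceeded in any year
# has a return period of 50 years.
print(return_period(0.02))   # 50.0
```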
In hydrology, estimating the magnitude of an event (storm or flood) corresponding to a given return period is of the utmost importance. This is done through statistical analysis of past records of flood and rainfall events to predict events of the future. Statistical studies use records of daily, monthly, and annual rainfall events and stream flows to estimate large storm events and flood flows. For extremely large events the past records often do not cover that range, so extrapolation techniques are used; however, the sample size may not be large enough to allow extrapolation and accurate prediction. Statistical tools and methods then help make such predictions with reasonable accuracy. In most situations the data are inadequate to determine the risk due to large flood peaks, rainfall events, pollutant loadings, and low flows.
Normal Distribution
The normal distribution is one of the most commonly used distributions because of its bell shape (Figure 1); it is a symmetrical distribution with a coefficient of skewness equal to zero. The normal distribution is used to study, for example, the average annual stream flow or the average annual pollutant loading in a stream. Its natural parameters are µ and σ².
Log normal distribution (two parameters)
This distribution is also seen in hydrology when hydrological variables act multiplicatively rather than additively. The frequency distribution of such variables is skewed, so the logarithm of the variables is considered, which follows a normal distribution. The parameters µ, σ, and x0 are called the scale, shape, and location parameters; x0 is usually equal to zero.

The Pearson Type III distribution is known as a three-parameter distribution; it is also called the gamma distribution with three parameters. Here µ, β, and ɣ are the location, scale, and shape parameters.

The most commonly used distribution for flood frequency events is the Gumbel distribution, which is the Extreme Value Type I (EV1) distribution.

In terms of the reduced variate z = (x − µ)/β, the probability density function becomes

f(z) = e^(−z − e^(−z))

The Gumbel distribution is alternatively simplified as shown below. The general equation of hydrologic frequency analysis for the Gumbel distribution, as per Chow (1998), is:

XT = x̄ + Kσ

where XT is the value of the variate X of a random hydrologic series with return period T, x̄ is the mean of the variate, K is the frequency factor, which depends on the return period T, and σ is the standard deviation of the variate.

K = (yT − ȳn) / Sn

The values of ȳn and Sn are selected from Gumbel's extreme value distribution depending on the sample size; when the number of years of data is very large (N ≥ 100), ȳn and Sn are 0.577 and 1.2825 respectively.

yT = −ln[ln(T/(T − 1))]

where the reduced variate yT is a function of the return period T and ln is the natural logarithm.
Solved example:
1. The monthly rainfall recorded in millimetres at a station for a period of twelve months is given below. Determine the mean rainfall, variance, standard deviation, coefficient of variation, and coefficient of skewness (taken from Ojha et al., 2008).

2. The mean annual flood of a river is 600 m³/s and the standard deviation of the annual flood time series is 150 m³/s. Determine the return period of a flood of magnitude 1000 m³/s occurring in the river. Use Gumbel's method and assume the sample size to be very large (taken from Engineering Hydrology by K. Subramanya, 2008).

Answer:

Using XT = x̄ + Kσ, with x̄ = 600 m³/s, σ = 150 m³/s, and XT = 1000 m³/s, substituting in the equation gives the frequency factor K = 2.667.

But K = (yT − ȳn)/Sn, and for N very large, ȳn and Sn are 0.577 and 1.2825 respectively.

Therefore 2.667 = (yT − 0.577)/1.2825, so yT = 3.997.

Since yT = −ln[ln(T/(T − 1))], solving for T gives T = 54.9 years.

So the return period of a flood of magnitude 1000 m³/s is 55 years.
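The solved example can be checked numerically. The sketch below inverts the reduced-variate relation via the Gumbel CDF (the exceedance probability is P = 1 − exp(−exp(−yT)), so T = 1/P), using the same large-sample constants as the example:

```python
import math

# Gumbel (EV1) estimate of the return period of a 1000 m^3/s flood,
# reproducing the solved example (mean 600 m^3/s, std dev 150 m^3/s,
# y_n = 0.577 and S_n = 1.2825 for a very large sample).
x_mean, sigma = 600.0, 150.0
x_t = 1000.0
y_n, s_n = 0.577, 1.2825

k = (x_t - x_mean) / sigma          # frequency factor K
y_t = y_n + k * s_n                 # reduced variate y_T

# Invert y_T = -ln(ln(T / (T - 1))):
# P = 1 - exp(-exp(-y_T)) and T = 1 / P.
p = 1 - math.exp(-math.exp(-y_t))
t = 1 / p
print(round(k, 3), round(y_t, 3), round(t, 1))
```

Running this gives K ≈ 2.667, yT ≈ 3.997, and T ≈ 54.9 years, matching the worked answer.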

REFERENCE
Ranjana Ray Chaudhuri (2019). https://ebooks.inflibnet.ac.in/esp05/chapter/statistical-analysis-of-hydrologic-data-hydrology-frequency-analysis/

CONCEPTS OF PROBABILITY

BASIC PROBABILITY AND PROBABILITY MODELS

Probability is the measure of the chance of occurrence of a particular event. The basic concepts of probability are widely used in hydrology and hydroclimatology because of their stochastic nature.
Extreme hydrologic processes can be considered random, with little or no correlation to adjacent processes (i.e. independent in time and space). Thus the output from a hydrologic process can be treated as stochastic (a non-deterministic process comprising predictable and random components).
Probabilistic and statistical methods are used to analyze stochastic processes and involve varying degrees of uncertainty.
The focus of probability and statistical methods is on the observations, not the physical process.
We will focus on two aspects of hydrology where the stochastic approach can be applied: rainfall and streamflow.

A probability model is a mathematical representation of a random phenomenon. It is defined by its sample space, events within the sample space, and probabilities associated with each event. The sample space S for a probability model is the set of all possible outcomes.
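A tiny probability model for a day's rainfall state can make these definitions concrete; the outcome labels and probabilities below are illustrative assumptions, not values from the text:

```python
# A minimal probability model for one day's rainfall state (illustrative
# probabilities): the sample space S is the set of outcomes, and the
# probabilities assigned over S must sum to 1.
model = {"dry": 0.70, "light rain": 0.25, "heavy rain": 0.05}

sample_space = set(model)        # S = {dry, light rain, heavy rain}
total = sum(model.values())      # must equal 1

# An event is any subset of S, e.g. "some rain falls".
p_rain = model["light rain"] + model["heavy rain"]
print(sample_space, total, p_rain)
```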

Graphical Analysis of Single Data Sets
1. Histograms
2. Stem and Leaf Diagrams
3. Quantile Plots
4. Boxplots
5. Probability Plots

Graphical Comparisons of Two or More Data Sets
1. Histograms
2. Boxplots
3. Probability Plots
4. Dot and Line Plots of Means, Standard Deviations
5. Q-Q Plots
Histograms
Histograms are familiar graphics, and their construction is detailed in numerous introductory texts on statistics. Bars are drawn whose height is the number or fraction of data falling into one of several categories or intervals. Histograms have one primary deficiency: their visual impression depends on the number of categories selected for the plot.

Stem and Leaf Diagrams
Stem and leaf diagrams are like histograms turned on their side, with data magnitudes to two significant digits presented rather than only bar heights.

Quantile Plots
Quantile plots visually portray the quantiles, or percentiles (the quantiles times 100), of the distribution of sample data.

Quantile plots have three advantages:
1. Arbitrary categories are not required.
2. All of the data are displayed, unlike a boxplot.
3. Every point has a distinct position, without overlap.
Boxplots
A very useful and concise graphical display for summarizing the distribution of a data set.

Boxplots provide visual summaries of:
1. the center of the data
2. the variation or spread
3. the skewness
4. the presence or absence of unusual values
Outliers
Outliers, observations whose values are quite different from the others in the data set, often cause concern or alarm. They should not. They are often dealt with by discarding them before describing the data or before some hypothesis test procedures. But outliers may be the most important points in the data set, and should be investigated further.

Outliers can have one of three causes:
1. a measurement or recording error;
2. an observation from a population not similar to that of most of the data, such as a flood caused by a dam break rather than by precipitation;
3. a rare event from a single population that is quite skewed.
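In the spirit of "investigate, don't discard", a common screen is the boxplot fence rule (values beyond 1.5 × IQR from the quartiles). This is a standard technique, not one named in the text, and the flood-peak data below are invented for illustration:

```python
from statistics import quantiles

# Flag (not discard) potential outliers with the usual boxplot rule:
# values beyond 1.5 * IQR from the quartiles. Data are illustrative
# annual flood peaks in m^3/s; the 2500 value might be a dam-break flood.
peaks = [310, 340, 295, 330, 360, 305, 2500, 325, 350, 315]

q1, _, q3 = quantiles(peaks, n=4)       # quartiles
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

flagged = [x for x in peaks if x < low or x > high]
print(flagged)   # flagged values deserve investigation, not deletion
```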

REFERENCE
https://www.scribd.com/presentation/435528470/CONCEPTS-OF-
PROBABILITY-AND-STATISTICS-HYDROLOGY-pptx

Probability Distribution of Hydrometeorological Data


Hydrometer Analysis
is used to find the particle-size distribution of soil for particle sizes smaller than 0.075 mm in diameter (passing the No. 200 sieve). It is based on the principle of sedimentation of soil grains in water. When a soil specimen is dispersed in water, the particles settle at different velocities, depending on their shape, size, and weight, and on the viscosity of the water. For simplicity, it is assumed that all soil particles are spheres and that the velocity of a soil particle can be expressed by Stokes' law, according to which

v = (ρs − ρw) D² / (18 η)

where:
v = velocity
ρs = density of soil solids
ρw = density of water
η = viscosity of water
D = diameter of the soil particle

D (mm) = K √(L (cm) / t (min))

where:
K = a constant depending on the test temperature and the specific gravity of the soil solids
L = effective depth
t = time

K = √(30 η / (Gs − 1))

The ASTM 152-H type of hydrometer is calibrated to a reading of 60 at a temperature of 20 °C for soil particles having Gs = 2.65. A hydrometer reading of, say, 30 at a given time of a test means that there are 30 g of soil solids (Gs = 2.65) in suspension per 1000 cc of soil-water mixture at a temperature of 20 °C, at the depth L where the specific gravity of the soil-water suspension is measured.
Table 2 (values of L for each hydrometer reading)

Table 3 (values of K for each test temperature and specific gravity of soil solids)

Hydrometer Reading Corrections:
In actual experimental work, some corrections to the observed hydrometer readings need to be applied. They are as follows:

1) Temperature correction (Ft) - hydrometers are standardized to a certain temperature; when one is used in a sample at any other temperature, the observed reading must be corrected. Table 1.1 gives the corrections at different temperatures.
2) Meniscus correction (Fm) - generally the upper level of the meniscus is taken as the reading during laboratory work (always positive).
3) Zero correction (Fz) - a dispersing agent is added to the soil-distilled water suspension for the experiment, which changes the zero reading. If the reading increases, the correction is negative; otherwise it is positive.

Table 1.1 (Temperature Correction Factors)

Temperature, °C    Ct      Temperature, °C    Ct
15                -1.10    23                 0.70
16                -0.90    24                 1.00
17                -0.70    25                 1.30
18                -0.50    26                 1.65
19                -0.30    27                 2.00
20                 0.00    28                 2.50
21                 0.20    29                 3.05
22                 0.40    30                 3.80

Table 2.1

PERCENT FINER:

%finer = (a × Rcp / Ws) × 100

where:
Rcp = corrected hydrometer reading, Rcp = R + Ft + Fz
a = correction for specific gravity, a = Gs (1.65) / ((Gs − 1) 2.65)
Ws = dry weight of soil
PARTICLE SIZE/DIAMETER:

D (mm) = K √(L (cm) / t (min))

Rcl = R + Fm

where:
D = corresponding diameter for the percent finer
Rcl = corrected reading for the effective length L

TOTAL PERCENT FINER:

Total %finer = %finer × (%passing No. 200) / 100

References:
Geotechnical Eng'g Hydrometer Analysis (youtube.com)
Frequency Analysis

6.1 Introduction

Frequency analysis is an aid in determining the design discharge and design rainfall. In addition, it can be used to calculate the frequency of other hydrologic (or even non-hydrologic) events. Because high discharges and rainfalls are comparatively infrequent, the selection of the design discharge can be based on the low frequency with which these high values are permitted to be exceeded. This frequency of exceedance, or the design frequency, is the risk that the designer is willing to accept. Of course, the smaller the risk, the more costly are the drainage works and structures, and the less often their full capacity will be reached. Accordingly, the design frequency should be realistic - neither too high nor too low.
The following methods of frequency analysis are discussed in this chapter:
- Counting of the number of occurrences in certain intervals;
- Ranking of the data in ascending or descending order;
- Application of theoretical frequency distributions.

6.2 Frequency Analysis by Intervals

The interval method is as follows:
- Select a number (k) of intervals (with serial number i, lower limit ai, upper limit bi) of a width suitable to the data series and the purpose of the analysis;
- Count the number (mi) of data (x) in each interval;
- Divide mi by the total number (n) of data in order to obtain the frequency (Fi) of data (x) in the i-th interval.

The frequency thus obtained is called the frequency of occurrence in a certain interval. In the literature, mi is often termed the frequency, and Fi is then the relative frequency; but in this chapter the term frequency refers to Fi.
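The interval counting procedure can be sketched directly; the rainfall values and interval limits below are illustrative, not the Table 6.1 data:

```python
# Counting frequencies by intervals, as in the procedure above.
# Rainfall values (mm) are illustrative, not the Table 6.1 data.
rain = [0, 0, 3, 7, 12, 18, 26, 31, 44, 58, 0, 9, 15, 22, 27, 36, 61, 5, 11, 48]

limits = [(0, 25), (25, 50), (50, 75)]   # interval (a_i, b_i) pairs
n = len(rain)

freq = []
for a, b in limits:
    m = sum(1 for x in rain if a <= x < b)   # m_i: count in interval i
    freq.append(m / n)                        # F_i = m_i / n

print(freq, sum(freq))   # the frequencies sum to unity
```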

The above procedure was applied to the daily rainfalls given in Table 6.1. The results are shown in Table 6.2, in Columns (1), (2), (3), (4), and (5). The data are the same data found in the previous edition of this book.

Column (5) gives the frequency distribution of the intervals. The bulk of the rainfall values is either 0 or some value in the 0-25 mm interval. Greater values, which are more relevant for the design capacity of drainage canals, were recorded on only a few days.
From the definition of frequency (Equation 6.1), it follows that the sum of all
frequencies equals unity
In hydrology, we are often interested in the frequency with which data exceed
a certain, usually high design value. We can obtain the frequency of
exceedance F(x > ai) of the lower limit ai of a depth interval i by counting the
number Mi of all rainfall values x exceeding ai, and by dividing this number by
the total number of rainfall data. This is shown in Table 6.2, Column (6). In
equation form, this appears as
Frequency distributions are often presented as the frequency of non-exceedance rather than the frequency of occurrence or of exceedance. The frequency of non-exceedance is also referred to as the cumulative frequency. We can obtain the frequency of non-exceedance F(x ≤ ai) of the lower limit ai by calculating the sum of the frequencies over the intervals below ai. Because the sum of the frequencies over all intervals equals unity, it follows that

The cumulative frequency (shown in Column (7) of Table 6.2) can, therefore,
be derived directly from the frequency of exceedance as

Columns (8) and (9) of Table 6.2 show return periods. The calculation of
these periods will be discussed later, in Section 6.2.4.

Censored Frequency Distributions

Instead of using all available data to make a frequency distribution, we can use only certain selected data. For example, if we are interested only in higher rainfall rates, for making drainage design calculations, we can make a frequency distribution of only the rainfalls that exceed a certain value. Conversely, if we are interested in water shortages, we can make a frequency distribution of only the rainfalls that are below a certain limit. These distributions are called censored frequency distributions.

In Table 6.3, a censored frequency distribution is presented of the daily rainfalls from Table 6.1 greater than 25 mm. It was calculated without intervals i = 1 and i = 2 of Table 6.2.

The remaining frequencies presented in Table 6.3 differ from those in Table 6.2 in that they are conditional frequencies (the condition in this case being that the rainfall is higher than 25 mm). To convert conditional frequencies to unconditional frequencies, the following relation is used:

F = (1 − F*) × F′

where
F = unconditional frequency (as in Table 6.2)
F′ = conditional frequency (as in Table 6.3)
F* = frequency of occurrence of the excluded events (as in Table 6.2)

As an example, we find in Column (7) of Table 6.3 that F′(x ≤ 50) = 0.641. Further, the cumulative frequency of the excluded data equals F*(x ≤ 25) = 0.932 (see Column (7) of Table 6.2). Hence, the unconditional frequency obtained from Equation 6.6 is
F(x ≤ 50) = (1 − 0.932) × 0.641 = 0.0439
This is exactly the value found in Column (5) of Table 6.2.
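The conversion is a one-liner; evaluating it with the rounded values quoted above gives about 0.0436, slightly below the figure in the text, the gap being attributable to rounding of the table values:

```python
# Converting a conditional (censored) frequency back to an
# unconditional one: F = (1 - F*) x F'.
def unconditional(f_excluded, f_conditional):
    """F* is the frequency of the excluded events, F' the conditional frequency."""
    return (1 - f_excluded) * f_conditional

# Using the rounded values quoted in the example above.
f = unconditional(0.932, 0.641)
print(round(f, 4))
```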

6.2.3 Frequency Analysis by Ranking of Data

Data for frequency analysis can be ranked in either ascending or descending order. For a ranking in descending order, the suggested procedure is as follows:
- Rank the total number of data (n) in descending order according to their value (x), the highest value first and the lowest value last;
- Assign a serial number (r) to each value x (xr, r = 1, 2, 3, ..., n), the highest value being x1 and the lowest being xn;
- Divide the rank (r) by the total number of observations plus 1 to obtain the frequency of exceedance

- Calculate the frequency of non-exceedance

If the ranking order is ascending instead of descending, we can obtain similar


relations by interchanging F(x > x,) and F(x I x,).
An advantage of using the denominator n + 1 instead of n (which was
used in Section 6.2.2) is that the results for ascending or descending ranking
orders will be identical.
Table 6.4 shows how the ranking procedure was applied to the monthly
rainfalls of Table 6.1. Table 6.5 shows how it was applied to the monthly
maximum 1-day rainfalls of Table 6.1. Both tables show the calculation of
return periods (Column 7), which will be discussed below in Section 6.2.4.
Both will be used again, in Section 6.4, to illustrate the application of
theoretical frequency distributions.
The estimates of the frequencies obtained from Equations 6.7 and 6.8 are
not unbiased. But then, neither are the other estimators found in literature. For
values of x close to the average value (TI), it makes little difference which
estimator is used, and the bias is small. For extreme values, however, the
difference, and the bias, can be relatively large. The reliability of the
predictions of extreme values is discussed in Section 6.2.5.

6.2.4 Recurrence Predictions and Return Periods


An observed frequency distribution can be regarded as a sample taken from a
frequency distribution with an infinitely long observation series (the
‘population’). If this sample is representative of the population, we can then
expect future observation periods to reveal frequency distributions similar to
the observed distribution. The expectation of similarity (‘representativeness’)
is what makes it possible to use the observed frequency distribution to
calculate recurrence estimates.

Representativeness implies the absence of a time trend. The detection of


possible time trends is discussed in Section 6.6.
It is a basic law of statistics that if conclusions about the population are
based on a sample, their reliability will increase as the size of the sample
increases. The smaller
* Tabulated for parametric distribution-fitting (see Section 6.4)

the frequency of occurrence of an event, the larger the sample will have to be
in order to make a prediction with a specified accuracy. For example, the
observed frequency ofdry days given in Table 6.2 (0.5, or 50%) will deviate
only slightly from the frequency observed during a later period of at least
equal length. The frequency of daily rainfalls of 75-100 mm (0.005, or 0.5%),
however, can be easily doubled (or halved) in the next period of record.
A quantitative evaluation of the reliability of frequency predictions follows
in the next section.

Recurrence estimates are often made in terms of return periods (T), T


being the number of new data that have to be collected, on average, to find a
certain rainfall value. The return period is calculated as T = 1/F, where F can
be any of the frequencies discussed in Equations 6.1,6.3, 6.5, and 6.6. For
example, in Table 6.2, the frequency F of 1-day November rainfalls in the
interval of 25-50 mm equals 0.04386, or 4.386%. Thus the return period is T =
I/F = 1/0.04386 = 23 November days.

In hydrology, it is very common to work with frequencies of exceedance of


the variable x over a reference value x,. The corresponding return period is
then
For example, in Table 6.2 the frequency of exceedance of 1 -day rainfalls of x,
= 1 O0 mm in November is F(x > 100) = 0.00526, or 0.526%. Thus the return
period is

In design, T is often expressed in years

As the higher daily rainfalls can generally be considered independent of each


other, and as there are 30 November days in one year, it follows from the
previous example that

This means that, on average, there will be a November day with rainfall
exceeding 100 mm once in 6.33 years.
If a censored frequency distribution is used (as it was in Table 6.3), it will also
be necessary to use the factor I-F* (as shown in Equation 6.6) to adjust
Equation 6.10
This produces

where T’ is the conditional return period (T’ = l/F’).


In Figure 6.1, the rainfalls of Tables 6.2, 6.4, and 6.5 have been plotted
against their respective return periods. Smooth curves have been drawn to fit
the respective points as well as possible. These curves can be considered
representative of average future frequencies. The advantages of the
smoothing procedure used are that it enables interpolation and that, to a
certain extent, it levels off random variation. Its disadvantage is that it may
suggest an accuracy of prediction that does not exist. It is therefore useful to
add confidence intervals for each of the curves in order to judge the extent of
the curve’s reliability. (This will be discussed in the following section.)
From Figure 6.1, it can be concluded that, if Tr is greater than 5, it makes
no significant difference if the frequency analysis is done on the basis of
intervals of all 1-day rainfalls or on the basis of maximum I-day rainfalls only.
This makes it possible to restrict the analysis to maximum rainfalls, which
simplifies the calculations and produces virtually the same results.
The frequency analysis discussed here is usually adequate to solve
problems related to agriculture. If there are approximately 20 years of
information available, predictions for 10-year return periods, made with the
methods described in this section, will be reasonably reliable, but predictions
for return periods of 20 years or more will be less reliable.
6.2.5 CONFIDENCE ANALYSIS
Figure 6.2 shows nine cumulative frequency distributions that were obtained
with the Ranking method. They are based on different samples, each
consisting of 50 Observations taken randomly from 1000 values. The values
obey a fixed distribution (the base line). It is clear that each sample reveals a
different distribution, sometimes Close to the base line, sometimes away from
it. Some of the lines are even curved, Although the base line is straight.

Figure 6.2 also shows that, to give an impression of the error in the prediction
of Future frequencies, frequency estimates based on one sample of limited
size should Be accompanied by confidence statements. Such an impression
can be obtained from Figure 6.3, which is based on the binomial distribution.
The figure illustrates the Principle of the nomograph. Using N = 50 years of
observation, we can see that the 90% confidence interval of a predicted 5-
year return period is 3.2 to 9 years. These values are obtained by the
following procedure:
- Enter the graph on the vertical axis with a return period of T, = 5, (point A),
and Move horizontally to intersect the baseline, with N = co, at point B;
Move horizontally to intersect the baseline, with N = co, at point B;
 Move vertically from the intersection point (B) and intersect the curves for
N = 50 to obtain points C and D;

 Move back horizontally from points C and D to the axis with the return
periods and read points E and F;
 The interval from E to Fis the 90% confidence interval ofA, hence it can be
predicted With 90% confidence that TI is between 3.2 and 9 years.
Nomographs for confidence Intervals other than 90% can be found in
literature (e.g. in Oosterbaan 1988).

By repeating the above procedure for other values of TI, we obtain a


confidence belt.
In theory, confidence belts are somewhat wider than those shown in the
graph. The reason for this is that mean values and standard deviations of the
applied binomial distributions have to be estimated from a data series of
limited length. Hence, the true means and standard deviations can be either
smaller or larger than the estimated ones. In practice, however, the exact
determination of confidence belts is not a primary concern because the error
made in estimating them is small compared to their width.

The confidence belts in Figure 6.3 show the predicted intervals for the
frequencies that can be expected during a very long future period with no
systematic changes in hydrologic conditions. For shorter future periods, the
confidence intervals are wider than indicated in the graphs. The same is true
when hydrologic conditions change.

6 Frequency Analysis

6.1 Introduction

Frequency analysis is an aid in determining the design discharge and design rainfall. In addition, it can be used to calculate the frequency of other hydrologic (or even non-hydrologic) events. Because high discharges and rainfalls are comparatively infrequent, the selection of the design discharge can be based on the low frequency with which these high values are permitted to be exceeded. This frequency of exceedance, or the design frequency, is the risk that the designer is willing to accept. Of course, the smaller the risk, the more costly are the drainage works and structures, and the less often their full capacity will be reached. Accordingly, the design frequency should be realistic - neither too high nor too low.

The following methods of frequency analysis are discussed in this chapter:
- Counting of the number of occurrences in certain intervals;
- Ranking of the data in ascending or descending order;
- Application of theoretical frequency distributions.

6.2 Frequency Analysis by Intervals

The interval method is as follows:
- Select a number (k) of intervals (with serial number i, lower limit a_i, and upper limit b_i) of a width suitable to the data series and the purpose of the analysis;
- Count the number (m_i) of data (x) in each interval;
- Divide m_i by the total number (n) of data to obtain the frequency of data (x) in the i-th interval

F_i = m_i / n        (6.1)

The frequency thus obtained is called the frequency of occurrence in a certain interval. In the literature, m_i is often termed the frequency, and F_i is then the relative frequency. But, in this chapter, the term frequency has been kept to refer to F_i.

The above procedure was applied to the daily rainfalls given in Table 6.1. The results are shown in Table 6.2, in Columns (1), (2), (3), (4), and (5). The data are the same as those found in the previous edition of this book.

Column (5) gives the frequency distribution of the intervals. The bulk of the rainfall values is either 0 or some value in the 0-25 mm interval. Greater values, which are more relevant for the design capacity of drainage canals, were recorded on only a few days.

From the definition of frequency (Equation 6.1), it follows that the sum of all frequencies equals unity

F_1 + F_2 + ... + F_k = 1        (6.2)

In hydrology, we are often interested in the frequency with which data exceed a certain, usually high, design value. We can obtain the frequency of exceedance F(x > a_i) of the lower limit a_i of a depth interval i by counting the number M_i of all rainfall values x exceeding a_i, and by dividing this number by the total number of rainfall data. This is shown in Table 6.2, Column (6). In equation form, this appears as

F(x > a_i) = M_i / n        (6.3)

Frequency distributions are often presented as the frequency of non-exceedance rather than as the frequency of occurrence or of exceedance. The frequency of non-exceedance is also referred to as the cumulative frequency. We can obtain the frequency of non-exceedance F(x ≤ a_i) of the lower limit a_i by calculating the sum of the frequencies over the intervals below a_i

F(x ≤ a_i) = F_1 + F_2 + ... + F_(i-1)        (6.4)

Because the sum of the frequencies over all intervals equals unity, the cumulative frequency (shown in Column (7) of Table 6.2) can, therefore, be derived directly from the frequency of exceedance as

F(x ≤ a_i) = 1 - F(x > a_i)        (6.5)

Columns (8) and (9) of Table 6.2 show return periods. The calculation of
these periods will be discussed later, in Section 6.2.4.
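The interval method above can be sketched in a few lines of Python. The rainfall values and interval limits below are invented for illustration; they are not the data of Table 6.1:

```python
# Frequency analysis by intervals: a minimal sketch of Eqs 6.1-6.5.
# The rainfall values and interval limits are illustrative only.
rainfalls = [0, 0, 3, 12, 28, 41, 7, 0, 55, 19, 0, 33]  # daily rainfalls (mm)
n = len(rainfalls)

# Frequency of occurrence per interval, F_i = m_i / n (Eq 6.1); the first
# "interval" holds the dry days (x = 0).
intervals = [(-1, 0), (0, 25), (25, 50), (50, 75)]      # (a_i, b_i], in mm
F = [sum(1 for x in rainfalls if a < x <= b) / n for a, b in intervals]
assert abs(sum(F) - 1.0) < 1e-12                        # Eq 6.2: sum is unity

def f_exceed(a):
    """Frequency of exceedance F(x > a), Eq 6.3."""
    return sum(1 for x in rainfalls if x > a) / n

def f_cum(a):
    """Cumulative frequency (non-exceedance) F(x <= a), Eq 6.5."""
    return 1 - f_exceed(a)

print(F)             # frequencies of occurrence per interval
print(f_exceed(25))  # fraction of days with rainfall above 25 mm
```

With real data, the same counts would reproduce Columns (5), (6), and (7) of a table like Table 6.2.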

6.2.2 Censored Frequency Distributions

Instead of using all available data to make a frequency distribution, we can use only certain selected data. For example, if we are interested only in higher rainfall rates, for making drainage design calculations, it is possible to make a frequency distribution of only the rainfalls that exceed a certain value. Conversely, if we are interested in water shortages, it is also possible to make a frequency distribution of only the rainfalls that are below a certain limit. These distributions are called censored frequency distributions.

In Table 6.3, a censored frequency distribution is presented of the daily rainfalls from Table 6.1 that are greater than 25 mm. It was calculated without intervals i = 1 and i = 2 of Table 6.2.

The remaining frequencies presented in Table 6.3 differ from those in Table 6.2 in that they are conditional frequencies (the condition in this case being that the rainfall is higher than 25 mm). To convert conditional frequencies to unconditional frequencies, the following relation is used

F = (1 - F*) F'        (6.6)

where
F = unconditional frequency (as in Table 6.2)
F' = conditional frequency (as in Table 6.3)
F* = frequency of occurrence of the excluded events (as in Table 6.2)

As an example, we find in Column (7) of Table 6.3 that F'(x ≤ 50) = 0.641. Further, the cumulative frequency of the excluded data equals F*(x ≤ 25) = 0.932 (see Column (7) of Table 6.2). Hence, the unconditional frequency obtained from Equation 6.6 is

F(x ≤ 50) = (1 - 0.932) × 0.641 = 0.0439

This is exactly the value found in Column (5) of Table 6.2.
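The conversion in Equation 6.6 can be checked numerically; the sketch below uses the rounded table values quoted above:

```python
# Converting a conditional (censored) frequency to an unconditional one
# with F = (1 - F*) F' (Eq 6.6), using the rounded values from the text.
f_star = 0.932    # F*: frequency of the excluded events (x <= 25 mm)
f_cond = 0.641    # F': conditional frequency F'(x <= 50), given x > 25 mm

f_uncond = (1 - f_star) * f_cond
# Because f_star and f_cond are themselves rounded, the product (about
# 0.0436) differs in the last digit from the tabulated 0.0439.
print(round(f_uncond, 4))
```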
6.2.3 Frequency Analysis by Ranking of Data

Data for frequency analysis can be ranked in either ascending or descending order. For a ranking in descending order, the suggested procedure is as follows:
- Rank the total number of data (n) in descending order according to their value (x), the highest value first and the lowest value last;
- Assign a serial number (r) to each value x (x_r, r = 1, 2, 3, ..., n), the highest value being x_1 and the lowest being x_n;
- Divide the rank (r) by the total number of observations plus 1 to obtain the frequency of exceedance

F(x > x_r) = r / (n + 1)        (6.7)

- Calculate the frequency of non-exceedance

F(x ≤ x_r) = 1 - F(x > x_r) = 1 - r / (n + 1)        (6.8)

If the ranking order is ascending instead of descending, we can obtain similar relations by interchanging F(x > x_r) and F(x ≤ x_r).
An advantage of using the denominator n + 1 instead of n (which was
used in Section 6.2.2) is that the results for ascending or descending ranking
orders will be identical.
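The ranking procedure and Equations 6.7 and 6.8 can be sketched as follows (the monthly totals are invented for illustration):

```python
# Ranking method with the n + 1 denominator (Eqs 6.7 and 6.8).
# The rainfall totals are illustrative only.
rainfalls = [48, 112, 75, 96, 30, 61, 84, 55, 70, 102]  # monthly totals (mm)
n = len(rainfalls)

ranked = sorted(rainfalls, reverse=True)                 # highest value first
f_exceed = {x: r / (n + 1) for r, x in enumerate(ranked, start=1)}  # Eq 6.7
f_non_exceed = {x: 1 - f for x, f in f_exceed.items()}              # Eq 6.8

for x in ranked:
    print(f"x = {x:3d} mm  F(exceed) = {f_exceed[x]:.3f}  "
          f"F(non-exceed) = {f_non_exceed[x]:.3f}")
```

Sorting in ascending order and using r/(n + 1) as the frequency of non-exceedance would give the same numbers, which is the advantage of the n + 1 denominator mentioned above.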
Table 6.4 shows how the ranking procedure was applied to the monthly
rainfalls of Table 6.1. Table 6.5 shows how it was applied to the monthly
maximum 1-day rainfalls of Table 6.1. Both tables show the calculation of
return periods (Column 7), which will be discussed below in Section 6.2.4.
Both will be used again, in Section 6.4, to illustrate the application of
theoretical frequency distributions.
The estimates of the frequencies obtained from Equations 6.7 and 6.8 are not unbiased. But then, neither are the other estimators found in the literature. For values of x close to the average value, it makes little difference which estimator is used, and the bias is small. For extreme values, however, the difference, and the bias, can be relatively large. The reliability of the predictions of extreme values is discussed in Section 6.2.5.

6.2.4 Recurrence Predictions and Return Periods


An observed frequency distribution can be regarded as a sample taken from a
frequency distribution with an infinitely long observation series (the
‘population’). If this sample is representative of the population, we can then
expect future observation periods to reveal frequency distributions similar to
the observed distribution. The expectation of similarity (‘representativeness’)
is what makes it possible to use the observed frequency distribution to
calculate recurrence estimates.

Representativeness implies the absence of a time trend. The detection of possible time trends is discussed in Section 6.6.
It is a basic law of statistics that if conclusions about the population are based on a sample, their reliability will increase as the size of the sample increases. The smaller the frequency of occurrence of an event, the larger the sample will have to be in order to make a prediction with a specified accuracy. For example, the observed frequency of dry days given in Table 6.2 (0.5, or 50%) will deviate only slightly from the frequency observed during a later period of at least equal length. The frequency of daily rainfalls of 75-100 mm (0.005, or 0.5%), however, can easily be doubled (or halved) in the next period of record.
A quantitative evaluation of the reliability of frequency predictions follows
in the next section.

Recurrence estimates are often made in terms of return periods (T), T being the number of new data that have to be collected, on average, to find a certain rainfall value. The return period is calculated as T = 1/F, where F can be any of the frequencies discussed in Equations 6.1, 6.3, 6.5, and 6.6. For example, in Table 6.2, the frequency F of 1-day November rainfalls in the interval of 25-50 mm equals 0.04386, or 4.386%. Thus the return period is T = 1/F = 1/0.04386 = 23 November days.

In hydrology, it is very common to work with the frequency of exceedance of the variable x over a reference value x_r. The corresponding return period is then

T = 1 / F(x > x_r)        (6.9)

For example, in Table 6.2 the frequency of exceedance of 1-day rainfalls of x_r = 100 mm in November is F(x > 100) = 0.00526, or 0.526%. Thus the return period is

T = 1/0.00526 = 190 November days

In design, T is often expressed in years

T (in years) = T / (number of independent data per year)        (6.10)

As the higher daily rainfalls can generally be considered independent of each other, and as there are 30 November days in one year, it follows from the previous example that

T = 190/30 = 6.33 years

This means that, on average, there will be a November day with rainfall exceeding 100 mm once in 6.33 years.
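The conversion from a frequency of exceedance to a return period, and from "November days" to years, can be sketched as follows, using the example from the text:

```python
# Return period from a frequency of exceedance, then converted from
# November days to years (30 November days per year).
f_exceed = 0.00526          # F(x > 100 mm) for 1-day November rainfalls
november_days_per_year = 30

t_days = 1 / f_exceed                       # return period in November days
t_years = t_days / november_days_per_year   # return period in years
print(round(t_days))        # about 190 November days
print(round(t_years, 1))    # about 6.3 years
```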
If a censored frequency distribution is used (as it was in Table 6.3), it will also be necessary to use the factor 1 - F* (as shown in Equation 6.6) to adjust Equation 6.10. This produces

T = 1 / ((1 - F*) F') = T' / (1 - F*)        (6.11)

where T' is the conditional return period (T' = 1/F').


In Figure 6.1, the rainfalls of Tables 6.2, 6.4, and 6.5 have been plotted
against their respective return periods. Smooth curves have been drawn to fit
the respective points as well as possible. These curves can be considered
representative of average future frequencies. The advantages of the
smoothing procedure used are that it enables interpolation and that, to a
certain extent, it levels off random variation. Its disadvantage is that it may
suggest an accuracy of prediction that does not exist. It is therefore useful to
add confidence intervals for each of the curves in order to judge the extent of
the curve’s reliability. (This will be discussed in the following section.)
From Figure 6.1, it can be concluded that, if the return period is greater than 5, it makes no significant difference whether the frequency analysis is done on the basis of intervals of all 1-day rainfalls or on the basis of maximum 1-day rainfalls only. This makes it possible to restrict the analysis to maximum rainfalls, which simplifies the calculations and produces virtually the same results.
The frequency analysis discussed here is usually adequate to solve
problems related to agriculture. If there are approximately 20 years of
information available, predictions for 10-year return periods, made with the
methods described in this section, will be reasonably reliable, but predictions
for return periods of 20 years or more will be less reliable.
6.2.5 Confidence Analysis
Figure 6.2 shows nine cumulative frequency distributions that were obtained with the ranking method. They are based on different samples, each consisting of 50 observations taken randomly from 1000 values. The values obey a fixed distribution (the base line). It is clear that each sample reveals a different distribution, sometimes close to the base line, sometimes away from it. Some of the lines are even curved, although the base line is straight.

Figure 6.2 also shows that, to give an impression of the error in the prediction of future frequencies, frequency estimates based on one sample of limited size should be accompanied by confidence statements. Such an impression can be obtained from Figure 6.3, which is based on the binomial distribution. The figure illustrates the principle of the nomograph. Using N = 50 years of observation, we can see that the 90% confidence interval of a predicted 5-year return period is 3.2 to 9 years. These values are obtained by the following procedure:
- Enter the graph on the vertical axis with a return period of T_r = 5 (point A), and move horizontally to intersect the base line, with N = ∞, at point B;
- Move vertically from the intersection point (B) and intersect the curves for N = 50 to obtain points C and D;
- Move back horizontally from points C and D to the axis with the return periods and read points E and F;
- The interval from E to F is the 90% confidence interval of A; hence it can be predicted with 90% confidence that T_r is between 3.2 and 9 years.

Nomographs for confidence intervals other than 90% can be found in the literature (e.g. in Oosterbaan 1988).
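The width of such a confidence interval can also be illustrated numerically. The sketch below is a simple Monte Carlo analogue of the nomograph reading, not the binomial method of Figure 6.3 itself: it draws many samples of N = 50 values from a known distribution and records how much the estimated return period of an event with a true return period of 5 years varies from sample to sample. All numbers are illustrative.

```python
import random

random.seed(1)

N = 50                        # years of observation per sample
T_TRUE = 5.0                  # true return period of the reference event
THRESHOLD = 1 - 1 / T_TRUE    # uniform(0, 1) values exceed this with F = 1/T

estimates = []
for _ in range(2000):
    sample = [random.random() for _ in range(N)]
    m = sum(1 for x in sample if x > THRESHOLD)   # observed exceedances
    if m == 0:
        continue                                  # event not seen in this sample
    estimates.append(N / m)                       # estimated return period

estimates.sort()
lo = estimates[int(0.05 * len(estimates))]        # ~5th percentile
hi = estimates[int(0.95 * len(estimates))]        # ~95th percentile
print(f"90% of samples give T between about {lo:.1f} and {hi:.1f} years")
```

With these settings the interval comes out roughly comparable to the 3.2 to 9 years read from the nomograph, illustrating how uncertain a 5-year return period estimated from 50 observations is.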

By repeating the above procedure for other values of T_r, we obtain a confidence belt.
In theory, confidence belts are somewhat wider than those shown in the
graph. The reason for this is that mean values and standard deviations of the
applied binomial distributions have to be estimated from a data series of
limited length. Hence, the true means and standard deviations can be either
smaller or larger than the estimated ones. In practice, however, the exact
determination of confidence belts is not a primary concern because the error
made in estimating them is small compared to their width.

The confidence belts in Figure 6.3 show the predicted intervals for the
frequencies that can be expected during a very long future period with no
systematic changes in hydrologic conditions. For shorter future periods, the
confidence intervals are wider than indicated in the graphs. The same is true
when hydrologic conditions change.

Reference
Oosterbaan, R.J. n.d. Frequency and Regression Analysis, pp. 175-186.
