HEC-HMS Technical Reference Manual
Introduction
The Hydrologic Modeling System is designed to simulate the precipitation-runoff processes of dendritic
watershed systems. It is designed to be applicable in a wide range of geographic areas for solving the widest
possible range of problems. This includes large river basin water supply and flood hydrology, and small
urban or natural watershed runoff. Hydrographs produced by the program are used directly or in conjunction
with other software for studies of water availability, urban drainage, flow forecasting, future urbanization
impact, reservoir spillway design, flood damage reduction, floodplain regulation, and systems operation.
Program Overview
For precipitation-runoff-routing simulation, the program provides the following components:
• Precipitation methods which can describe an observed (historical) precipitation event, a frequency-
based hypothetical precipitation event, or an event that represents the upper limit of precipitation
possible at a given location.
• Snow melt methods which can partition precipitation into rainfall and snowfall and then account for
accumulation and melt of the snowpack. When a snow method is not used, all precipitation is
assumed to be rain.
• Evapotranspiration methods which are used in continuous simulation for computing the amount of
infiltrated soil water that is returned to the atmosphere through evaporation and plant use.
• Loss methods which can estimate the amount of precipitation that infiltrates from the land surface
into the soil. By implication, the precipitation that does not infiltrate becomes surface runoff.
• Direct runoff methods that describe overland flow, storage, and energy losses as water runs off a
watershed and into the stream channels. These are generally called transform methods because they
"transform" uninfiltrated precipitation into watershed outflow.
• Hydrologic routing methods that account for storage and energy flux as water moves through stream
channels.
• A distributed transform model for use with distributed precipitation data, such as the data available
from weather radar.
• Simple one-layer and more complex five-layer soil-moisture-accounting models for use in continuous
simulation. They can be used to simulate the long-term response of a watershed to wetting and
drying.
The program also includes a number of tools to help process parameter data and computed results,
including:
• An automatic calibration tool that can be used to estimate parameter values and initial conditions for
most methods, given observations of hydrometeorological conditions.
• An analysis tool to assist in developing frequency curves throughout a watershed on the basis of
storms with an associated exceedance probability.
Links to a database management system are also included; the database permits data storage, retrieval,
and connectivity with other analysis tools available from HEC and other sources.
• The Hydrologic Modeling System HEC-HMS User's Manual (USACE, 2000) describes how to use the
computer program. While the user's manual identifies the models that are included in the program, its
focus is the program's user interface. Thus, the user's manual provides a description of how to use
the interface to provide data, to specify model parameters, to execute the program, and to review the
results. It provides examples of all of these tasks.
• The Hydrologic Modeling System HEC-HMS Applications Guide (USACE, 2002) describes how to
apply the program to complete a hydrology study. A number of different types of studies are
described, including typical goals, required information, and needed output data. The steps of
performing the study are illustrated with a case study.
The user's manual and the HEC-HMS program are available on the Hydrologic Engineering Center's web site.
The address is www.hec.usace.army.mil.
References
US Army Corps of Engineers, USACE (1998) HEC-1 flood hydrograph package user's manual. Hydrologic
Engineering Center, Davis, CA.
USACE (2000) Hydrologic Modeling System HEC-HMS User's Manual. Hydrologic Engineering Center, Davis,
CA.
USACE (2002) Hydrologic Modeling System HEC-HMS Applications Guide. Hydrologic Engineering Center,
Davis, CA.
CHAPTER 2
Primer on Models
This chapter explains basic concepts of modeling and the most important properties of models. It also
defines essential terms used throughout this technical reference manual.
What is a Model?
Hydrologic engineers are called upon to provide information for a variety of water resource studies. To do so, they rely on models, which have been defined as:
…simplified systems that are used to represent real-life systems and may be substitutes of the real
systems for certain purposes. The models express formalized concepts of the real systems. (Diskin,
1970)
…a symbolic, usually mathematical representation of an idealized situation that has the important
structural properties of the real system. A theoretical model includes a set of general laws or theoretical
principles and a set of statements of empirical circumstances. An empirical model omits the general laws
and is in reality a representation of the data. (Woolhiser and Brakensiek, 1982)
Researchers have also developed analog models that represent the flow of water with the flow of electricity
in a circuit. With those models, the input is controlled by adjusting the amperage, and the output is measured
with a voltmeter. Historically, analog models have been used to calculate subsurface flow.
The HEC-HMS program includes models in a third category—mathematical models. In this manual, that term
defines an equation or a set of equations that represent the response of a hydrologic system component to a
change in hydrometeorological conditions. Table 2 shows some other definitions of mathematical models;
each of these applies to the models included in the program.
Mathematical models, including those that are included in the program, can be classified using a number of
different criteria. These focus on the mechanics of the model: how it deals with time, how it addresses
randomness, and so on. While knowledge of this classification is not necessary to use the program, it is
helpful in deciding which of the models to use for various applications. For example, if the goal is to create a
model for predicting runoff from an ungaged watershed, the fitted-parameter models included in the
program that require unavailable data are a poor choice. For long-term runoff forecasting, use a continuous
model, rather than a single-event model; the former will account for system changes between rainfall events,
while the latter will not.
Event or Continuous
This distinction applies primarily to models of infiltration, surface runoff, and baseflow. An event model
simulates a single storm. The duration of the storm may range from a few hours to a few days. The key
identifying feature is that the model is only capable of representing watershed response during and
immediately after a storm. Event infiltration models do not include redistribution of the wetting front between storm events.
A continuous model simulates a longer period, ranging from several days to many years. In order to do so, it
must be capable of predicting watershed response both during and between precipitation events. For
infiltration models, this requires consideration of the drying processes that occur in the soil between
precipitation events. Surface runoff models must be able to account for dry surface conditions with no
runoff, wet surface conditions that produce runoff during and after a storm, and the transition between the
two states. Baseflow methods become increasingly important in continuous simulation because the vast
majority of the hydrograph is defined by inter-storm flow characteristics. Most of the models included in
HEC-HMS are event models.
Spatially-Averaged or Distributed
This distinction applies mostly to models of infiltration and surface runoff. A distributed model is one in
which the spatial (geographic) variations of characteristics and processes are considered explicitly, while in
a spatially-averaged model, these spatial variations are averaged or ignored. While not always true, it is often
the case that distributed models represent the watershed as a set of grid cells. Calculations are carried out
separately for each grid cell. Depending on the complexity of the model, a grid cell may interact with its
neighbor cells by exchanging water either above or below the ground surface.
It is important to note that even distributed models perform spatial averaging. As we will see later in detail,
most of the models included in HEC-HMS are based on differential equations. These equations are written at
the so-called point scale. By point scale we mean that the equation applies over a length ∆x that is very small
(differential) compared to the size of the watershed. In a spatially-averaged model, the equation is assumed
to apply at the scale of a subbasin. Conversely, in a distributed model the equation is typically assumed to
apply at the scale of a grid cell. Therefore it is accurate to say that distributed models also perform spatial
averaging but generally do so over a much smaller scale than typical spatially-averaged models. HEC-HMS
includes primarily spatially-averaged models.
Empirical or Conceptual
This distinction focuses on the knowledge base upon which the mathematical models are built. A conceptual
model is built upon a base of knowledge of the pertinent physical, chemical, and biological processes that
act on the input to produce the output. Many conceptual models are said to be based on "first principles."
This usually means that a control volume is established and equations for the conservation of mass and
either momentum or energy are written for the control volume. Conservation is a basic principle of physics
that cannot be broken. Through the writing of the equations, a model of the process will emerge. In other
cases, conceptual models are developed through a mechanistic view instead of first principles. A
mechanistic view attempts to represent the dynamics of a process explicitly. For example, water has been
observed to move through soil in very predictable ways. A mechanistic view attempts to determine what
processes cause water to move as it is observed. If the processes can be described by one or more
mathematical equations, then a model can be developed to directly describe the observed behavior.
An empirical model, on the other hand, is built upon observation of input and output, without seeking to
represent explicitly the process of conversion. These types of models are sometimes called "black box"
models because they convert input to output without any details of the actual physical process involved. A
common way to develop empirical models is to collect field data with observations of input and resulting
output. The data is analyzed statistically and a mathematical relationship is sought between input and
output. Once the relationship is established, output can be predicted for an observed input. For example,
observations of inflow to a river reach and resulting flow at a downstream location could be used to develop an empirical relationship that predicts the downstream flow from the inflow.
Deterministic or Stochastic
A deterministic model assumes that the input is exactly known. Further, it assumes that the process
described by the model is free from random variation. In reality there is always some variation. For example,
you could collect a large sample of soil in the field and take it into a laboratory. Next you could divide the
large sample into 10 equal small samples and estimate the porosity of each one. You would find a slightly
different value for the porosity of each small sample even though the large sample was collected from a
single hole dug in the field. This is one example of natural variation in model input. Process variation is
somewhat different. Suppose a flood with a specific peak flow enters a section of river. The flood will move
down through the reach and the resulting outflow hydrograph will show evidence of translation and
attenuation. However, the bed of the river is constantly moving in response to both floods and inter-flood
channel flows. The movement of the bed means that the exact same flood with the same specific peak flow
could happen again, but the outflow hydrograph would be slightly different. While you might try to describe
the reach carefully enough to eliminate the natural variation in the process, it is not practically possible to do
so.
Deterministic models essentially ignore variation in input by assuming fixed input. The input may be changed
for different scenarios or historical periods, but the input still takes on a single value. Such an assumption
may seem too significant for the resulting model to produce meaningful results. However, deterministic
models nevertheless are valuable tools because of the difficulty of characterizing watersheds and the
hydrologic environment in the first place. Stochastic models, on the other hand, embrace random variation by
attempting to explicitly describe it. For example, many floods in a particular river reach may be examined to
determine the bed slope during each flood. Given enough floods to examine, you could estimate the mean
bed slope, its standard deviation, and perhaps infer a complete probability distribution. Instead of using a
single input like deterministic models, stochastic models include the statistics of variation both of the input
and process. All models included in HEC-HMS are deterministic.
Measured-Parameter or Fitted-Parameter
This distinction between measured and fitted parameters is critical in selecting models for application when
observations of input and output are unavailable. A measured-parameter model is one in which model
parameters can be determined from system properties, either by direct measurement or by indirect methods
that are based upon the measurements. The Green and Ampt infiltration model is an example of a measured
parameter model. It includes hydraulic conductivity and wetting front suction as parameters. Both
parameters can be measured directly using appropriate instruments embedded in the soil during a wetting-
drying cycle. Many other parameters used in infiltration models can be reliably estimated if the soil texture is
known; texture can be determined by direct visual examination of the soil.
A fitted-parameter model, on the other hand, includes parameters that cannot be measured. Instead, the
parameters must be found by fitting the model with observed values of the input and the output. The
Muskingum routing model is an example of a fitted parameter model. The K parameter can be directly
estimated as the travel time of the reach. However, the X parameter is a qualitative estimate of the amount
of attenuation in the reach. Low values of X indicate significant attenuation while high values indicate pure
translation. The only way to estimate the value of X for a particular reach is to examine the upstream
hydrograph and the resulting outflow hydrograph. HEC-HMS includes both measured-parameter models and
fitted-parameter models.
State Variables
These terms in the model's equations represent the state of the hydrologic system at a particular time and
location. For example, the deficit and constant-rate loss model that is described in Chapter 5 tracks the
mean volume of water in natural storage in the watershed. This volume is represented by a state variable in
the deficit and constant-rate loss model's equations. Likewise, in the detention model of Chapter 10, the
pond storage at any time is a state variable; the variable describes the state of the engineered storage
system.
Parameters
These are numerical measures of the properties of the real-world system. They control the relationship of the
system input to system output. An example of this is the curve number that is a constituent of the SCS curve
number runoff model described in Chapter 5. This parameter, a single number specified when using the
model, represents complex properties of the real-world soil system. If the number increases, the computed
runoff volume will increase. If the number decreases, the runoff volume will decrease.
Parameters can be considered the "tuning knobs" of a model. The parameter values are adjusted so that the
model accurately predicts the physical system response. For example, the Snyder unit hydrograph model has
two parameters, the basin lag, tp, and peaking coefficient, Cp. The values of these parameters can be
adjusted to "fit" the model to a particular physical system. Adjusting the values is referred to as calibration.
Calibration is discussed in Chapter 9.
Parameters may have obvious physical significance, or they may be purely empirical. For example, the
Muskingum-Cunge channel model includes the channel slope, a physically significant, measurable
parameter. On the other hand, the Snyder unit hydrograph model has a peaking coefficient, Cp. This
parameter has no direct relationship to any physical property; it can only be estimated by calibration.
Boundary Conditions
These are the values of the system input—the forces that act on the hydrologic system and cause it to
change. The most common boundary condition in the program is precipitation; applying this boundary
condition causes runoff from a watershed. Another example is the upstream (inflow) flow hydrograph to a
channel reach; this is the boundary condition for a routing model.
Initial Conditions
All models included in the program are unsteady-flow models; that is, they describe changes in flow over
time. They do so by solving, in some form, differential equations that describe a component of the hydrologic
system. Solving differential equations that involve time always requires knowledge about the state of the
system at the beginning of the simulation.
The solution of any differential equation is a report of how much the output changes with respect to changes
in the input, the parameters, and other critical variables in the modeled process. For example, the solution of
the routing equations will tell us the value of ∆Q/∆t, the rate of change of flow with respect to time. But in
using the models for planning, designing, operating, responding, or regulating, the flow values at various
times are needed, not just the rate of change. Given an initial value of flow, Q at some time t, in addition to
the rate of change, the required values are computed using the following equation in a recursive fashion:

Q(t + Δt) = Q(t) + (ΔQ/Δt) Δt

The initial value of Q is the initial condition that must be known before the simulation can begin.
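The recursive bookkeeping can be sketched in a few lines of code. The fragment below is only an illustration, not program code from HEC-HMS; the linear-reservoir rate function used to supply ΔQ/Δt is an assumed example so the sketch is runnable.

# Illustrative sketch: advance flow from an initial condition using
# Q(t + dt) = Q(t) + (dQ/dt) * dt. The rate function is an assumed
# linear reservoir, used only to make the example self-contained.
def route_from_initial_condition(q_initial, dt_hours, n_steps,
                                 inflow=50.0, storage_coefficient=10.0):
    q = q_initial                      # initial condition
    flows = [q]
    for _ in range(n_steps):
        dq_dt = (inflow - q) / storage_coefficient
        q = q + dq_dt * dt_hours       # recursive update
        flows.append(q)
    return flows

print(route_from_initial_condition(q_initial=5.0, dt_hours=1.0, n_steps=6))

Each new flow value depends on the previous one, which is why the state of the system at the beginning of the simulation must be supplied.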
Method
As noted above, a mathematical model is the equations that represent the behavior of hydrologic system
components. This manual uses the term method in this context. For example, the Muskingum-Cunge channel
routing method described in Chapter 8 encapsulates equations for continuity and momentum to form a
mathematical model of open-channel flow for routing. All of the details of the equations, initial conditions,
state variables, boundary conditions, and technique of solving the equations are contained within the
method.
Input
When the equations of a mathematical model are solved with site-specific conditions and parameters, the
equations describe the processes and predict what will happen within a particular watershed or hydrologic
system. In this manual, this is referred to as an application of the model. In using a program to solve the
equations, input to the program is necessary. The input encapsulates the site-specific conditions and
parameters. With HEC-HMS, the information is supplied by completing forms in the graphical user interface.
The input may also include time-series data, paired data functions, or grid data from an HEC-DSS database
(USACE, 1995).
Program
If the equations of a mathematical model are too numerous or too complex to solve with pencil, paper, and
calculator, they can be translated into computer code. Techniques from a branch of mathematics called
numerical analysis are used to solve the equations within the constraints of performing calculations with a
computer. The result is a computer program. The term model is often applied to a computer program
because the particular program only solves one mathematical model. However, HEC-HMS includes a variety
of methods for modeling hydrologic components. Thus it does not make sense to call it a model; it is a
computer program.
Programs may be classified broadly as those developed for a specific set of parameters, boundary
conditions or initial conditions, and those that are data-driven. Programs in the first category are "hard wired"
to represent the system of interest. To change parameters, boundary conditions or initial conditions, the
program itself must be modified and recompiled. Data-driven programs, such as HEC-HMS, instead accept this
information as input, so the same program can be applied to many different watersheds and studies.
References
Diskin, M.H. (1970). "Research approach to watershed modeling, definition of terms." ARS and SCS
watershed modeling workshop, Tucson, AZ.
Ford, D.T., and Hamilton, D. (1996). "Computer models for water-excess management." Larry W. Mays ed.,
Water resources handbook, McGraw-Hill, NY.
Meta Systems (1971). Systems analysis in water resources planning. Water Information Center, NY.
Overton, D.E., and Meadows, M.E. (1976). Stormwater modeling. Academic Press, NY.
USACE (1995) HEC-DSS user's guide and utility manuals. Hydrologic Engineering Center, Davis, CA.
Woolhiser, D.A., and Brakensiek, D.L. (1982) "Hydrologic system synthesis." Hydrologic modeling of small
watersheds, American Society of Agricultural Engineers, St. Joseph, MO.
CHAPTER 3
Program Components
This chapter describes how the methods included in the program conceptually represent watershed
behavior. It also identifies and categorizes these methods on the basis of the underlying mathematical
models.
Watershed Processes
Figure 1 is a systems diagram of the watershed runoff process, at a scale that is consistent with the scale
modeled well with the program. The processes illustrated begin with precipitation. The precipitation may be
rainfall or could optionally include snowfall as well. In the simple conceptualization shown, the precipitation
can fall on the watershed's vegetation, land surface, and water bodies such as streams and lakes.
Figure 1. Systems diagram of the runoff process at local scale (after Ward, 1975).
• Methods that process precipitation, snow, and potential evapotranspiration meteorologic data.
• Methods that represent direct runoff, including overland flow and interflow.
Model                                Categorization
Initial and constant                 event, spatially averaged, conceptual, fitted and measured parameter
Soil moisture accounting (SMA)       continuous, spatially averaged, conceptual, fitted and measured parameter
User-specified unit hydrograph (UH)  event, spatially averaged, empirical, fitted parameter
3. Define the physical characteristics of the watershed by creating and editing a basin model.
6. Create a simulation by combining a basin model, meteorologic model, and control specifications and
view results.
Figure 4. Subbasin component editor including data for loss, transform, and baseflow methods. Area is required.
Reach: The reach is used to convey streamflow in the basin model. Inflow to the reach can come from one or many upstream elements. Outflow from the reach is calculated by accounting for translation and attenuation. Channel losses can optionally be included in the routing.
Source: The source element is used to introduce flow into the basin model. The source element has no inflow. Outflow from the source element is defined by the user.
Sink: The sink is used to represent the outlet of the physical watershed. Inflow to the sink can come from one or many upstream elements. There is no outflow from the sink.
Diversion: The diversion is used for modeling streamflow leaving the main channel. Inflow to the diversion can come from one or many upstream elements. Outflow from the diversion element consists of diverted flow and non-diverted flow. Diverted flow is calculated using input from the user. Both diverted and non-diverted flows can be connected to hydrologic elements downstream of the diversion element.
Frequency Storm: Used to develop a precipitation event where depths for various durations within the storm have a consistent exceedance probability.
SCS Storm: Applies a user-specified SCS time distribution to a 24-hour total storm depth.
References
USACE (2005) HEC-HMS user's manual. Hydrologic Engineering Center, Davis, CA.
Ward, R.C. (1975) Principles of hydrology. McGraw-Hill Book Company (UK) Limited, London.
CHAPTER 4
Subbasin Characteristics
Reach Characteristics
GIS References
Meteorology
General
This chapter describes how meteorology information is entered into the program using a Meteorologic
Model. The Meteorologic Model is responsible for preparing the boundary conditions that act on the
watershed during a simulation. Consequently, a Meteorologic Model may be used with one or more basin
models. The model can be configured to represent numerous meteorological processes, including
precipitation, temperature, short and longwave radiation, and evapotranspiration. The program provides
three options for each type of model process:
• Specified Gage Methods. These methods assign a discrete time-series to a known gage location.
The time-series may be historical or hypothetical. Typically for lumped modeling, a single gage that
provides a representative basin average is assigned to one or more subbasin elements.
• Gridded Methods. These methods utilize gridded data to allow for semi-distributed modeling.
Gridded data inherently contain the spatial and temporal distribution of time-series data that allow for
reduced assumptions; this is typically the recommended method when possible. Gridded methods
are limited by the availability of data, particularly for models where fine resolution is needed or for
older historical events. The availability of gridded data is continually improving as this has been an
active area of research and development.
• Interpolated Methods. These methods leverage multiple point gage data to infer a spatial and
temporal pattern. The time-series data for point gages that are distributed across the model domain
are interpolated to better represent spatial variability. The interpolation is performed based on user-
specified controls such as radius of influence. This method combines concepts from the Specified
Gage and Gridded methods.
Precipitation
In watershed hydrology, the response of a watershed is driven by precipitation that falls on the watershed
and evapotranspiration from the watershed. The precipitation may be observed as rainfall from a historical
event, it may be a frequency-based hypothetical rainfall event, or it may be an event that represents the upper
limit of precipitation that is possible at a given location at a specific time. Historical precipitation data are
useful for calibration and verification of model parameters, for real-time forecasting, and for evaluating the performance of proposed designs or regulations.
Mechanisms of Precipitation
While all precipitation involves some form of water falling from the atmosphere to the land surface,
precipitation forms for a variety of reasons. Two principal mechanisms of precipitation are coalescence and
cooling.
Coalescence
Under the coalescence mechanism, a water droplet forms around a nucleus when the temperature is below the
dew point. The nucleus could be a dust particle, carbon dioxide, salt particle, or any other airborne non-water
particle. As the amount of water coalesced in the droplet increases, the droplet falls at an increased velocity.
The droplet will break apart when its diameter reaches approximately 7 mm. The pieces of the broken droplet
can then form the nuclei of more droplets. Depending on wind conditions in the atmosphere, a droplet may
grow and break apart many times before it finally reaches the ground.
Cooling
Under the cooling mechanism, precipitation occurs when the amount of moisture in the atmosphere exceeds
the saturation capacity of air. Warm air can hold more water than cold air. If warm, moist air is cooled
sufficiently, water in excess of the saturation capacity will fall as precipitation. Adiabatic cooling occurs
when an air mass at a low elevation is lifted to a higher elevation. Frontal cooling happens along the border
between a warm weather front and a cold front. Contact cooling is the result of warm air blowing across a
cold lake. Finally, radiation cooling occurs when air is heated during the day and absorbs evaporated water,
but then cools during the night. Any of the various cooling processes can lead to precipitation. Certain
cooling processes may be more likely at some times of the year, and not all processes occur over every
watershed.
Types of Precipitation
There are three different types of precipitation that are classified by the producing mechanism. The different
types are closely related to weather patterns.
Convective
Convective precipitation occurs when warm, moist air rises in the atmosphere. Pressure decreases as
elevation increases, which causes the temperature to fall. If the moist air mass rises to a sufficiently high
elevation, precipitation will condense and fall. The tremendous energy associated with convection processes
often leads to very intense precipitation rates. However, a convective storm usually has a small area and a
short duration. Summer thunderstorms are the principal example of this type of precipitation.
Cyclonic
Cyclonic precipitation occurs when warm, moist air is drawn into a low-pressure cold front. The warm air
rises as it is drawn into the low-pressure zone and is subjected to adiabatic cooling. The intensity of the
precipitation is determined by the magnitude of the low-pressure system and the presence of a warm and
moist air mass. Cyclonic storms tend to be large and have a light- to medium-intensity precipitation rate.
Measuring Precipitation
Accurately measuring precipitation is one of the greatest challenges in water resources engineering; the lack
of measured precipitation may be a significant hurdle to hydrologic modeling. The complexity, accuracy, and
robustness of a hydrologic model are meaningless if the precipitation boundary condition is incorrect.
Precipitation can be measured at a point using some type of gage, or it can be measured spatially using a
tool such as radar.
Point Measurements from Gages
Each of the precipitation measuring devices described in the table below captures rainfall or snowfall in a
storage container that is open to the atmosphere. The depth of the collected water is then observed,
manually or automatically, and from those observations, the depth of precipitation at the location of the gage
is obtained.
Manual gage (also referred to as a non-recording, totalizer, or accumulator gage): This gage is read by a human observer. An example is shown below. Often such gages are read daily, so detailed information about the short-term temporal distribution of the rainfall is not available.
From the gaged data, one might estimate mean areal precipitation (MAP) as a weighted average of the
depths observed. The weights assigned might depend, for example, on how far the gage is from one or more
user-specified index points in the watershed. In this example, if an index point at the centroid of the
watershed is selected, then the weights will be approximately equal, so the MAP will equal the arithmetic
average of the depths observed at gages A and B.
The MAP estimated from the gage network in this manner is a good representation of rainfall on a watershed
if the raingage network is adequately dense in the vicinity of the storm. The gages near the storm must also
be in operation, and must not be subject to inadvertent inconsistencies (Curtis and Burnash, 1996).
The National Weather Service provides guidelines on the density of a raingage network. These suggest that
the minimum number of raingages, N, for a local flood warning network is:

N = A^0.33    (1)
in which A = area in square miles. However, even with this network of more than the minimum number of
gages, not all storms may be adequately measured. Precipitation gages such as those illustrated in the
figures previously are typically 8-12 inches (20-30 cm) in diameter. Thus, in a one sq-mi (2.6 km2) watershed,
the catch surface of the gage represents a sample of precipitation on approximately 1/100,000,000th of the
total watershed area. With this small sample size, isolated storms may not be measured well if the storm
cells are located over areas in which "holes" exist in the gage network or if the precipitation is not truly
uniform over the watershed.
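As a simple illustration of Equation 1 (assuming the N = A^0.33 form given above), the minimum gage count for a hypothetical watershed can be computed as follows:

import math

def minimum_gage_count(area_sq_mi):
    # NWS local flood warning network guideline: N = A ** 0.33 (Equation 1)
    return math.ceil(area_sq_mi ** 0.33)

print(minimum_gage_count(100.0))   # a 100 sq-mi watershed suggests about 5 gages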
The impact of these "holes" is illustrated by the figure below. Figure (a) shows the watershed from the figure
above, but with a storm superimposed. In this case, observations at gages A and B would not represent well
the rainfall because of the areal distribution of the rainfall field. The "true" MAP likely would exceed the MAP
computed as an average of the observations. In that case, the runoff would be under-predicted. Similarly, the
gage observations do not represent well the true rainfall in the case shown in Figure (b). There, the storm cell
is over gage A, but because of the location of the gage, it is not a good sampler of rainfall for this watershed.
One potential solution to the problem of holes in the rainfall observations is to increase the number of gages
in the network. But even as the number of gages is increased, one cannot be assured of measuring
adequately the rainfall for all storm events. Unless the distance between gages is less than the principal
dimension of a typical storm cell, the rainfall on a watershed may be improperly estimated. A second
solution is to use rainfall depth estimates from weather radar.
The WMO Guide to Hydrological Practices (1994) explains that
Radar permits the observation of the location and movement of areas of precipitation, and certain types of
radar equipment can yield estimates of rainfall rates over areas within range of the radar.
Weather radar data are available from National Weather Service (NWS) Weather Surveillance Radar Doppler
units (WSR-88D) throughout much of the United States. Each of these units provides coverage of a 230-km-
radius circular area. The WSR-88D radar transmits an S-band signal that is reflected when it encounters a
raindrop or another obstacle in the atmosphere. The power of the reflected signal, which is commonly
expressed in terms of reflectivity, is measured at the transmitter during 360° azimuthal scans, centered at
the radar unit. Over a 5- to 10-minute period, successive scans are made with 0.5° increments in elevation.
The reflectivity observations from these scans are integrated over time and space to yield estimates of
particle size and density in an atmospheric column over a particular location. Varying levels of analysis may
be performed to check and correct inconsistencies in the measured data. The final data products are
distributed in a variety of digital formats. Grid cells are typically on the order of 4 km by 4 km.
Frequency Storm
The objective of the frequency-based hypothetical storm is to define an event for which the precipitation
depths for various durations within the storm have a consistent exceedance probability. Nesting the various
precipitation depths leads to the notion of a "balanced" storm. For example, consider a synthetic storm with
0.1 annual exceedance probability (AEP). If the storm is 6 hours long, it will also contain the 3-hour 0.1 AEP
storm, and the 1-hour 0.1 AEP storm. Examination of actual historical gage records shows that the depths
for different durations within a single storm rarely have the same exceedance probability.
However, generating nested storms does produce consistent results that are valuable for design and
regulation purposes.
The estimated 10-minute and 30-minute depths are inserted into the depth-duration values entered by the
user to create an augmented depth-duration relationship.
The augmented depth-duration relationship is next adjusted for storm area. The values entered by the user
represent "point" values. Point values represent the precipitation characteristics observed at a point in the
watershed. Precipitation at a point (perhaps measured by a rain gage) can be very intense and change
rapidly over a short time, but high intensity cannot be sustained simultaneously over a large area. As the area
of consideration increases, average intensity decreases compared to the point of maximum intensity, much
like a circus tent. For example, a small thunderstorm may release an intense burst of rainfall over a small
area. However, the physical dynamics of thunderstorms do not allow for the intense rainfall to be
widespread. Further, if you were to consider a large area around the thunderstorm, the same precipitation
volume averaged over the large area would result in a much lower intensity. For a specified frequency and
storm duration, the average rainfall depth over an area is less than the depth at a point. To account for this,
the U.S. Weather Bureau (1958) used averages of annual series of point and areal values for several dense,
recording-raingage networks to develop reduction factors. The factors indicate how much point depths are
to be reduced to yield areal-average depths. The factors, expressed as a percentage of point depth, are a function of storm area and duration.
The final adjustment of the depth-duration relationship is optional and accounts for the type of input that is
used and the type of output desired: annual duration or partial duration. Virtually all precipitation analyses
that give depth for a specific duration use partial duration values. This means that the entire precipitation
record was scanned and all events greater than a threshold value were included in the statistical analysis. In
an analysis of this type, some years may contribute multiple events while other years provide none. However,
in some cases the input precipitation data may use annual duration instead. The user must select the type of
precipitation data that will be entered, and the type of output that is desired. When the input and output types
do not match, the reduction factors shown in the table below are used to convert the data as necessary. Note
that the conversion only applies to relatively frequent storms. As the annual exceedance probability
decreases, the difference between annual and partial duration statistics becomes negligible. In practice, this
conversion is rarely needed, as the input and output types are typically the same.
Reduction factors for converting partial-duration input to annual-duration output.
Annual exceedance probability    Conversion factor
0.50                             0.88
0.20                             0.96
0.10                             0.99
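As a small illustration of how the tabulated factors are applied (the depth value below is hypothetical):

# Convert a partial-duration depth to an annual-duration depth using the
# factors in the table above. The 0.50 AEP depth is a made-up value.
conversion_factor = {0.50: 0.88, 0.20: 0.96, 0.10: 0.99}
partial_duration_depth = 2.4                       # inches, 0.50 AEP
annual_duration_depth = partial_duration_depth * conversion_factor[0.50]
print(round(annual_duration_depth, 2))             # 2.11 inches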
Finally, the program uses the processed depth-duration relationship to generate a "nested" hyetograph. It
interpolates to find depths for durations that are integer multiples of the time interval selected for runoff
modeling. Linear interpolation is used, after taking logarithms of both the depth and duration data.
Performing the interpolation in log-log space improves the quality of intermediate estimates (Herschfield, 1961).
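The log-log interpolation can be sketched as follows; the depth-duration values are hypothetical and the routine is only an illustration of the approach, not the program's implementation:

import math

def interpolate_depth(durations_hr, depths_in, target_hr):
    # Linear interpolation of cumulative depth in log-log space.
    pts = sorted(zip(durations_hr, depths_in))
    for (d0, p0), (d1, p1) in zip(pts, pts[1:]):
        if d0 <= target_hr <= d1:
            f = (math.log(target_hr) - math.log(d0)) / (math.log(d1) - math.log(d0))
            return math.exp(math.log(p0) + f * (math.log(p1) - math.log(p0)))
    raise ValueError("target duration outside the supplied range")

# e.g., estimate a 3-hour depth from hypothetical 1-, 6-, and 24-hour depths
print(round(interpolate_depth([1, 6, 24], [1.8, 3.1, 4.5], 3), 2))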
Parameter Estimation
Storm Depth
In the United States, depths for various durations can be obtained from a variety of sources. Currently, the
best available product is NOAA Atlas 14, which provides precipitation-frequency estimates for most regions
of the United States, separated into individual volumes. These data can be accessed from the NOAA
Precipitation Frequency Data Server (https://hdsc.nws.noaa.gov/pfds/). In addition to NOAA Atlas 14,
several products are available for the entire country, including TP-40 (Herschfield, 1961) for durations from
30 minutes to 24 hours and TP-49 (Miller, 1964) for durations from 2 to 10 days. The Eastern part of the
country has extra data for short durations in HYDRO-35 (Fredrick, Myers, and Auciello, 1977). Some locations
have specialized data developed locally, for example the Midwest has available Bulletin 71 (Huff and Angel,
1992). More recent site- or project-specific regional precipitation-frequency studies are becoming more
common. These various reports are all similar in that they contain maps with isopluvial lines of constant
precipitation depth. Each map is labeled with an annual exceedance probability and storm duration. Knowing
the location of the watershed on the map, the depth for each required duration and exceedance probability
can be interpolated between the isopluvial lines.
Each of the maps included in the reference sources is typically developed independently of other maps. That
is, the map for the 0.01 probability of exceedance and 1-hour duration is developed separately from the 0.01
probability of exceedance and 6-hour duration. Because of this independence there can be inconsistencies in
the values estimated from the maps. If these raw data are input to the program, fluctuations can result in
the computed hyetograph. These fluctuations can be reduced by smoothing the data before entering it into
the program. The precipitation depth values for the range of durations, all with the same exceedance
probability, should be plotted. A smooth line should be fit through the data. A best-fit line can be used to smooth the values, and the smoothed depths can then be read from the line and entered into the program.
Basin-average depths for multiple durations can be extracted from frequency grids within HMS.
HMS supports the direct import of precipitation-frequency grids (such as NOAA Atlas 14 or
similar) for use in the Frequency Storm method. These grids must represent an isohyetal map,
which contains point rainfall depths for a specific recurrence interval and rainfall duration.
Storm Area
The storm area should be set equal to the drainage area at the evaluation location. The evaluation location
will be where the flow estimate is needed, for example at the inflow to a reservoir or at a particular river
station where a flood damage reduction measure is being designed. When there are several evaluation
locations in a watershed, separate storms must be prepared for each location. Failure to set the storm area
equal to the drainage area at the evaluation location leads to incorrect depth-area adjustments and either
over or underestimation of the flow for a particular exceedance probability.
Area reduction is an optional input to the Frequency Storm method, but in most cases should be used. The
only exceptions are for very small study areas (less than 30 square miles) or if reduction factors have already
been applied to the rainfall depths prior to input to HMS. Areal reduction factors apply a percentage to the
point value. Precipitation-frequency products like NOAA Atlas 14 provide rainfall depths for a point maximum
and must be adjusted to account for the reduction in intensity as storm area increases. In HMS, area
reduction may be applied using TP-40/49 curves or by user entry. TP-40 is used for durations up to 24 hours,
and TP-49 is used for durations longer than 24 hours. TP-40/49 should not be applied to watersheds (total
study areas) larger than 400 square miles, as the curves are asymptotic at large areas.
TP-40/49 was developed using a limited set of data and may not be appropriate for all study
locations. The preferred approach is to develop site- or region-specific areal reduction factors
(ARFs) based on historical storm data. Depth-area-duration tables can be extracted from storm
grids and used in the development of ARF estimates. ARFs estimated using historical data can
then be entered into HMS as User-Specified.
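A user-specified areal reduction might be applied along the following lines; the ARF table below is entirely hypothetical, and site-specific values should be developed from historical storm data as described above:

# Sketch of applying a user-specified areal reduction factor (ARF) to a
# point depth. The table of area (sq mi) to ARF pairs is hypothetical.
def reduce_point_depth(point_depth_in, storm_area_sq_mi, arf_table):
    areas = sorted(arf_table)
    if storm_area_sq_mi <= areas[0]:
        return point_depth_in * arf_table[areas[0]]
    for a0, a1 in zip(areas, areas[1:]):
        if a0 <= storm_area_sq_mi <= a1:
            f = (storm_area_sq_mi - a0) / (a1 - a0)
            arf = arf_table[a0] + f * (arf_table[a1] - arf_table[a0])
            return point_depth_in * arf
    return point_depth_in * arf_table[areas[-1]]

hypothetical_arf = {10: 0.99, 50: 0.96, 100: 0.94, 200: 0.91, 400: 0.88}
print(round(reduce_point_depth(6.2, 254.5, hypothetical_arf), 2))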
If the Frequency Storm method is used in a Depth-Area Analysis, the Storm Area setting must be set to
"User-Specified". The Depth-Area Analysis will calculate the storm area for each analysis point based on the
area upstream.
As stated earlier, virtually all precipitation maps are given in partial duration form, so input is assumed to be
of partial duration type. The selection of the output type as partial or annual duration depends on the
intended use of the computed results. Frequently the computed results are used for floodplain regulation or
the design of flood damage reduction measures. In these cases, the type of output is determined by the type
of damages that are expected. Some damages are generally assumed to happen only once in a year. For
example, the time required to rebuild residential housing usually means it can only be damaged once in a
year. If two large floods occurred in the same year, the housing would be flooded twice before it could be
rebuilt and no additional damage would occur. Annual duration output should be selected. Partial duration
output should be used if damages can happen more than once in the same year. This is often the case in
agricultural crops, where fields can be plowed and replanted after a flood only to be reflooded. Partial duration output should be selected in those cases.
3-day depth migrating from version 4.10 (or older) to 4.11 (or newer)
The 3-day depth is computed by taking the logarithms of the 2-day and 4-day depths (blue) before areal
reduction for the storm area and interpolating the 3-day depth (orange) as shown in the figure below. Raising
the base to the interpolated logarithm gives the 3-day depth value that is added to the component editor.
For example:
The 3-day depth was not directly computed in the software but could be estimated after the balanced
hyetograph is created. To estimate a 3-day depth, the 2-day and 4-day depths are first areally reduced. The
figure below shows just the log 2-day and 4-day reduced depths; however, this interpolation is performed
from the intensity duration out to the storm duration to build the storm hyetograph.
The interpolated log depths are used to compute incremental depth values. Exponentiating the interpolated
log depths and then subtracting each cumulative depth from the next gives the incremental depth values.
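The interpolation and differencing described above can be illustrated with hypothetical 2-day and 4-day reduced depths; this is only a sketch of the calculation, not program code:

import math

def log_interp_depth(d_hr, d0_hr, p0_in, d1_hr, p1_in):
    # Interpolate a cumulative depth at duration d_hr in log-log space.
    f = (math.log(d_hr) - math.log(d0_hr)) / (math.log(d1_hr) - math.log(d0_hr))
    return math.exp(math.log(p0_in) + f * (math.log(p1_in) - math.log(p0_in)))

# hypothetical areally reduced depths: 5.2 in at 2 days, 6.5 in at 4 days
depth_3day = log_interp_depth(72.0, 48.0, 5.2, 96.0, 6.5)
incremental_day3 = depth_3day - 5.2          # depth accumulated during day 3
print(round(depth_3day, 2), round(incremental_day3, 2))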
For example, for a 15 minute time interval:
To obtain the "un-reduced" 3-day depth (backing out of areal reduction), the areal reduction factor needs to
be computed. For TP-40 areal reduction, the equation is
The factor is different for each duration. The factor is set to 0.12 for the 3-day reduction curve.
For example:
• Reduction Factor (3-day for 254.52 square mile watershed, TP-40): 0.9413
Gage Weights
Many watersheds are large enough that they contain multiple precipitation gages, especially in urban areas.
An important question is immediately presented: How should the information at each gage be used to
compute MAP over the watershed? A common approach is to take a fraction of the precipitation that occurs
at each gage in order to compute MAP. The user-specified gage weights method provides a great deal of
flexibility for the user to specify fractions using a generalized weighting scheme.
where PMAP = total storm MAP over the subbasin; pi(t) = precipitation depth measured at time t at gage i; and
wi = weighting factor assigned to gage i. If gage i is not a recording device, the quantity is replaced
by the total storm depth entered by the user. Many techniques have been developed for computing the gage
weighting factors for a subbasin; some of them are described in the next section on estimating parameters.
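A weighted-average MAP hyetograph can be sketched as follows, assuming the MAP at each time step is the weighted average of the gage depths with the weights normalized to sum to one; the gage hyetographs and weights are hypothetical:

# Sketch of a gage-weighted MAP hyetograph (depths per time step).
def map_hyetograph(gage_hyetographs, weights):
    total_weight = sum(weights.values())
    n_steps = len(next(iter(gage_hyetographs.values())))
    return [
        sum(weights[g] * gage_hyetographs[g][t] for g in weights) / total_weight
        for t in range(n_steps)
    ]

gages = {"A": [0.0, 0.2, 0.5, 0.3], "B": [0.1, 0.3, 0.4, 0.1]}
weights = {"A": 0.6, "B": 0.4}
print(map_hyetograph(gages, weights))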
Problems can occur when interpolating precipitation over a large subbasin. The mean annual precipitation
is likely to vary across the subbasin as a result of regional meteorological trends. When the variation is
significant, the techniques presented previously for estimating depth factors must be modified. Consider the
case where a precipitation gage has a mean annual precipitation depth of 76 cm. A subbasin in a study may
be closer to that gage than to any other, but the subbasin may have an estimated annual precipitation of 88
cm. This suggests that, on average, if 1 cm of precipitation is measured at the gage, then slightly more
precipitation should be applied over the subbasin. The index precipitation can be used to correct for this
situation.
The index precipitation for the gage and subbasin are applied together, by adjusting the gage data before it is
used with the MAP factors to calculate the MAP for the subbasin. The precipitation gage data is then
computed as:
where PMAP = total storm MAP for the subbasin; pi(t) = precipitation depth measured at time t at gage i; Isub is
the index precipitation for the subbasin; Ii is the index precipitation for the gage. As before, if gage i is not a
recording device, the quantity is replaced by the total storm depth entered by the user. The index
precipitation for a gage is usually estimated as the mean annual precipitation computed from the historical
records at the gage. Another logical choice is to use the mean spring precipitation if the study goal is to
produce a watershed model that works well only in spring months. However, there is no rule that requires the
index precipitation to be the mean precipitation, whether for a year, a season, or a month. The index
precipitation can be used carefully to apply a user-selected ratio of the measured precipitation to each
subbasin.
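The index-precipitation adjustment amounts to scaling each gage's depths by the ratio of the subbasin index to the gage index before the weighted average is taken. A minimal sketch, with hypothetical index values and hyetograph:

def index_adjusted_map(gage_hyetographs, weights, gage_index, subbasin_index):
    # Scale each gage by (subbasin index / gage index), then take the
    # weighted average at every time step.
    total_weight = sum(weights.values())
    n_steps = len(next(iter(gage_hyetographs.values())))
    return [
        sum(weights[g] * (subbasin_index / gage_index[g]) * gage_hyetographs[g][t]
            for g in weights) / total_weight
        for t in range(n_steps)
    ]

gages = {"A": [0.0, 0.2, 0.5, 0.3]}                      # cm per interval
print(index_adjusted_map(gages, {"A": 1.0}, {"A": 76.0}, 88.0))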
While the mean annual precipitation can be estimated for a gage using the historical record, it can be difficult
to estimate for a subbasin. Typically regional information on precipitation patterns must be used. One
example of generally available data for estimating the annual precipitation for a subbasin is the PRISM data
set (Daly, Neilson, and Phillips, 1994).
The time-series data recorded at each gage implicitly includes both volume and timing of the precipitation. In
some cases it may be desirable to change the volume for a gage without changing the timing. This may be
necessary if high winds during the storm cause the gage to under-catch precipitation and consequently
underestimate the actual precipitation. Specifying the total storm depth for a recording gage is always
optional. When a total storm depth is included, the precipitation gage data is then computed as:
where PMAP = total storm MAP for the subbasin; pi(t) = precipitation depth measured at time t at gage i; Isub is
the index precipitation for the subbasin; Ii is the index precipitation for the gage, Dmeasure is the total depth
where PMAP = total storm MAP for the subbasin; pi(t) = precipitation depth measured at time t at gage i; wi is
the temporal weight for gage i.
Parameter Estimation
This method requires a MAP weighting factor for each gage that will be used to compute a hyetograph for a
subbasin. A separate temporal weighting factor is also required. The weights are determined and entered by
the user; the program is not able to automatically estimate the weighting factors. The use of the index
precipitation is optional. The following methods could be considered for estimating the weighting factors.
This method assigns a weight to each gage equal to the reciprocal of the total number of gages used for the
MAP computation. Gages in or adjacent to the watershed can be selected.
This is an area-based weighting scheme, predicated on an assumption that the precipitation depth at any
point within a watershed is best estimated as the precipitation depth at the nearest gage to that point. Thus,
it assigns a weight to each gage in proportion to the area of the watershed that is closest to each gage.
As illustrated in the figure (a) below, the gage nearest each point in the watershed may be found graphically
by connecting the gages, and constructing perpendicular bisecting lines; these form the boundaries of
polygons surrounding each gage. The area within each polygon is nearest the enclosed gage, so the weight
assigned to the gage is the fraction of the total area that the polygon represents.
Details and examples of the procedure are presented in Chow, Maidment, and Mays (1988), Linsley, Kohler,
and Paulhus (1982), and most hydrology textbooks.
This is also an area-based weighting scheme. Contour lines of equal precipitation are estimated from the
point measurements, as illustrated by the figure (b) below. This allows a user to exercise judgment and
knowledge of a basin while constructing the contour map. MAP is estimated by finding the average
precipitation depth between each pair of contours (rather than precipitation at individual gages), and
weighting these depths by the fraction of total area enclosed by the pair of contours. Again, details and examples of the procedure are presented in most hydrology textbooks.
If a single recording gage is used to establish the temporal pattern of the hyetograph, the resulting MAP
hyetograph will have the same relative distribution as the one recording gage. For example, if the gage
recorded 10% of the total precipitation in 30 minutes, the MAP hyetograph will have 10% of the MAP in the
same 30-minute period.
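Distributing the MAP total according to the relative pattern of a single recording gage can be illustrated as follows (the values are hypothetical):

# Distribute a total MAP depth using one recording gage's temporal pattern.
gage_hyetograph = [2.0, 6.0, 8.0, 4.0]          # mm per interval at the gage
map_total = 25.0                                # mm, subbasin MAP for the storm

pattern = [p / sum(gage_hyetograph) for p in gage_hyetograph]
map_hyetograph = [map_total * f for f in pattern]
print(map_hyetograph)                           # preserves the gage's timing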
On the other hand, if two or more gages are used, the pattern will be a weighted average of the pattern
observed at those gages. Consequently, if the temporal distribution at those gages is significantly different,
as it might be with a moving storm, the average pattern may obscure information about the precipitation on
the subbasin. This is illustrated by the temporal distributions shown in the figure below. Here, hyetographs of
rainfall at two gages are shown. At gage A, rain fell at a uniform rate of 10 mm/hr from 00:00 hours until
02:00 hours. No rain was measured at gage A after 02:00. At gage B, no rain was observed until 02:00, and
then rainfall at a uniform rate of 10 mm/hr was observed until 04:00. The likely pattern is that the storm
moved across the subbasin from gage A toward gage B. If these gage data are used with Equations 4 and 5
to compute an average pattern, weighting each gage equally, the result is a uniform rate of 5 mm/hr from
00:00 until 04:00. This may fail to appropriately represent the temporal distribution and intensity of the
storm. A better scheme might be to select one of the gages as a pattern for the watershed average.
The radar reflectivity is related to rainfall intensity through a power-law relationship of the form Z = aR^b, in which Z is the reflectivity factor; R is the rainfall intensity; and a and b are empirical coefficients. Thus, as a
product of the weather radar, rainfall for cells of a grid that is centered about a radar unit can be estimated.
This estimate is the MAP for that cell and does not necessarily suggest the rain depth at any particular point
in the cell.
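A typical conversion from reflectivity to rainfall rate can be sketched as follows, assuming the commonly quoted coefficients a = 300 and b = 1.4, which are not necessarily those used by any particular radar product:

import math

def rain_rate_mm_per_hr(reflectivity_dbz, a=300.0, b=1.4):
    # Invert Z = a * R**b after converting dBZ to linear reflectivity.
    z = 10.0 ** (reflectivity_dbz / 10.0)
    return (z / a) ** (1.0 / b)

print(round(rain_rate_mm_per_hr(40.0), 1))      # roughly 12 mm/hr at 40 dBZ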
The National Weather Service, Department of Defense, and Department of Transportation (Federal Aviation
Administration) cooperatively operate the WSR-88D network. They collect and disseminate the weather radar
data to federal government users. The NEXRAD Information Dissemination Service (NIDS) was established
to provide access to the weather radar data for users outside of the federal government. Each WSR-88D unit
that is designated to support the NIDS program has four ports to which selected vendors may connect. The
NIDS vendors, in turn, disseminate the data to their clients using their own facilities, charging the clients for
the products provided and for any value added. For example, one NIDS vendor in 1998 was distributing a 1-
km x 1-km mosaic of data. This mosaic is a combined image of reflectivity data from several radar units with
overlapping or contiguous scans. Combining images in this manner increases the chance of identifying and
eliminating anomalies. It also provides a better view of storms over large basins.
The following figure illustrates the advantages of acquiring weather radar data. Figure (a) shows the
watershed with a grid system superimposed. Data from a radar unit will provide an estimate of rainfall in
each cell of the grid. Commonly these radar-rainfall estimates are presented in graphical format, as
illustrated in Figure (b), with color codes for various intensity ranges. (This is similar to the images seen on
television weather reports.)
With estimates of rainfall in grid cells, a "big picture" of the rainfall field over a watershed is presented. With
this, better estimates of the MAP at any time are possible due to knowledge of the extent of the storm cells,
the areas of more intense rainfall, and the areas of no rainfall. By using successive sweeps of the radar, a
time series of average rainfall depths for cells that represent each watershed can be developed.
HMR 52 Storm
The HMR 52 Storm method generates a probable maximum precipitation (PMP) hypothetical storm as
detailed in Hydrometeorological Report No. 52 (HMR 52) (Hansen, Schreiner, and Miller, 1982).
Hydrometeorological Report No. 51 (HMR 51) contains the PMP index maps for the Eastern U.S., and HMR
52 contains information about the application of the PMP depths to a watershed. HMR 51 and HMR 52 apply
to those areas of the United States east of the 105th meridian, with some exclusions, as shown in the figure
below.
The hydrometeorological reports can be accessed from the National Weather Service (NWS)
Hydrometeorological Design Studies Center (HDSC):
https://www.weather.gov/owp/hdsc_pmp
Regions covered by different NWS PMP documents (as of 2015) (NWS HDSC)
HMR 52 describes a procedure for developing temporal and spatial storm patterns for a 72-hour PMP
estimate provided by HMR 51. The HMR 52 method computes a storm area and represents it as elliptical
rings of decreasing rainfall intensity. These rings are referenced with a storm center (X and Y coordinates).
Standard isohyetal pattern recommended for spatial distribution of PMP (HMR 52)
The X and Y coordinates specify the location of the storm center, and are entered using the same coordinate
system as the geometric data for the subbasin polygons. An initial estimate of the basin centroid provides a
good starting point. The orientation is measured in degrees increasing clockwise from north (HMR 52
Section 4). If the actual orientation deviates from the preferred orientation by more than 40 degrees, a
reduction is applied. According to HMR 52, the storm orientation should be bounded by 135 and 315
degrees. The peak intensity parameter specifies the time at which the precipitation intensity will be greatest
within the 72-hour storm period (HMR 52 Section 2.3). The depth of rain falling during the period of peak
intensity is subdivided into 1-hour increments using the 1 to 6 Ratio parameter (HMR 52 Section 6.5). Finally,
the total storm area must be specified. The storm area represents the area of maximum intensity and
produces the largest runoff.
The basin model must be geo-referenced to use the HMR 52 storm method.
Hypothetical Storm
The Hypothetical Storm method is a flexible and generalized modeling method for modeling idealized
storms. The hypothetical storm is often used for design or risk analysis or may represent simplified versions
of real storms with desirable properties. The user has full control over the storm temporal pattern and spatial distribution.
The Hypothetical Storm method subsumes the SCS Storm method, which was available in HEC-
HMS version 4.2.1 and prior versions.
User-Specified Pattern
This method is the most generalized, and is the most common and recommended hypothetical storm
method due to its flexibility. This method creates a synthetic storm based on a single storm duration and
recurrence interval (i.e. frequency of exceedance).
Parameter Estimation
The parameters that must be defined are the rainfall depth(s), temporal pattern, storm duration, storm area,
point-to-area reduction, and spatial distribution. The spatial distribution may be uniform for all subbasins in
the basin model or may be variable by each subbasin.
Depth
The rainfall depth for the storm may be entered as a point depth or by reading from a precipitation-frequency
grid. These represent cumulative depths for a specified frequency and rainfall duration, such as the 0.01
annual exceedance probability (AEP) 72-hour storm depth. In both options, the rainfall depth typically
represents a point estimate. In real storms, rainfall intensity decreases with increasing storm area.
In the United States, depths for various durations can be obtained from a variety of sources. Currently, the
best available product is NOAA Atlas 14, which provides precipitation-frequency estimates for most regions
of the United States, separated into individual volumes. These data can be accessed from the NOAA
Precipitation Frequency Data Server (https://hdsc.nws.noaa.gov/pfds/). In addition to NOAA Atlas 14,
several products are available for the entire country, including TP-40 (Hershfield, 1961) for durations from
30 minutes to 24 hours and TP-49 (Miller, 1964) for durations from 2 to 10 days. The eastern part of the
country has additional data for short durations in HYDRO-35 (Fredrick, Myers, and Auciello, 1977). Some
locations have specialized data developed locally; for example, Bulletin 71 (Huff and Angel, 1992) is available
for the Midwest. More recent site- or project-specific regional precipitation-frequency studies are becoming more
common. These various reports are all similar in that they contain maps with isopluvial lines of constant
precipitation depth. Each map is labeled with an annual exceedance probability and storm duration. Knowing
the location of the watershed on the map, the depth for each required duration and exceedance probability
can be interpolated between the isopluvial lines.
Areal Reduction
To account for the decrease in rainfall intensity over a larger area, a point-to-area conversion, also called an
areal reduction factor (ARF), must be applied, except for very small watershed areas. The underlying reason
for areal reduction is that precipitation averaged over an area is less than a point maximum. This is
especially important for larger watersheds.
The storm area should be set equal to the drainage area at the evaluation location. The evaluation location
will be where the flow estimate is needed, for example at the inflow to a reservoir or at a particular river
station where a flood damage reduction measure is being designed. When there are several evaluation
locations in a watershed, separate storms must be prepared for each location. Failure to set the storm area equal to the drainage area at the evaluation location can result in an inappropriate areal reduction of the storm depth.
TP-40/49 was developed using a limited set of data and may not be appropriate for all study
locations. The preferred approach is to develop site- or region-specific areal reduction factors
(ARFs) based on historical storm data. Depth-area-duration tables can be extracted from storm
grids and used in the development of ARF estimates. ARFs estimated using historical data can
then be entered to HMS as User-Specified.
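As a simple illustration of applying a user-specified areal reduction factor, the sketch below interpolates an ARF from a table of storm area versus factor and applies it to a point depth. The table values are hypothetical placeholders, not factors from TP-40/49 or any regional study.

```python
import numpy as np

# Hypothetical user-specified ARF table: storm area (mi^2) vs. reduction factor.
areas_mi2   = np.array([10.0, 50.0, 100.0, 200.0, 400.0])
arf_factors = np.array([0.99, 0.95, 0.92, 0.88, 0.83])

def areal_depth(point_depth_in, storm_area_mi2):
    """Reduce a point rainfall depth to an areal-average depth by
    interpolating an ARF for the storm area from the table above."""
    arf = np.interp(storm_area_mi2, areas_mi2, arf_factors)
    return point_depth_in * arf

# Example: an 8-inch point depth reduced for a 150 mi^2 storm area.
print(round(areal_depth(8.0, 150.0), 2))
```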
Temporal Pattern
The storm pattern is a dimensionless representation of how the cumulative rainfall depth varies in time. The
units of each axis of the pattern are in percent, from 0% to 100%, with the point (100,100) representing the
total storm duration and the total accumulated rainfall depth.
The temporal pattern can be derived from historical storms by accumulating the rainfall depth over each
increment of time and standardizing the two axes. Temporal patterns are also available from NOAA Atlas 14
at the Precipitation Frequency Data Server as supplementary information. These patterns are based on
probabilistic analyses of time patterns of heavy rainfall across the U.S. The patterns are classified by
"quartile", which defines in which quarter the most volume of rainfall occurs, and by "percentile" which is a
ranking of the peakedness of storms.
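A dimensionless pattern can be built from an observed storm by accumulating the incremental depths and expressing both time and depth as percentages of their totals, as described above. The sketch below shows one way to do this; the sample hyetograph values are hypothetical.

```python
import numpy as np

def dimensionless_pattern(times_hr, incremental_depth):
    """Convert an observed hyetograph to a dimensionless cumulative pattern:
    percent of storm duration versus percent of total storm depth."""
    cum_depth = np.cumsum(incremental_depth)
    pct_time = 100.0 * np.asarray(times_hr, dtype=float) / times_hr[-1]
    pct_depth = 100.0 * cum_depth / cum_depth[-1]
    return pct_time, pct_depth

# Hypothetical 6-hour storm with incremental depths in inches.
t = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
p = [0.1, 0.3, 0.9, 0.5, 0.2, 0.1]
print(dimensionless_pattern(t, p))  # both curves end at (100, 100)
```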
SCS Patterns
The SCS methods are based on hypothetical storms developed by the Soil Conservation Service (SCS), now
known as the Natural Resources Conservation Service (NRCS). The SCS storms are legacy products that
were used for drainage planning in the United States. These storms were developed by the SCS as averages
of rainfall patterns; they are represented in a dimensionless form. There are four patterns: Type I, Type IA,
Type II, and Type III.
The SCS designed the storms for small drainage areas of the type for which they usually provide assistance.
The intended use is for estimating both peak flow rate and runoff volume from precipitation of a "critical"
duration. Storm-producing mechanisms vary across the United States, so four different storm patterns were
developed. The patterns and associated regions are shown in the figure below; the actual data values can be
found separately (USDA, 1992). The Type I and Ia storms represent Pacific climates with generally wet
winters and dry summers; these are used on the Pacific coast from Washington to California, plus Alaska
and Hawaii. The Type III storm represents areas bordering the Gulf of Mexico and Atlantic seaboard where
tropical storms and hurricanes generate heavy runoff. The Type II storm is used in the remainder of the
United States. Storm types have not been defined for other locations in the world; a storm type may be
selected based on similar weather patterns and comparisons of cumulative precipitation for typical storms.
Approximate geographic boundaries for SCS storm distributions. Reproduction of Figure B-2 in TR-55.
Parameter Estimation
The storm type should be selected based on the location of the watershed after consulting the figure above.
The boundaries are approximate, so engineering judgement may be used to select a storm type on the basis
of the meteorologic patterns. The precipitation depth to be applied to the pattern can be selected from any of
the sources discussed in the Frequency Storm section. Because the SCS storm method does not account for
depth-area reduction or annual-partial duration conversion, the user must make these adjustments manually
before entering a depth value.
Interpolated Precipitation
in which wC = weight assigned to gage C; dC = distance from node to gage C; dD = distance from node to
gage D in southeastern quadrant; dE = distance from node to gage E in southwestern quadrant; and dF =
distance from node to gage F in northwestern quadrant of grid. Weights for gages D, E, and F are computed
similarly. The distance between each gage and the node is computed using a curved earth assumption as:
where d is the distance between a gage and the node; rad is the radius of the earth at 6370 km; A is one-half
pi minus latitude of the gage in radians, B is one-half pi minus the latitude of the node in radians, and C is the
longitude of the node in radians minus the longitude of the gage.
With the weights thus computed, the node hyetograph ordinate at time t is computed as:
Where Inode is the index at the node, IA is the index at gage A, and IC, ID, and IE are the index at each of the
remaining gages.
This example has used only one node in the subbasin. However, it is possible to include more than one node.
In this case a weight must be specified for each node as the final precipitation hyetograph for the subbasin is
computed as:
Where p(t) is the precipitation at each time t for the subbasin; wi is the weight for node i; pi(t) is the
precipitation at each time t for node i as computed by equation 10. Node weights must sum to 1; if the
entered weights do not sum to 1, the software automatically normalizes them so that their collective sum
equals 1.
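The sketch below illustrates the node computation described above: distances from the node to the closest gage in each quadrant are computed with the curved-earth assumption (spherical law of cosines and a 6370 km earth radius), and gage weights are taken as inverse-distance-squared. The squared exponent is an assumption here, since the weighting equations themselves are not reproduced above, and the gage coordinates are hypothetical.

```python
import math

EARTH_RADIUS_KM = 6370.0

def gage_distance_km(lat_gage, lon_gage, lat_node, lon_node):
    """Curved-earth distance between a gage and a node using colatitudes
    A and B and the longitude difference C, as described above."""
    a = math.pi / 2.0 - math.radians(lat_gage)   # colatitude of gage
    b = math.pi / 2.0 - math.radians(lat_node)   # colatitude of node
    c = math.radians(lon_node - lon_gage)        # longitude difference
    central_angle = math.acos(math.cos(a) * math.cos(b) +
                              math.sin(a) * math.sin(b) * math.cos(c))
    return EARTH_RADIUS_KM * central_angle

def node_weights(distances_km):
    """Inverse-distance-squared weights for the closest gage in each quadrant,
    normalized so the weights sum to 1 (the exponent is an assumption)."""
    inverse = [1.0 / d ** 2 for d in distances_km]
    total = sum(inverse)
    return [w / total for w in inverse]

# Hypothetical node at (39.00, -94.50) and one gage per quadrant.
node = (39.00, -94.50)
gages = [(39.10, -94.40), (38.95, -94.40), (38.90, -94.65), (39.05, -94.75)]
d = [gage_distance_km(lat, lon, node[0], node[1]) for lat, lon in gages]
print([round(w, 3) for w in node_weights(d)])
```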
Most recording gages that are used with this method report data at a 1-hour or shorter interval. However,
there are many gages available that only report once a day, giving the total daily precipitation. These gages
can also be used for calculating the precipitation at each node. Each "daily" gage is preprocessed before
beginning the calculations for a node. The processed daily gage is used exactly as if it were a recording gage
when dynamically computing the precipitation for each node.
The preprocessing for a daily gage utilizes recording gages near the daily gage to compute a pattern
hyetograph and then applies the precipitation recorded at the daily gage. The processing is similar to what
happens at a node. Hypothetical north-south and east-west axes are constructed through the coordinates of
the daily gage. The adjacent recording gages are sorted into each of the four quadrants surrounding the daily
gage. The closest gage is selected in each quadrant; this process is performed once and not for each time
interval as is done when processing for a node. For each time step, pattern precipitation is computed at the
daily gage using equations 7, 8, and 10 with the substitution of the daily gage coordinates for the node coordinates.
Parameter Estimation
This method requires at least one node for each subbasin. The parameters then are the latitude and
longitude of the node and any gages that will be used. The coordinates of each gage are generally known or
can be found by examining a map. The use of the index precipitation is optional.
Selecting Node Locations
A common practice is to specify a single node for each subbasin, located at the centroid of the subbasin.
This can be a quick way to initially estimate parameters because it is relatively easy to compute the
coordinates of the subbasin centroid using a geographic information system. This is less arbitrary than it first
seems. By definition, the centroid is closer to more of the subbasin than any other point. Using this
placement assumes that centering in the subbasin is the best representation of the subbasin-average
precipitation.
An alternate method of placing the node would be to examine the precipitation trends over the subbasin.
Compute the average annual subbasin precipitation by consulting regional maps. One source for such maps
is the PRISM project (Daly, Neilson, and Phillips, 1994) which includes both total annual and monthly
estimates of precipitation. Once the average annual precipitation amount for the subbasin is known, a node
could be located at a point in the subbasin with the same average annual precipitation depth. Ideally, the
selected point would be near the centroid. This placement attempts to keep individual storm events that will
be simulated consistent with the average precipitation trends in the subbasin.
Regardless of the method used to locate the node, placement should consider the surrounding gages.
Sometimes the location initially selected for the node will effectively ignore a gage that is relatively close, as
shown in Figure 16. Because of the node placement, all four gages fall into only three quadrants. Most of the
time only the three closest gages will be used, and gage B will never be used unless there is missing data at
gage A. By moving the node slightly east, each gage will fall into a separate quadrant and be used.
2)
where I(d, t) [in/hr or mm/hr] is the instantaneous intensity at time t within day d, Pdaily(d) [in or mm] is the
daily total precipitation on day d, and ktri(t) (hours) is the kernel function (unit hyetograph) of the isosceles
triangle. The kernel function is defined as:
3)
where tpk is the storm time to peak [hours] and D is the storm duration [hours], which is allowed to range
from 0.5 * Δt (where Δt is the computational time step) to 24 hours. A non-monotonic linear interpolation
function, implemented using the Apache Commons Mathematics Library3, is used to construct this kernel
function. Once a kernel function has been constructed, a kernel value for the current computational interval,
k(t), is extracted. This kernel value is then normalized as:
4)
The normalized sum of all kernel values, k_normalized,sum, within a day for the kernel function is then computed.
Finally, the sub-daily precipitation depth, Psub-daily, is computed as:
5)
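The sketch below illustrates the triangular-kernel idea described above: kernel ordinates from an isosceles triangle centered on the storm time to peak are normalized within the day and then scaled by the daily total, so the sub-daily depths sum back to the daily precipitation. This is only a conceptual sketch, not the HEC-HMS or Bohn et al. (2019) implementation, since equations 2 through 5 are not reproduced here.

```python
import numpy as np

def triangular_kernel(t_hr, t_peak_hr, duration_hr):
    """Isosceles-triangle unit hyetograph: zero at the start and end of the
    storm window and peaking at the storm time to peak."""
    start = t_peak_hr - duration_hr / 2.0
    end = t_peak_hr + duration_hr / 2.0
    return np.interp(t_hr, [start, t_peak_hr, end], [0.0, 1.0, 0.0],
                     left=0.0, right=0.0)

def disaggregate_daily(p_daily, t_peak_hr=15.0, duration_hr=6.0, dt_hr=1.0):
    """Distribute a daily precipitation total over sub-daily intervals in
    proportion to the normalized kernel ordinates."""
    t = np.arange(0.0, 24.0, dt_hr) + dt_hr / 2.0    # interval midpoints
    k = triangular_kernel(t, t_peak_hr, duration_hr)
    k_norm = k / k.sum() if k.sum() > 0.0 else np.full_like(k, 1.0 / k.size)
    return p_daily * k_norm

hourly = disaggregate_daily(25.0)          # 25 mm daily total
print(hourly.round(2), hourly.sum())       # sub-daily depths sum to 25 mm
```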
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include a daily precipitation gridset,
a Temporal Disaggregation method, a Storm Characteristics method, a Storm Duration [minutes], and
a Storm Time to Peak [minutes]. An optional Time Shift method can be used to adjust the gridded
precipitation data in time.
2 https://doi.org/10.1175/JHM-D-18-0203.1
3 https://commons.apache.org/proper/commons-math/
The Temporal Disaggregation method dictates how precipitation will be disaggregated in time. Currently,
only the Bohn et al. 2019 option is available for the Temporal Disaggregation method. Additional options will
be added in the future.
There are currently three options for the Storm Characteristics method: Fixed Value, Annual Pattern,
and Grid. When using the Annual Pattern method, Parameter Value Patterns must be used to specify both
the Storm Duration and Storm Time to Peak. When using the Grid method, Storm Duration and Storm Time to
Peak gridsets must be used to specify both parameters.
The Storm Duration defines the duration of precipitation in each day (i.e., base width of the isosceles triangle
shaped temporal distribution). This parameter can vary between 1 and 1440 minutes.
The Storm Time to Peak defines the peak of the precipitation intensity in each day (i.e., peak of the isosceles
triangle shaped temporal distribution). This parameter can vary between 0 and 1439 minutes.
Both the Storm Duration and Storm Time to Peak should be entered using the same time zone as the
simulation (e.g., UTC).
Specified Hyetograph
The program includes a rich variety of methods for processing raw precipitation data into a hyetograph for
each subbasin. However, there are so many methods for processing precipitation data that it is not feasible
to include all of them within the program. This method allows you to process your own data and provide a
hyetograph for the program to use. You may also choose to subdivide a watershed into many subbasins so
that it becomes reasonable to use only one precipitation gage for each subbasin. In either case, the program
makes no assumptions about the source of the precipitation data; it only applies the hyetograph to each
subbasin as specified by the user. The hyetograph is entered in the program as if it were gage data.
While all of the major processing of the raw data must be performed external to the program, some
"convenience" processing is done. The hyetograph entered by the user will be at some interval, for example,
15 minutes. However, you may wish to use control specifications with a time step different from the original
data. Instead of re-entering the data manually, the program will automatically interpolate the data to the
requested time step. While the data may be entered as incremental or cumulative, it will be converted to
cumulative before interpolation is performed. The interpolation process does not affect the original data; it is
performed "on-the-fly" during a compute so that the data agrees with the time step.
The hyetograph entered by the user inherently includes both volume and timing information. In most cases
the data should be used exactly as entered, with the possibility of interpolation described above. In some
cases it is convenient to prepare a hyetograph that should be treated as a pattern. In this case, the volume
implicit in the hyetograph is not as important as the timing pattern it represents. You may optionally enter a
total storm depth when selecting a hyetograph for each subbasin. This can be useful when examining the
impact of the same storm occurring with a different precipitation depth.
4 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/working-with-gridded-boundary-condition-data/using-the-new-
metsim-precipitation-and-temperature-methods
where SPFE is the standard-project-flood index-precipitation depth in inches; and R24HR(i) is the percent of
the index precipitation occurring during 24-hour period i. R24HR(i) is given by:
where TRSDA = storm area, in square miles. Each 24-hour period is divided into four 6-hour periods. The ratio
of the 24-hour precipitation occurring during each 6-hour period is calculated as:
where R6HR(i) is the ratio of 24-hour precipitation occurring during 6-hour period i. The program computes the
precipitation for each time interval in the 6-hour interval of the 24-hour period (except the peak 6-
hour period) with:
where ∆t is the computation time interval, in hours. The peak 6-hour precipitation of each day is distributed
according to the percentages in Table 12. When using a computation time interval less than one hour, the
peak 1-hour precipitation is distributed according to the percentages in Table 13. (The selected time interval
must divide evenly into one hour.) When the time interval is larger than shown in Table 12 or Table 13, the
percentage for the peak time interval is the sum of the highest percentages. For example, for a 2-hour time
interval, the values are (14 + 12)%, (38 + 15)%, and (11 + 10)%. The interval with the largest percentage is
preceded by the second largest and followed by the third largest. The second largest percentage is preceded
by the fourth largest, the third largest percentage is followed by the fifth largest, and so on.
Table 12. Distribution of the peak 6-hour precipitation, in percent of the 6-hour depth
Hour    SPS (EM 1110-2-1411) percent    PMP percent
1       10                              4
2       12                              8
3       15                              19
4       38                              50
5       14                              11
6       11                              8

Table 13. Distribution of the peak 1-hour precipitation, in percent of the 1-hour depth
Time (min)    Incremental percent    Cumulative percent
5             3                      3
10            4                      7
15            5                      12
20            6                      18
25            9                      27
30            17                     44
35            25                     69
40            11                     80
45            8                      88
50            5                      93
55            4                      97
60            3                      100
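The arrangement rule described above (largest percentage at the peak, second largest immediately before it, third largest immediately after it, and so on) can be expressed compactly. The sketch below applies it to the combined 2-hour percentages from the example; it is only an illustration of the stated rule.

```python
def arrange_peak_percentages(percentages):
    """Order percentages so the largest is central, the second largest
    precedes it, the third largest follows it, the fourth largest precedes
    the second largest, and so on."""
    ranked = sorted(percentages, reverse=True)
    before, after = [], []
    for rank, value in enumerate(ranked[1:], start=2):
        if rank % 2 == 0:
            before.insert(0, value)   # 2nd, 4th, ... placed before the peak
        else:
            after.append(value)       # 3rd, 5th, ... placed after the peak
    return before + [ranked[0]] + after

# Combined 2-hour percentages from the example: 53%, 26%, and 21%.
print(arrange_peak_percentages([53, 26, 21]))   # -> [26, 53, 21]
```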
Parameter Estimation
A storm area must be selected in order for the distribution to be developed. In general, the area should match
the drainage area for the watershed that drains to the location where the flood protection project will be
constructed. The area may be slightly larger than the drainage area at the actual proposed construction site.
The SPS index precipitation value is taken from Plate 2 in EM 1110-2-1411, as shown in Figure 20. The
lowest isohyet line has a value of 9 inches and passes through central Minnesota, Northern Michigan, New
York, and Maine. A high isohyet line with a value of 19 inches follows the Texas-Louisiana gulf coast and
crosses to Florida. Select the best index precipitation value based on the location of the flood protection
project.
Each subbasin must have a so-called transposition factor. The factors are selected by overlaying the SPS
isohyetal pattern over the complete project watershed. An area-weighted average should be used to
determine the factor for each subbasin. The isohyetal pattern is taken from Plate 12 in EM 1110-2-1411, as
shown in Figure 21.
Precipitation References
Bohn, Theodore J., Kristen M. Whitney, Giuseppe Mascaro, and Enrique R. Vivoni (2019) A Deterministic
Approach for Approximating the Diurnal Cycle of Precipitation for Use in Large-Scale Hydrological Modeling.
Journal of Hydrometeorology, Volume 20: Issue 2. https://doi.org/10.1175/JHM-D-18-0203.1
Bonnin, G.M., D. Martin, B. Lin, T. Parzybok, M. Yekta, and D. Riley (2004) Precipitation-Frequency Atlas of the
United States. National Weather Service, Silver Spring, MD.
Chow, V.T., D.R. Maidment, and L.W. Mays (1988) Applied hydrology. McGraw-Hill, New York, NY.
Curtis, D.C. and R.J.C. Burnash (1996) "Inadvertent rain gauge inconsistencies and their effects on hydrologic
analysis." California-Nevada ALERT users group conference, Ventura, CA.
Daly, C., R.P. Neilson, and D.L. Phillips (1994) "A statistical-topographic model for mapping climatological
precipitation over mountainous terrain." Journal of Applied Meteorology, vol 33 pp 140-158.
Ely, P.B. and J.C. Peters (1984) "Probable maximum flood estimation – eastern United States." Water
Resources Bulletin, American Water Resources Association, 20(3).
Fredrick, R.H., V.A. Myers, and E.P. Auciello (1977) Five- to 60-minute precipitation frequency for the eastern
and central United States, Hydro-35. National Weather Service, Silver Spring, MD.
Hansen, E.M., Schreiner, L.C., and Miller, J.F. (1982) Application of Probable Maximum Precipitation
Estimates - United States East of the 105th Meridian, NOAA Hydrometeorological Report No. 52. Weather
Bureau, US Dept. of Commerce, Washington, D.C.
Hershfield, D.M. (1961) Rainfall frequency atlas of the United States for durations from 30 minutes to 24
hours and return periods from 1 to 100 years, TP 40. Weather Bureau, US Dept. of Commerce, Washington,
DC.
Huff, F.A., and J.R. Angel (1992) Rainfall frequency atlas of the Midwest, Bulletin 71. Illinois State Water
Survey, Champaign, IL.
Interagency Advisory Committee on Water Data (1982). Guidelines for determining flood flow frequency,
Bulletin 17B. USGS, Reston, VA.
Levy, B., and R. McCuen (1999) "Assessment of storm duration for hydrologic design." ASCE Journal of
Hydrologic Engineering, 4(3) 209-213.
Temperature
Five temperature methods are available for use within HEC-HMS. These methods include:
• Gridded Temperature
• Interpolated Temperature
• Specified Thermograph
Required Parameters
Shortwave Radiation
Shortwave Radiation is radiant energy produced by the sun with wavelengths ranging from infrared through
visible to ultraviolet. Shortwave radiation is therefore exclusively associated with daylight hours for a
particular location on the Earth's surface. The energy arrives at the top of the Earth's atmosphere with
a flux (Watts per square meter) that varies very little during the year and between years. Consequently, the
flux is usually taken as a constant for hydrologic simulation purposes. Some of the incoming radiation is
reflected by the top of the atmosphere and some is reflected by clouds. A portion of the incoming radiation is
absorbed by the atmosphere and some is absorbed by clouds. The Albedo is the fraction of the shortwave
radiation arriving at the land surface that is reflected back into the atmosphere. The shortwave radiation that
is not reflected or absorbed above the land surface, and is not reflected by the land surface, is available to
drive hydrologic processes such as evapotranspiration and snowpack melting.
Extraterrestrial Radiation
The shortwave solar radiation at the top of the earth's atmosphere is called extraterrestrial radiation.
Extraterrestrial Radiation is a theoretical amount of solar radiation that would be available at the Earth's
surface in the absence of the attenuation by the atmosphere. Variation in extraterrestrial radiation is caused
by two sources, the first being the variation of radiation emitted by the sun and the second being the
distance from the sun to the top of Earth's atmosphere. Earth is closest to the sun on January 3rd and
farthest from the sun on July 4th. The figure below shows that extraterrestrial radiation varies between 1400
and 1330 W/m² based on the time of the year.
where,
is the solar constant in W/m^2
N is the day of the year measured from January 1
On a cloudless day, about 70% of the extraterrestrial radiation reaches the earth's surface. With dense cloud
cover, about 25% of the extraterrestrial radiation can still reach the land surface in the form of diffuse energy.
where
So is the estimated daily extraterrestrial solar radiation or insolation in
Tt is the total radiative transmittance
Total transmittance can be estimated as a function of minimum and maximum temperatures as:
where
A, B and C are empirical coefficients
is the range of daily minimum and maximum temperature
A is the clear sky transmittance which is the fraction of radiation transmitted to the earth's surface on a clear
day free of cloud cover. This varies based on elevation and air pollution at the site of interest and is typically
estimated as approximately 0.7. Values for B and C dictate how rapidly Tt increases in relation to the range in
daily temperatures. Single values of A (0.7) and C (2.4) are defined by the user for the Bristow-Campbell
method in HMS. B may vary seasonally based on the average monthly temperature range as defined by:
Required Parameters
The parameters required to utilize this method within HEC-HMS are the clear sky transmittance, the exponent
coefficient and the average monthly temperature range.
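A minimal sketch of the Bristow-Campbell transmittance calculation is shown below, assuming the commonly published forms Tt = A·[1 − exp(−B·ΔT^C)] and B = 0.036·exp(−0.154·ΔT_month); since the equations are not reproduced above, these forms and the sample temperature ranges are assumptions.

```python
import math

def bristow_campbell_transmittance(delta_t_daily_c, delta_t_monthly_c,
                                   a=0.7, c=2.4):
    """Total transmittance Tt from the daily temperature range, with the
    B coefficient estimated from the average monthly temperature range
    (published Bristow-Campbell forms assumed)."""
    b = 0.036 * math.exp(-0.154 * delta_t_monthly_c)
    return a * (1.0 - math.exp(-b * delta_t_daily_c ** c))

# Example: a 12 deg C daily range in a month averaging a 10 deg C range.
print(round(bristow_campbell_transmittance(12.0, 10.0), 3))
```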
where
n is the actual duration of sunshine [hour],
N is the maximum possible duration of sunshine or daylight hours [hour],
n/N is the relative sunshine duration [-],
is the extraterrestrial radiation [MJ m-2 day-1],
is the fraction of extraterrestrial radiation reaching the earth on completely overcast days (when n=0)
is the fraction of extraterrestrial radiation reaching the earth on clear days (when n = N).
The duration of sunshine, n, is recorded with a Campbell Stokes sunshine recorder and can be input as a
Sunshine Gage in HEC-HMS.
The extraterrestrial radiation for a given day of the year is estimated based on the solar constant, solar
declination angle, time of year and the area of interest's location with Equation 21 from FAO56:
where
extraterrestrial radiation [MJ m-2 day-1],
solar constant = 0.0820 MJ m-2 min-1,
inverse relative distance Earth-Sun (Equation 23),
sunset hour angle (Equation 25 or 26) [rad],
ϕ latitude [rad] = pi/180 * decimal degrees
δ solar declination (Equation 24) [rad].
The inverse relative distance Earth-Sun, , and the solar declination, δ, are given by Equations 23 and 24 in
FAO56:
FAO56 also provides methodology for computing on a sub-daily timestep based on solar angles at the
beginning and ending of the time interval with FAO56 Equation 28:
where
t is standard clock time at the midpoint of the period [hour].
is the longitude at the center of the local time zone [degrees west of Greenwich].
is the longitude of the measurement site [degrees west of Greenwich],
seasonal correction for solar time [hour]
The seasonal correction, , is given by FAO56 Equation 32 and 33:
Finally, daylight hours for a given latitude on earth, N, is given by FAO56 Equation 34:
Required Parameters
The only parameters required to utilize this method within HEC-HMS are the decimal degrees or degrees
minute seconds of the central meridian of the local time zone and a sunshine gage in each subbasin
assigned in the Meteorologic Model.
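The sketch below assembles the FAO-56 relationships referenced above (Equations 21, 23, 24, 25, and 34) into a daily estimate of extraterrestrial radiation and daylight hours, and then applies an Angstrom-type sunshine fraction. The as = 0.25 and bs = 0.50 values are the FAO-56 recommended defaults for use when no local calibration is available; treat the block as an illustrative sketch rather than the program's internal code.

```python
import math

GSC = 0.0820  # solar constant, MJ m-2 min-1

def extraterrestrial_radiation(day_of_year, lat_deg):
    """Daily extraterrestrial radiation Ra [MJ m-2 day-1] and daylight
    hours N from the FAO-56 equations referenced above."""
    phi = math.radians(lat_deg)
    dr = 1.0 + 0.033 * math.cos(2.0 * math.pi * day_of_year / 365.0)      # Eq 23
    delta = 0.409 * math.sin(2.0 * math.pi * day_of_year / 365.0 - 1.39)  # Eq 24
    ws = math.acos(-math.tan(phi) * math.tan(delta))                      # Eq 25
    ra = (24.0 * 60.0 / math.pi) * GSC * dr * (
        ws * math.sin(phi) * math.sin(delta)
        + math.cos(phi) * math.cos(delta) * math.sin(ws))                 # Eq 21
    daylight_hours = 24.0 / math.pi * ws                                  # Eq 34
    return ra, daylight_hours

def solar_radiation_from_sunshine(n_hours, day_of_year, lat_deg,
                                  a_s=0.25, b_s=0.50):
    """Angstrom-type estimate Rs = (as + bs * n/N) * Ra from recorded
    sunshine duration n and maximum possible duration N."""
    ra, n_max = extraterrestrial_radiation(day_of_year, lat_deg)
    return (a_s + b_s * min(n_hours, n_max) / n_max) * ra

print(extraterrestrial_radiation(196, 45.0))              # mid-July, 45 deg N
print(round(solar_radiation_from_sunshine(10.0, 196, 45.0), 1))
```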
where
is a Hargreaves shortwave coefficient
is the extraterrestrial radiation
is the difference between the mean maximum temperature and mean minimum temperature
The extraterrestrial radiation, , is computed in HMS using equations from Food and Agricultural
Organization Paper No. 56. Details for these equations can be found in FAO 56 Shortwave Radiation
Method5. The default Hargreaves shortwave coefficient is 0.17 per square root of degrees Celsius; this is
equivalent to 0.1267 per square root of degrees Fahrenheit. The default Hargreaves shortwave coefficient of
0.17 per square root of degree Celsius is implicit in the Hargreaves and Samani (1985) potential
evapotranspiration formulation. The Hargreaves shortwave coefficient can be adjusted by the user.
Required Parameters
The only parameters required to utilize this method within HEC-HMS are the Hargreaves coefficient [deg C^-0.5]
and the central meridian of the time zone. In addition, air temperature must be specified as a meteorologic
boundary condition.
While HEC-HMS provides a default coefficient value of 0.17 deg C^-0.5 (0.1267 deg F^-0.5), this value must be
calibrated and validated.
5 https://www.hec.usace.army.mil/confluence/display/HMSTRM/FAO56+Method
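A short sketch of the Hargreaves-type shortwave estimate is shown below: Rs = Krs·sqrt(Tmax − Tmin)·Ra, with Krs the shortwave coefficient (default 0.17 per square root of degrees Celsius) and Ra the extraterrestrial radiation, which can be computed with the FAO-56 sketch shown earlier. The example numbers are hypothetical.

```python
import math

def hargreaves_shortwave(ra, t_max_c, t_min_c, krs=0.17):
    """Estimate solar radiation Rs from the diurnal temperature range and
    extraterrestrial radiation Ra (result is in the same units as Ra)."""
    return krs * math.sqrt(max(t_max_c - t_min_c, 0.0)) * ra

# Hypothetical day: Ra = 40 MJ m-2 day-1, Tmax = 30 C, Tmin = 18 C.
print(round(hargreaves_shortwave(40.0, 30.0, 18.0), 1))
```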
Nearby topography can shade the land surface, resulting in a decrease in the solar radiation reaching that
location. This reduction can be estimated using solar azimuth (Duffie and Beckman, 1980) and solar
elevation angles along with topography geometry to determine if the direct line of solar radiation is blocked
from reaching a given cell's surface. If the sun rays are blocked, the shading factor is set to 0 for that hour; if
it is not blocked, it is set to 1.
In HEC-HMS, this equation does not take into account canopy reduction, cloud absorption, or topographic
shading, so the corresponding reduction factors are taken as 1.
Required Parameters
The user must select the method for computing Solar Declination, Aspect Reduction, Earth Distance
Reduction, and Atmospheric Absorption Reduction. The methods are pre-loaded into HEC-HMS and are
selected from a dropdown list in the component editor.
Required Parameters
At least one radiation gage with observed data must be input to use this method. The user will assign a
radiation gage to each subbasin within the Meteorologic Model editor.
Note
This is the recommended choice for use with the Priestley Taylor Evapotranspiration Method, where an
effective radiation is used which includes both shortwave and longwave radiation.
• Nearest Neighbor
Required Parameters
Radiation Gages used in this method must be loaded in as time-series of radiation data with defined latitude
and longitude information. The basin model should be georeferenced so gages can be accurately applied to
subbasins.
Gridded Shortwave
The most common use of the method is to utilize gridded shortwave radiation estimates produced by an
external model, for example, a dynamic atmospheric model. If a gridded shortwave radiation estimate is
used with a transform method other than ModClark, an area-weighted average of the grid cells in the
subbasin is used to compute the shortwave radiation time-series for each subbasin.
The atmospheric models used for research, weather, climate and air quality forecasting use integration of
the main equations governing atmospheric behavior. These equations include the gas law, the continuity of
mass, the first law of thermodynamics (heat), and Navier Stokes equations. All popular atmospheric models
solve the Navier-Stokes equation numerically.
Two principal numerical methods for solving the Navier-Stokes equation are the finite-difference method
and the spectral method. The partial time derivatives and the partial spatial derivatives in the Navier-Stokes
equation can be approximated by finite differences. The spectral method uses Fourier's theorem, which
states that any periodic signal can be expanded as a Fourier series which is a summation of sine and cosine
waves.
An overview of atmospheric models and governing numerical equations can be found here: Atmospheric
Model Overview6.
6 https://www.sciencedirect.com/topics/earth-and-planetary-sciences/atmospheric-model
Longwave Radiation
All living and non-living bodies emit Longwave Radiation. The magnitude of the radiation is proportional to
the temperature (measured in kelvins) of the body raised to the fourth power. Significant sources of
longwave radiation in hydrologic applications include the atmosphere itself, and any clouds that may be
present locally in the atmosphere. Clouds usually have a higher heat content and higher temperature than
clear atmosphere, and therefore there is increased downwelling longwave radiation on cloudy days. Whether
the atmosphere and clouds are a net source of longwave radiation to the land surface depends on their
temperature relative to the land surface temperature. In most cases, the net longwave radiation is incoming
during the daylight hours, and outgoing during the night hours.
The Longwave Radiation Method included in the Meteorologic Model is only necessary when Energy Balance
Methods are used for Evapotranspiration or Snowmelt. Each option produces the Downwelling Longwave
Radiation arriving at the land surface.
where
Required Parameters
Temperature (or Dew Point Temperature) and Windspeed methods must be selected in the Meteorologic
Model. It is required to input the central meridian of the local time zone of the basin model area. There is
currently no specification for the time zone so the meridian must be specified manually. Meridians west of
zero longitude should be specified as negative while meridians east of zero longitude should be specified as
positive. A representative elevation for each subbasin is also required.
Satterlund Longwave
The Satterlund equation was developed to achieve better agreement between calculated and measured
values of longwave radiation. Two equations for estimating radiation used at the time, Idso and Jackson,
1969 and Brutsaert, 1975, were found to yield large differences in comparison to measured radiation at
temperatures below 0°C.
1. It does not yield a value in excess of ideal black body radiation at any temperature or humidity
extreme,
3. It does not yield values lower than those due to the CO2 content of the air.
To ensure the carbon dioxide constraint was met, a variable exponent of vapor pressure was needed in the
emissivity, ε, term of the new equation:
where
is the screen vapor pressure in millibars,
is the screen air temperature in Kelvin and
b is an empirical constant equal to 2016
This emissivity term is used in the final form of the Satterlund equation for longwave radiation, :
where
Required Parameters
Values for a temperature and emissivity coefficient are required for this method. A default value for the
temperature coefficient, b, is 2016 Kelvin and is provided for the user. The emissivity coefficient is included
for calibration purposes; however, the default value of 1.08 is widely used. Temperature, Windspeed, and Dew
Point methods must be selected in the Meteorologic Model.
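The sketch below applies the published Satterlund (1979) emissivity relationship, ε = 1.08·[1 − exp(−e^(T/2016))], with the screen vapor pressure e in millibars and the air temperature T in kelvins, to black-body emission at the air temperature. Since the manual's equation is not reproduced above, the functional form is an assumption based on the published reference.

```python
import math

STEFAN_BOLTZMANN = 5.670e-8   # W m-2 K-4

def satterlund_longwave(air_temp_k, vapor_pressure_mb,
                        emissivity_coefficient=1.08, b=2016.0):
    """Downwelling longwave radiation (W m-2) from the Satterlund-type
    emissivity applied to black-body emission at the air temperature."""
    emissivity = emissivity_coefficient * (
        1.0 - math.exp(-vapor_pressure_mb ** (air_temp_k / b)))
    return emissivity * STEFAN_BOLTZMANN * air_temp_k ** 4

# Example: -5 deg C air (268 K) with a screen vapor pressure of 3 mb.
print(round(satterlund_longwave(268.0, 3.0), 1))
```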
Specified Pyrgeometer
A Pyrgeometer is an instrument that can measure Downwelling Longwave Radiation. They are not part of
basic meteorological observation stations, but may be included at first-order stations.
Any object that has a temperature greater than absolute zero emits electromagnetic radiation. The
wavelengths and intensity of the emitted radiation are proportional to the temperature of the item. The
pyrgeometer is an instrument that measures the rate at which these emitting surfaces lose heat to space by
measuring the downward longwave (infrared) radiation in the mid-infrared region of the electromagnetic
spectrum (about 4-50 µm).
Pyrgeometers measure downward longwave radiation with a resistance-changing element. When the
element is placed in the path of infrared radiation, its resistance and voltage change in proportion to the
amount of energy transferred. The thermopile sensor is encased in a black material to ensure that as much
infrared radiation as possible is absorbed. When infrared radiation hits the thermopile sensor, two wires of
different materials, typically nickel and copper, absorb energy and heat to different temperatures; this
difference creates the voltage signal from which the downward radiation is determined. Measurements from
pyrgeometers reflect atmospheric variables such as water vapor and clouds that can absorb or reflect
longwave radiation. Finally, the data are used to calculate the infrared radiation flux, expressed in W/m².
Note
This is the recommended choice for use with the Priestley Taylor Evapotranspiration Method, where an
effective radiation is used which includes both shortwave and longwave radiation.
Interpolated Longwave
Similar to the Specified Pyrgeometer method, the Interpolated Longwave method within HEC-HMS requires
the user to input radiation gages of observed longwave radiation data within the study area. Unlike Specified
Pyrgeometer, multiple gages can be applied to a single subbasin through interpolation. A radius of influence
can be used to exclude gages beyond a certain distance from a subbasin node. Gage readings are then
interpolated across nearby gages in accordance with the interpolation method chosen by the user:
• Inverse Distance
• Inverse Distance Squared
• Nearest Neighbor
• Bilinear
The Inverse Distance interpolation method assumes the weight, or influence, of a gage is equal to the inverse
of its distance from the interpolated cell. The Inverse Distance Squared interpolation method assumes the
weight of a gage is equal to the inverse of the square of its distance from the interpolated cell. The Nearest
Neighbor interpolation method simply assigns the nearest value to the cell center of interest without
considering values of other nearby points. Bilinear interpolation within HEC-HMS relies on triangulation of the
irregularly spaced gage locations. Based on the gage coordinates, a Triangulated Irregular Network (TIN) is
created to represent the gage network in the basin model coordinate system. This TIN defines triangles,
where each gage is a corner of one or more triangles. Given this TIN, a value at any given point is computed
by first identifying the triangle in which that point falls, then interpolating within that triangle using
Barycentric Coordinates. Three or more radiation gages must be loaded into the model and the gages need
to bound all grid cells.
The result of this method is a computed, continuous longwave radiation grid.
Required Parameters
Radiation Gages used in this method must be loaded in as time-series of radiation data with defined latitude
and longitude information. The basin model should be georeferenced so gages can be accurately applied to
subbasins.
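The TIN/barycentric procedure described above can be illustrated with a small sketch that uses SciPy's Delaunay triangulation: the enclosing triangle for a point is found, barycentric coordinates are computed from the triangle's affine transform, and the gage values at the triangle's corners are weighted accordingly. The gage locations and values below are hypothetical, and SciPy is used only for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

def tin_interpolate(gage_xy, gage_values, point_xy):
    """Interpolate a value at point_xy from irregularly spaced gages using a
    Delaunay TIN and barycentric coordinates. Returns None when the point
    is not bounded by the gage network."""
    tri = Delaunay(np.asarray(gage_xy, dtype=float))
    simplex = int(tri.find_simplex(np.asarray([point_xy], dtype=float))[0])
    if simplex < 0:
        return None
    transform = tri.transform[simplex]
    bary2 = transform[:2].dot(np.asarray(point_xy, dtype=float) - transform[2])
    bary = np.append(bary2, 1.0 - bary2.sum())      # third barycentric coord
    corners = tri.simplices[simplex]
    return float(np.dot(bary, np.asarray(gage_values, dtype=float)[corners]))

gages = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
values = [310.0, 290.0, 305.0, 295.0]    # e.g., longwave radiation, W m-2
print(round(tin_interpolate(gages, values, (4.0, 6.0)), 1))
```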
where
is the Stefan-Boltzmann constant
is air emissivity
N is fractional cloud cover
Within HEC-HMS, the air emissivity is taken from Bras (1990), given by the equation:
where
RH is the relative humidity in percent and
is the saturation vapor pressure
The saturation pressure is taken from either Smith (1993) or FAO (1998).
Smith (1993) defines saturation vapor pressure, , in millibars as:
where
T is the air temperature in Celsius
The FAO estimate for in kPa is given by equation 11 of FAO (1998):
7 https://www.sciencedirect.com/topics/earth-and-planetary-sciences/atmospheric-model
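For reference, FAO-56 Equation 11 gives the saturation vapor pressure in kPa as e0(T) = 0.6108·exp[17.27·T / (T + 237.3)] with T in degrees Celsius; a minimal sketch is shown below.

```python
import math

def saturation_vapor_pressure_kpa(air_temp_c):
    """Saturation vapor pressure (kPa) from FAO-56 Equation 11."""
    return 0.6108 * math.exp(17.27 * air_temp_c / (air_temp_c + 237.3))

print(round(saturation_vapor_pressure_kpa(25.0), 3))   # about 3.17 kPa at 25 C
```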
Pressure
Atmospheric pressure, also referred to as barometric pressure, is the force exerted against the surface of the
earth by the weight of the air above it. Pressure is affected by a variety of factors including Air Temperature,
Altitude, and Humidity. It plays an integral role in several other meteorologic processes available to be
modeled in HEC-HMS including Evapotranspiration, Relative Humidity, Longwave Radiation, and Snowmelt.
In HEC-HMS, the Pressure Method is a required component of the Meteorologic Model when certain types of
Evapotranspiration Methods or Longwave Radiation Methods are selected. The types of Pressure Methods
currently available include Gridded Pressure, Specified Barograph, Interpolated Pressure, and Barometric
Pressure.
Barometric Pressure
The Barometric Pressure Method implements an atmospheric pressure algorithm described within Anderson
(2006)8 and Follum et al (2015)9 that is based on a standard atmospheric altitude versus pressure
relationship. The algorithm for atmospheric pressure, , is defined as:
where
is the elevation (meters) above sea-level.
8 https://www.weather.gov/media/owp/oh/hrl/docs/22snow17.pdf
9 https://www.researchgate.net/publication/281358823_A_Radiation-Derived_Temperature-
Index_Snow_Routine_for_the_GSSHA_Hydrologic_Model
Required Parameters
A Lapse Rate must be defined for each subbasin. Although optional, it is recommended that a terrain model
is associated with the basin model and that the basin model is georeferenced. Otherwise, pressure will be
calculated assuming sea-level elevation.
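As an illustration of an altitude-versus-pressure relationship of the kind described above, the sketch below uses the conventional standard-atmosphere formula P = 101.325·(1 − 2.25577e-5·z)^5.25588 with z in meters and P in kPa. This formula is an assumption for illustration; the exact equation used by the Barometric Pressure Method is given in the Anderson (2006) and Follum et al. (2015) references cited above.

```python
def standard_atmosphere_pressure_kpa(elevation_m):
    """Atmospheric pressure (kPa) from the conventional standard-atmosphere
    altitude-pressure relationship (assumed form, for illustration only)."""
    return 101.325 * (1.0 - 2.25577e-5 * elevation_m) ** 5.25588

for z in (0.0, 500.0, 1500.0, 3000.0):
    print(f"{z:6.0f} m  {standard_atmosphere_pressure_kpa(z):6.1f} kPa")
```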
Gridded Pressure
The most common use of the Gridded Pressure Method is to utilize gridded pressure estimates produced by
an external source. Using additional software, it is possible to develop a gridded representation of pressure
data or to use output from atmospheric models. If it is used with a transform method other than ModClark,
an area-weighted average of the grid cells in the subbasin is used to compute the pressure time-series for
each subbasin.
Required Parameters
The Gridded Pressure Method requires the selection of an Air Pressure Gridset, which must first be created
in HEC-HMS as a Grid Data object.
Interpolated Pressure
Similar to the Specified Barograph Method, the Interpolated Pressure Method in HEC-HMS requires the
user to input air pressure gages of observed atmospheric pressure within the study area. Unlike Specified
Barograph, multiple gages can be applied to a single subbasin through interpolation. A radius of influence
can be used to exclude gages beyond a certain distance from a subbasin node. Gage readings are then
interpolated across nearby gages in accordance with the interpolation method chosen by the user:
• Inverse Distance
• Inverse Distance Squared
• Nearest Neighbor
• Bilinear
The Inverse Distance interpolation method assumes the weight, or influence, of a gage is equal to the
inverse of its distance from the interpolated cell. The Inverse Distance Squared interpolation method
assumes the weight of a gage is equal to the inverse of the square of its distance from the interpolated
cell. The Nearest Neighbor interpolation method simply assigns the nearest value to the cell center of
interest without considering values of other nearby points. Bilinear interpolation within HEC-HMS relies on
triangulation of the irregularly spaced gage locations. Based on the gage coordinates, a Triangulated
Irregular Network (TIN) is created to represent the gage network in the basin model coordinate system. This
TIN defines triangles, where each gage is a corner of one or more triangles. Given this TIN, a value at any
given point is computed by first identifying the triangle in which that point falls, then interpolating within that
triangle using Barycentric Coordinates. Three or more pressure gages must be loaded into the model and the
gages need to bound all grid cells.
The result of this method is a computed, continuous atmospheric pressure gridset.
Specified Barograph
A Barograph is an instrument that can measure Atmospheric Pressure over time. They are typically included
with basic meteorological observation stations. Traditionally, most barographs consisted of a pen arm that
would rise and descend in accordance with relative changes in atmospheric pressure. Today, most
mechanical barographs have been replaced with electronic barographs that use computer-based methods to
measure pressure.
Required Parameters
At least one Air Pressure Gage, created as a Time-Series Data object, with observed data must be input to
use this method. Values may be specified in units of Kilo Pascals or Inches of Mercury. The user will assign
an air pressure gage to each subbasin within the Meteorologic Model editor.
Required Parameters
The Gridded Dew Point Temperature Method requires the selection of a Temperature Gridset, which must
first be created in HEC-HMS as a Grid Data object. Temperatures should be in units of degrees Fahrenheit or
degrees Celsius.
Required Parameters
The Gridded Humidity Method requires the selection of a Humidity Gridset, which must first be created in
HEC-HMS as a Grid Data object. The units of the gridset should be expressed in percentages.
• Inverse Distance
• Inverse Distance Squared
• Nearest Neighbor
• Bilinear
The Inverse Distance interpolation method assumes the weight, or influence, of a gage is equal to the
inverse of its distance from the interpolated cell. The Inverse Distance Squared interpolation method
assumes the weight of a gage is equal to the inverse of the square of its distance from the interpolated
cell. The Nearest Neighbor interpolation method simply assigns the nearest value to the cell center of
interest without considering values of other nearby points. Bilinear interpolation within HEC-HMS relies on
triangulation of the irregularly spaced gage locations. Based on the gage coordinates, a Triangulated
Irregular Network (TIN) is created to represent the gage network in the basin model coordinate system. This
TIN defines triangles, where each gage is a corner of one or more triangles. Given this TIN, a value at any
given point is computed by first identifying the triangle in which that point falls, then interpolating within that
triangle using Barycentric Coordinates. Three or more temperature gages must be loaded into the model and
the gages need to bound all grid cells.
The result of this method is a computed, continuous dew point temperature gridset.
Required Parameters
Temperature Gages used in this method must be input as a time-series of dew point data with defined
latitude and longitude information. The basin model should be georeferenced so gages can be accurately
applied to subbasins. An Interpolation Method must be chosen and then a Radius of Influence for each gage
must be defined in kilometers or miles.
Interpolated Humidity
Similar to the Specified Humidograph Method, the Interpolated Humidity Method in HEC-HMS requires the
user to input humidity gages with time-series values of percent humidity within the study area. Unlike the
specified approach, multiple gages can be applied to a single subbasin through interpolation. A radius of
influence can be used to exclude gages beyond a certain distance from a subbasin node. Gage readings are
then interpolated across nearby gages in accordance with the interpolation method chosen by the user:
• Inverse Distance
• Inverse Distance Squared
• Nearest Neighbor
• Bilinear
The Inverse Distance interpolation method assumes the weight, or influence, of a gage is equal to the
inverse of its distance from the interpolated cell. The Inverse Distance Squared interpolation method
assumes the weight of a gage is equal to the inverse of the square of its distance from the interpolated
cell. The Nearest Neighbor interpolation method simply assigns the nearest value to the cell center of
interest without considering values of other nearby points. Bilinear interpolation within HEC-HMS relies on
triangulation of the irregularly spaced gage locations. Based on the gage coordinates, a Triangulated
Irregular Network (TIN) is created to represent the gage network in the basin model coordinate system. This
TIN defines triangles, where each gage is a corner of one or more triangles. Given this TIN, a value at any
given point is computed by first identifying the triangle in which that point falls, then interpolating within that
triangle using Barycentric Coordinates. Three or more humidity gages must be loaded into the model and the
gages need to bound all grid cells.
The result of this method is a computed, continuous humidity gridset.
Required Parameters
Humidity Gages used in this method must be input as a time-series of humidity data with defined
latitude and longitude information. The basin model should be georeferenced so gages can be accurately
applied to subbasins. An Interpolation Method must be chosen and then a Radius of Influence for each gage
must be defined in kilometers or miles.
Required Parameters
The user must specify a precipitation rate threshold in units of inches per day or millimeters per day.
Additionally, a wet humidity and dry humidity value must be specified, both in units of percent.
10 https://www.weather.gov/media/owp/oh/hrl/docs/22snow17.pdf
11 https://www.researchgate.net/publication/281358823_A_Radiation-Derived_Temperature-
Index_Snow_Routine_for_the_GSSHA_Hydrologic_Model
Required Parameters
At least one Temperature Gage, created as a Time-Series Data object, with measured or computed dew
point temperatures must be input to use this method. Values may be specified in units of Fahrenheit or
Celsius. The user will assign a temperature gage to each subbasin within the Meteorologic Model editor.
Specified Humidograph
Whereas dew point is an absolute measurement, humidity is expressed as a percentage. Humidity can be
measured using a hygrometer. Relative humidity is a ratio of the amount of water vapor in a given volume of
air to the maximum amount of water vapor that can be held within the same volume.
Humidity values can be specified directly within HEC-HMS using the Specified Humidograph Method.
Required Parameters
At least one Humidity Gage, created as a Time-Series Data object, with humidity values must be input to use
this method. Values must be specified as a percentage. The user will assign a humidity gage to each
subbasin within the Meteorologic Model editor.
Snowpack/Snow Cover
The terms snowpack and snow cover are often used interchangeably but they do have slightly different
meanings. The term snowpack is used when referring to the physical and mechanical properties of the snow
on the ground. The term snow cover is used when referring to the snow accumulation on ground, and in
particular, the areal extent of the snow-covered ground. This distinction will be respected for the most part.
Both snowpack and snow cover refer to the total snow and ice on the ground, including both new snow and
any existing un-melted snow and ice.
From the time of its deposition until melting, snow on the ground is a fascinating and unique material. Snow
is a highly porous, sintered material made up of a continuous ice structure and a continuously connected
pore space, forming together the snow microstructure. As the temperature of snow is almost always near its
melting temperature, snow on the ground is in a continuous state of transformation, known as
metamorphism.
At the melting temperature, liquid water may partially fill the pore space. In general, therefore, all three
phases of water - ice, vapor, and liquid - can coexist in snow on the ground.
Snowpack
A laterally extensive accumulation of snow on the ground that persists through winter and melts
in the spring and summer. AMS glossary
Snowpack
The total snow and ice on the ground, including both new snow and the previous snow and ice
which have not melted. NSIDC glossary
Snowpack
The accumulation of snow at a given site and time; term to be preferably used in conjunction
with the physical and mechanical properties of the snow on the ground. UNESCO glossary
Snow cover
In general, the accumulation of snow on the ground surface, and in particular, the areal extent of
snow-covered ground (NSIDC, 2008); term to be preferably used in conjunction with the
climatologic relevance of snow on the ground. UNESCO glossary
Seasonal Snow
In almost all cases, the HEC-HMS snowmelt model is applied to seasonal snow. Seasonal snow is snow that
accumulates during one season and does not last for more than one year. An example of seasonal snow is
shown in Figure 1. The chart displays the Snow Water Equivalent (SWE) measured on Mount
Alyeska in Alaska over the winter of 2000-01. Notice that the chart starts with a near monotonic increase in
SWE up to the Annual Maximum SWE. This is the accumulation period. After the Annual Maximum SWE the
ablation or melting period occurs. In many cases, the accumulation period is longer than the ablation period.
In this case, the accumulation lasts for 7 months and the melt for 2 months. Other sites will vary in the timing
and length of the accumulation and ablation periods. Some sites may have two or more accumulation
periods and the SWE may drop to zero between the periods.
Temperature gradients and sub-freezing temperatures are more likely to exist in the snowpack during the
accumulation period. Uniform temperatures throughout the snowpack at 32°F (0°C) are most likely to exist
during the ablation period, especially during periods of continuous snowmelt.
Maritime (75-500, >15) — A warm, deep snow cover; max depth can be in excess of 300 cm. Melt features
(ice layers, percolation columns) very common. Coarse-grained snow due to wetting ubiquitous. Basal
melting common.
Snow Metamorphism
A snowpack is not static. The ice crystals and grains that comprise the snowpack are constantly evolving
and changing throughout the winter season. This process is called snow metamorphism. There are two
major processes of snow metamorphism that can occur. The first results from the general tendency of ice
crystals to change their form to a more spherical shape. The rate of this metamorphism depends on the
temperature of the snowpack. The closer the pack temperature is to 0°C (32°F) the faster metamorphism will
occur. It proceeds relatively rapidly when the snowpack is melting. The second process of snowpack
metamorphism is driven by vertical temperature gradients in the snowpack. The temperature gradient
creates a vapor gradient which effectively moves water molecules between ice crystals by vapor transport. A
strong temperature gradient through the snowpack (>10°C m-1) may result in the formation of new, relatively
large, ice crystals within the snowpack termed depth hoar. This second form of metamorphism is common in
the arctic but it can occur anywhere the difference between cold air temperatures and relatively warm ground
temperatures is large and long lasting.
The first process of metamorphism occurs in all snowpacks. Its beginnings lie in the incredible variety of ice
crystal shapes deposited on a snowpack. Something most crystal shapes have in common on reaching the
snowpack is large surface-area-to-volume ratios. These large ratios are created during the rapid crystal
growth that occurs in the atmosphere when snow crystals form. When the ice crystals are incorporated into
the snowpack they no longer grow and tend towards their spherical equilibrium form. Snow metamorphism
describes the change in the snow crystals and grains to less angular, more rounded forms with time. This
type of metamorphism causes a gradual increase in the snowpack density, a reduction in the surface
reflection of sunlight (described by the surface albedo) and changes other snowpack properties with time.
Metamorphism occurs quickly when the snowpack is melting. This rapid metamorphism causes the surface
albedo to decline relatively rapidly. The decline in albedo increases the shortwave radiation (sunlight) that
can be absorbed by the snowpack and can increase the rate of snowmelt.
Hydrologic snowmelt models generally do not model snow metamorphism directly. Some approaches do
model the changes in the snowpack density and albedo that result from metamorphism, as will be shown
later.
Snow Properties
The primary physical properties of a snowpack important to hydrology are three properties that can vary
from point to point: Snow Water Equivalent (SWE), snowpack temperature, as represented by its Cold Content,
and Liquid Water Content; and one spatial property: the Snow Covered Area (SCA). The point properties are
applicable at a specific location at a specific time, and represent the entire single layer of the snowpack at
that location. These properties will vary from location to location throughout a watershed and with time. The
primary spatial property of the snow cover for hydrology is the SCA. SCA describes the area of a watershed
that is snow covered. SCA often changes with time, especially during periods of snowmelt.
The depth-averaged snow density is defined by ρs = ρw · SWE / D, where ρw = the density of water; SWE = the snow water equivalent; ρs = the depth-averaged snow density;
and D = the snow depth. Snow depth denotes the total height of the snowpack, i.e., the vertical distance from
the ground to the snow surface. Unless otherwise specified, SWE and snow depth are related to a single
location at a given time. Snow density, i.e., mass per unit volume, is normally determined by weighing snow
of a known volume. Theoretically, total snow density encompasses all constituents of the snowpack - ice,
liquid water, vapor, and air. In practice, only the ice and liquid water are included in estimates of SWE as the
air and water vapor make only a negligible contribution to the density. Rearranging equation
As shown in equation , if D and are both known, then SWE can be estimated (as the density of water is
always known). It is also clear that D alone is not sufficient to estimate SWE unless some estimate of snow
density can be made. In fact, snow depth is a parameter that is of little use in snowmelt hydrology, except
when used to estimate SWE.
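As a quick illustration of this relationship, the sketch below converts a hypothetical depth and depth-averaged density to SWE.

```python
# Sketch: SWE from snow depth and depth-averaged density, SWE = (rho_s / rho_w) * D.
RHO_WATER = 1000.0  # density of water, kg/m^3

def swe_from_depth(depth_m: float, snow_density_kg_m3: float) -> float:
    """Return snow water equivalent (m) for a given depth (m) and density (kg/m^3)."""
    return (snow_density_kg_m3 / RHO_WATER) * depth_m

# Hypothetical example: 0.80 m of settled snow at 300 kg/m^3
print(swe_from_depth(0.80, 300.0))  # 0.24 m of water (240 mm SWE)
```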
(Table: typical snow densities (kg/m3) by snow type, and the snow depth corresponding to one inch of water.)
While 0°C (32°F) is often called the melting temperature of ice, ice happily survives at 0°C (32°F).
It is only when heat is transferred into ice which is at 0°C (32°F) that melting, the phase change
from solid ice to liquid water, occurs. Similarly, liquid water also happily survives at 0°C (32°F). It
is only when heat is transferred away from liquid water which is at 0°C (32°F) that freezing, the
phase change from liquid water to solid ice, occurs. If ice and liquid water are coexisting in a
given volume, their masses are not changing with time, and there is no heat transfer into or out
of the volume, then their temperature must be 0°C (32°F).
In many cases, during the accumulation phase of the winter season, a difference in temperature exists
between the top surface and the base of the snowpack. This temperature gradient may be large or small
depending on the conditions. Small temperature gradients occur with deep snow and moderately cold
temperatures. Large gradients tend to occur in shallow snowpacks with very cold air temperatures.
Large gradients can drive heat and water vapor from the warmer portions of the snowpack (generally the
base) to the colder portions (generally the surface). This heat and mass flux can lead to rapid
metamorphism, causing changes in the ice crystal size and shape.
The snowpack temperature cannot be greater than 0°C (32°F), the temperature at which ice and water
coexist. If the snowpack temperature is less than 0°C (32°F) then liquid water cannot exist for any length of
time in the snowpack. If the snowpack temperature is at 0°C (32°F), then liquid water can exist in the
snowpack. The snowpack is at 0°C (32°F) and isothermal, with uniform temperature from surface to base
(zero temperature gradient), during the active melt period.
The concept of “Cold Content” grew directly out of attempts to predict snowmelt using the Temperature
Index approach. Snowmelt cannot occur unless the snowpack temperature is at 0°C (32°F). Cold content is
the heat necessary to warm the snowpack up to 0°C (32°F) in terms of the amount of ice this heat would
melt.
The heat required per unit area to raise the temperature of the snowpack to 32°F (0°C), Hc_heat, is

$$H_{c,heat} = \bar{\rho}_s \, C_{p,ice} \, D \, \Delta T$$

where ρ̄s = the depth-averaged snow density; Cp_ice = the heat capacity of ice; D = the snow depth; and ΔT =
the temperature below 0°C (32°F).
In practice, the flow of heat is not estimated during snowmelt modeling when using the Temperature Index
approach. Rather (as will be shown in the section on modeling below), the snowpack temperature is more
conveniently described by expressing the Cold Content as the negative of the depth of frozen water that
Hc_heat would melt:

$$C_c = -\frac{H_{c,heat}}{\rho_w \, \lambda}$$

where λ = the latent heat of fusion of water; ρw = the density of water; and Cc = the Cold Content in units of
negative inches (or cm).
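A minimal sketch of the cold-content definitions above, using standard approximate values for the heat capacity of ice and the latent heat of fusion; the example snowpack is hypothetical.

```python
# Sketch: cold content of a below-freezing snowpack, following the definitions above.
RHO_WATER = 1000.0        # density of water, kg/m^3
CP_ICE = 2100.0           # heat capacity of ice, J/(kg K) (approximate)
LAMBDA_FUSION = 334000.0  # latent heat of fusion of water, J/kg (approximate)

def cold_content_m(snow_density: float, depth_m: float, pack_temp_c: float) -> float:
    """Cold content expressed as a (negative) depth of water, in meters.

    pack_temp_c is the average snowpack temperature in deg C (<= 0)."""
    delta_t = 0.0 - pack_temp_c                                  # degrees below freezing
    heat_deficit = snow_density * CP_ICE * depth_m * delta_t     # J/m^2 to warm the pack to 0 C
    return -heat_deficit / (RHO_WATER * LAMBDA_FUSION)           # negative depth of water

# Hypothetical example: 1.0 m deep pack, 250 kg/m^3, average temperature -5 C
print(cold_content_m(250.0, 1.0, -5.0))  # about -0.0079 m (roughly -0.3 in)
```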
| Wetness class | Description | Water content (% by volume) | Midpoint (%) |
|---|---|---|---|
| moist | Ts = 0°C. The water is not visible even at 10x magnification. When lightly crushed, the snow has a distinct tendency to stick together. | 0–3 | 1.5 |
| very wet | Ts = 0°C. The water can be pressed out by moderately squeezing the snow in the hands, but an appreciable amount of air is confined within the pores (funicular regime). | 8–15 | 11.5 |
| soaked or slush | Ts = 0°C. The snow is soaked with water and contains a volume fraction of air from 20 to 40% (funicular regime). | >15 | >15 |
Pendular regime (of water) The condition of low liquid water content where a continuous air
space as well as discontinuous volumes of water coexist in a snowpack, i.e., air-ice, water-ice,
and air-liquid interfaces are all found. Grain-to-grain bonds give strength. The volume fraction of
free water does not exceed 8 %.
Funicular regime (of water) The condition of high liquid water content where liquid exists in
continuous paths covering the ice structure; grain-to-grain bonds are weak. The volume fraction
of free water exceeds 8 %.
The mass balance of the snowpack at a point can be written

$$\frac{dSWE_t}{dt} = P_t - R_t + V_t + B_t$$

where SWEt = the SWE of the snowpack (depth); t = time; Pt = the precipitation rate (depth/time); Rt = the
runoff rate (depth/time); Vt = mass gained from or lost to water vapor (depth/time); and Bt = the snow gained
from or lost to blowing snow (depth/time). (Variables with the subscript t are time varying.) The precipitation
can be in the form of snow or rain – both will increase the SWE. The runoff rate (Rt) is determined by phase
change in the snowpack, which is determined through the energy balance calculations.
In general, the principal mass input into the snowpack is precipitation in the form of snow, which increases
the snowpack SWE during the accumulation period; and the principal mass loss is liquid water runoff, which
decreases the SWE during the ablation period. These are the processes modeled by the HEC-HMS
Temperature Index Snow Model. However, mass lost to water vapor and blowing snow erosion and
deposition can also be important at some locations.
Sublimation and condensation (Vt) describe the phase change of ice crystals of the snowpack directly into
water vapor or the phase change of water vapor into ice or liquid water in the snowpack. Sublimation and
condensation occur when heat is transferred between the snowpack and the atmosphere through the latent
heat flux. (More about this in the heat transfer section below.) Blowing snow and sublimation can play
significant roles in tundra, along alpine ridges, and other areas where strong winds and low humidity are
common. In any case, sublimation, condensation, and blowing snow are not modeled by the HEC-HMS
Temperature Index Snow Model. As a result, the snowpack mass balance can be simplified to

$$\frac{dSWE_t}{dt} = P_t - R_t$$
When the average snowpack temperature, T̄s, is below 0°C (32°F), the energy balance of the snowpack can be
written

$$\frac{d}{dt}\left( \bar{\rho}_s \, C_{p,ice} \, D \, \bar{T}_s \right) = Q_{total}$$

where Qtotal = the net heat flux at the snow surface (units of joules per second per unit area). (Qtotal > 0
implies a net heat flow into the snowpack.) Note that this equation includes the snow depth and density inside
the differentiation as the depth and density can also change with time. In general, the snow depth will only
change with time under these conditions (T̄s < 0°C (32°F)) if there is snowfall occurring. This will be discussed
in more detail below. Assume that the snow depth and density are constant with time, as is usual under these
conditions. The equation can then be written in terms of the Cold Content as

$$\rho_w \, \lambda \, \frac{dC_c}{dt} = Q_{total}$$

where Cc = the Cold Content of the snowpack, as defined above. The two forms are entirely equivalent.
If T̄s = 0°C (32°F), which is equivalent to Cc = 0, and the net heat transfer is into the snowpack, that is, Qtotal > 0,
then the conditions are set for phase change, melting, to occur. The rate at which liquid water is formed from
melting ice, Mt (depth/time), is

$$M_t = \frac{Q_{total}}{\rho_w \, \lambda}$$
If the Liquid Water Content of the snowpack, expressed as a percentage of the SWE, is less than LWCmax%, then
the liquid water created by melting snow increases the Liquid Water Content of the snowpack and no runoff
occurs. Once the percentage of the Liquid Water Content of the snowpack is equal to LWCmax%, all liquid water
formed goes into runoff, which reduces the SWE and the LWC. This process can continue until the SWE is
zero.
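The liquid-water bookkeeping described above can be sketched as follows; the state handling is simplified and the LWCmax value is a placeholder, so this is an illustration rather than the HEC-HMS implementation.

```python
def route_melt(swe: float, lwc: float, melt: float, lwc_max_fraction: float = 0.05):
    """Distribute newly melted water between snowpack liquid storage and runoff.

    swe, lwc, and melt are depths (e.g., mm); lwc_max_fraction is LWCmax expressed
    as a fraction of SWE (placeholder value). Returns updated (swe, lwc, runoff)."""
    capacity = lwc_max_fraction * swe                 # maximum liquid water the pack can hold
    stored = min(melt, max(capacity - lwc, 0.0))      # melt retained in the pack as liquid water
    runoff = melt - stored                            # remainder leaves the base of the pack
    lwc += stored
    swe -= runoff                                     # runoff reduces the SWE
    return swe, lwc, runoff

print(route_melt(swe=200.0, lwc=5.0, melt=10.0))  # part of the melt is stored, part runs off
```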
There are a number of different modes of heat transfer that are included in Qtotal. These will be discussed
next.
Overview
Snowmelt is ultimately driven by the transfer of heat energy into the snowpack from the atmosphere and the
surrounding environment. There are four primary modes of heat transfer between the snowpack and its
environment: sensible heat transfer, latent heat transfer, long wave radiation heat transfer, and short-wave
radiation heat transfer. There is also heat transfer from precipitation falling on the snow surface as rain or
snow, and heat transfer from the soil layer beneath the snowpack. This can be stated as

$$Q_{total} = Q_{sensible} + Q_{latent} + Q_{LWnet} + Q_{SW} + Q_{precip} + Q_{ground}$$
The latent heat flux is directed from the air into the snowpack (condensation) when ea > esat(Tss), and from the
snowpack into the air (sublimation) when ea < esat(Tss), where ea = the vapor pressure of the air; and esat(Tss) =
the saturated vapor pressure immediately above the snow surface at the snow surface temperature, Tss. The
condition can be cast in terms of the relative humidity of the air (which is often known) by noting the definition
of relative humidity, RH,

$$RH = \frac{e_a}{e_{sat}(T_a)}$$

where esat(Ta) = the saturation vapor pressure of the air at the air temperature Ta. Substituting this into the
condition above, it can be seen that the latent heat transfer is from the air into the snowpack when

$$RH > \frac{e_{sat}(T_{ss})}{e_{sat}(T_a)} \equiv RH_{neut}$$

where RHneut = the neutral relative humidity at which no latent heat flux occurs. During the ablation period,
the snow surface temperature is 0°C (32°F) and the saturated vapor pressure above the snowpack, esat(Tss),
is fixed at approximately 6.11 mb (0.611 kPa).
It can be seen that sublimation of the snowpack will not be an uncommon occurrence during the
ablation period, even when the air temperature is 10°F above freezing or more. During the ablation period, the
sensible and latent heat fluxes will often be in different directions: the sensible heat flux into the snowpack
and the latent heat flux out of the snowpack. The latent heat flux will be into the snowpack generally only
during periods of high relative humidity, for example rain on snow events, fog, and other periods when very
moist air is present.
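A sketch of the neutral relative humidity idea above; the Magnus-type saturation vapor pressure formula is an assumed approximation used only for illustration.

```python
import math

def esat_mb(temp_c: float) -> float:
    """Saturation vapor pressure (mb) over water, Magnus-type approximation (assumed form)."""
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

def neutral_rh(air_temp_c: float, snow_surface_temp_c: float = 0.0) -> float:
    """Relative humidity (fraction) at which the latent heat flux changes sign."""
    return esat_mb(snow_surface_temp_c) / esat_mb(air_temp_c)

# During ablation the snow surface is at 0 C; with 10 C air, latent heat is directed
# into the pack only if the relative humidity exceeds roughly this value:
print(neutral_rh(10.0))  # about 0.5, so drier air drives sublimation instead
```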
The rates of sensible and latent heat transfer are both determined by the degree of turbulence in the
atmosphere above the snow and the stability of the atmosphere. The primary way by which turbulence is
generated is by wind drag over the snow and ground surface. The rate of the turbulent energy generation is
very sensitive to the velocity of the wind. The ability of wind to increase the heat transfer rate is called forced
convection. However, wind is not the only creator of turbulence in the atmosphere – natural convection of
sensible and latent heat from the snow and ground surface can also create turbulence. Natural convection
occurs when the density of the air immediately at the snow surface is less than the density of the air above.
This difference in density causes the air to rise vertically upwards through buoyancy. The atmosphere is said
to be unstable when natural convection occurs. Mixed convection occurs when wind convection is augmented
by natural convection. A contrasting case occurs when the atmosphere is stable – that is the air near the
ground is denser than the air above. Under stable conditions, natural convection does not occur and the wind
convection is damped. Under very stable conditions convection may not occur at all if the wind velocity is
low, and sensible and latent heat transfer can drop to very low levels.
The rates of sensible and latent heat transfer can be calculated using bulk transfer formulas such as

$$Q_{latent} = \rho_a \, L_v \, C_L \, U_Z \left[ e_a - e_{sat}(T_{snow}) \right]$$

$$Q_{sensible} = \rho_a \, C_{pa} \, C_s \, U_Z \left( T_a - T_{snow} \right)$$

where QLatent = the rate of latent heat transfer; Qsensible = the rate of sensible heat transfer; ρa = the density of
air; Lv = the latent heat of sublimation; Cpa = the heat capacity of air; UZ = the wind speed measured at an
elevation Z; ea = the water vapor pressure in the atmosphere; esat(Tsnow) = the saturated vapor pressure
immediately above the snow surface; Ta = the air temperature; and Tsnow = the snow surface temperature. CL
and Cs are stability factors that account for the influence of the stability of the atmosphere on the rates of
latent and sensible heat transfer. The stability of the atmosphere can be characterized by the bulk Richardson
number, Ri,

$$Ri = \frac{g \, Z \left( T_a - T_{snow} \right)}{T_a \, U_Z^2}$$

(with the temperature in the denominator expressed in Kelvin)
where g = the acceleration of gravity. When Ri >0, (that is, Ta > Tsnow) the conditions are stable, and the
stability factor tends to be small, indicating that sensible and latent heat transfer rates are small. If Ri >>0
then any convection is effectively damped and the transfer rates drop to near zero. When Ri = 0 (Ta = Tsnow),
the conditions are neutral, and the heat transfer is controlled by forced convection. When Ri < 0 (Ta < Tsnow),
the conditions are unstable, and the stability factor tends to be large, indicating that the sensible and latent
heat transfer rates are greater due to augmentation of forced convection by natural convection.
Representative values of the stability factors are shown in the figure below.
During the ablation period, the snow surface temperature is 0°C (32°F) and the air temperature is generally
greater than 0°C (32°F) which means that Ta > Tsnow and Ri > 0 and the atmosphere is stable. The
atmosphere becomes more and more stable as the air temperature increases relative to the snow surface
temperature. This means that the turbulent fluxes of latent and sensible heat are effectively suppressed by
the increasing stability of the air during the ablation period unless there is a significant wind velocity.
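To illustrate the stability argument, the following sketch evaluates a bulk Richardson number; the exact formula and the measurement height are assumptions for illustration, not necessarily the form used in HEC-HMS.

```python
def bulk_richardson(air_temp_c: float, snow_temp_c: float, wind_speed: float,
                    measurement_height: float = 2.0) -> float:
    """Bulk Richardson number for the layer between the snow surface and height Z.

    A common textbook form (assumed here): Ri = g * Z * (Ta - Ts) / (Tmean_K * U^2)."""
    g = 9.81                                                  # m/s^2
    t_mean_k = 273.15 + 0.5 * (air_temp_c + snow_temp_c)      # layer-average absolute temperature
    return g * measurement_height * (air_temp_c - snow_temp_c) / (t_mean_k * wind_speed**2)

# Warm air over melting snow with light wind -> strongly stable (Ri >> 0)
print(bulk_richardson(air_temp_c=10.0, snow_temp_c=0.0, wind_speed=1.0))
# The same temperatures with strong wind -> nearly neutral
print(bulk_richardson(air_temp_c=10.0, snow_temp_c=0.0, wind_speed=8.0))
```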
The longwave radiation emitted by any surface is given by the Stefan-Boltzmann law

$$Q_{LW} = \varepsilon \, \sigma \, T_s^4$$

where QLW = the longwave radiation emitted per unit time per unit area; σ = the Stefan-Boltzmann constant
(5.67 x 10-8 W m-2 K-4); Ts = the surface temperature in degrees Kelvin; and ε = the emissivity of the surface
(emissivity is between 0 and 1. If a body is a "perfect" emitter of radiation, ε = 1. In fact, many bodies, such as
snow and vegetation, are close to being perfect emitters). The upwelling longwave radiation emitted by the
snow surface is then

$$Q_{LW\uparrow} = \varepsilon_s \, \sigma \, T_{snow}^4$$

where εs = the emissivity of the snow surface (accepted values range from 0.97 to 1.0); and Tsnow = the
temperature of the snow surface (K).
There is also downwelling (or incoming radiation) emitted from the atmosphere itself and by vegetation and
structures in the vicinity. The downwelling radiation absorbed by the snow surface is energy gained by the
snow that warms the snow. The overall impact of longwave radiation is found by summing the downwelling
and upwelling longwave radiation at the surface:

$$Q_{LWnet} = Q_{LW\downarrow} - Q_{LW\uparrow}$$

where QLWnet = the net longwave at the snow surface; and QLW↓ = the downwelling longwave radiation. The
longwave radiation emitted by the atmosphere that reaches the snow surface is

$$Q_{LWa\downarrow} = \varepsilon_a \, \sigma \, T_a^4$$

where εa = the emissivity of the atmosphere. εa is affected by the vapor pressure (ea), air temperature (Ta),
and cloud cover (clf). The cloud cover is parameterized by the sky cloud fraction, with clf = 1 for a complete
cloud cover and clf = 0 for clear skies. εa can be estimated by a variety of formulas. A representative formula is

$$\varepsilon_a = (1 - clf) \, \varepsilon_{cl} + clf$$

where εcl = the clear-sky emissivity. Note that as clf increases from 0 to 1, εa proportionally increases
between the clear sky value, εcl, and the limiting value of 1 for a completely cloud covered sky. The clear-sky
emissivity is often estimated as an empirical function of the air vapor pressure, ea.
The equation for the downwelling longwave radiation from the atmosphere can be found by combining the
above equations as

$$Q_{LWa\downarrow} = \left[ (1 - clf) \, \varepsilon_{cl} + clf \right] \sigma \, T_a^4$$

The downwelling longwave radiation from the atmosphere is relatively small during cold, clear periods with
low humidity (clf = 0, ea ~ 0, εcl ~ 0.68). It is relatively large during warm, cloud covered periods (clf = 1, εa = 1).
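As a numerical illustration of the relationships above, the sketch below combines the cloud-weighted emissivity with the Stefan-Boltzmann law; the clear-sky emissivity of 0.68 is simply the representative value quoted in the text.

```python
STEFAN_BOLTZMANN = 5.67e-8  # W m^-2 K^-4

def downwelling_longwave(air_temp_c: float, cloud_fraction: float,
                         clear_sky_emissivity: float = 0.68) -> float:
    """Downwelling longwave radiation from the atmosphere, W/m^2.

    Uses the linear cloud-cover weighting described above:
    emissivity = (1 - clf) * eps_clear + clf."""
    eps_a = (1.0 - cloud_fraction) * clear_sky_emissivity + cloud_fraction
    ta_k = air_temp_c + 273.15
    return eps_a * STEFAN_BOLTZMANN * ta_k**4

print(downwelling_longwave(air_temp_c=-10.0, cloud_fraction=0.0))  # cold, clear: relatively small
print(downwelling_longwave(air_temp_c=2.0, cloud_fraction=1.0))    # warm, overcast: relatively large
```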
Downwelling longwave radiation can also be emitted by the vegetative canopies above the snow surface.
The total downwelling radiation can be described as the sum of the radiation from the sky and the vegetative
canopies as

$$Q_{LW\downarrow} = S_{vf} \, Q_{LWa\downarrow} + Q_{LWv\downarrow}$$

where QLWv↓ = the longwave radiation emitted by the canopy. The vegetative canopy is parameterized by the
sky view factor, Svf, with Svf = 1 if there is no vegetative canopy above the snow, and Svf = 0 if the view of the
sky is completely blocked by the canopy. The longwave radiation emitted by the canopy is estimated as

$$Q_{LWv\downarrow} = (1 - S_{vf}) \, \sigma \, T_a^4$$

The branches, stems, leaves, and other components of the vegetative canopy are generally assumed to have
an emissivity of 1 and their temperature assumed to be equal to the air temperature.
The net longwave radiation at the snow surface can now be written as

$$Q_{LWnet} = S_{vf} \, Q_{LWa\downarrow} + (1 - S_{vf}) \, \sigma \, T_a^4 - \varepsilon_s \, \sigma \, T_{snow}^4$$

Note the upwelling longwave radiation emitted by the snow surface has been given the opposite sign of the
downwelling radiation. Expanding this equation gives

$$Q_{LWnet} = S_{vf} \left[ (1 - clf) \, \varepsilon_{cl} + clf \right] \sigma \, T_a^4 + (1 - S_{vf}) \, \sigma \, T_a^4 - \varepsilon_s \, \sigma \, T_{snow}^4$$
The shortwave radiation absorbed at the snow surface is

$$Q_{SW} = (1 - \alpha) \, Q_{SW\downarrow}$$

where QSW = the shortwave radiation absorbed at the snow surface; QSW↓ = the downwelling shortwave
radiation that reaches the snow surface; and α = the albedo of the snow (0 < α < 1).
Albedo.
Albedo is the ratio of the reflected shortwave radiation to the downwelling shortwave radiation reaching the
surface. It is well known that the albedo of snow varies wavelength by wavelength. However, in snow
hydrology, it is generally the broadband albedo that is of interest. The broadband albedo is found through a
weighted integration of the albedo at each wavelength that comprises shortwave radiation. The albedo is
determined by the crystalline structure of the snowpack surface. Shortwave radiation tends to be reflected
by the surface of ice crystals and absorbed in the interior of crystals. As mentioned above, newly fallen snow
typically has large surface areas to volume ratios. As a result, the albedo of newly fallen snow is large,
generally in the range of 0.85-0.95. Snow metamorphism is the modification of the snow crystals and grains
to less angular, more rounded forms with time. Metamorphism increases the size of the crystals which
decreases the surface area and increases their volume. This causes the albedo to decline as the
metamorphism progresses. As long as the air temperature is less than 0°C (32°F), metamorphism proceeds
slowly and the rate of decline of the albedo is relatively slow. Each new snowfall ‘resets’ the albedo back to
the newly fallen value and the metamorphism and albedo decline start over again. However, when the air
temperature is greater than 0°C (32°F) and active snowmelt is occurring, metamorphism occurs quickly and
the rate of decline of the albedo is relatively rapid. The albedo can decline to values of about 0.40 for well-
aged snow. The albedo may drop to even lower values when the snowpack is shallow (snow depths of 0.5 m
or less) allowing the ground surface beneath the snow to have an influence. Dust, soot, forest debris such as
bark and twigs, and other deposited matter can also influence the snow surface albedo, and generally cause
it to decline.
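The qualitative albedo behavior described above can be sketched as a simple decay-and-reset rule; the decay rates and limits below are hypothetical placeholders, not HEC-HMS parameters.

```python
def update_albedo(albedo: float, air_temp_c: float, new_snowfall: bool,
                  fresh_albedo: float = 0.90, aged_albedo: float = 0.40,
                  cold_decay: float = 0.005, melt_decay: float = 0.05) -> float:
    """Advance the surface albedo one day, following the qualitative behavior described above.

    The per-day decay values are hypothetical placeholders chosen for illustration only."""
    if new_snowfall:
        return fresh_albedo                                    # new snow 'resets' the albedo
    decay = melt_decay if air_temp_c > 0.0 else cold_decay     # melt ages the surface faster
    return max(aged_albedo, albedo - decay)                    # never drop below a well-aged value

albedo = 0.90
for day in range(10):
    albedo = update_albedo(albedo, air_temp_c=3.0, new_snowfall=False)
print(round(albedo, 2))  # after ten melt days the albedo has declined noticeably
```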
Many factors can influence the amount of shortwave radiation reaching the ground at any location. The
journey of shortwave radiation begins at the surface of the sun where it is emitted. It then travels through
space for a short span of 8 minutes and 20 seconds to reach the top of the atmosphere of the earth. This
top-of-the-atmosphere value can be directly calculated as

$$I_{0\downarrow} = S_0 \left( \frac{r_0}{r} \right)^2 \cos \theta_0$$

where I0↓ = the top of the atmosphere shortwave radiation (W m-2); r0 = the mean distance between the earth
and sun; r = the actual Earth-Sun distance; S0 = the solar constant at the mean Earth-Sun distance r0 (1369.3
W/m2); and θ0 = the solar zenith angle, the angle measured at the earth's surface between the location of the
sun in the sky and the local zenith (The local zenith is the point in the sky directly above a particular location.)
The Earth-Sun distance r varies throughout the year because the earth follows a slightly elliptical orbit around
the sun. Each of these geometrical parameters in this equation, r0, r, and θ0, can be calculated with precision
because the clockwork nature of the earth’s orbit around sun and the obliquity (tilt) of the earth itself are
both well understood. (Whether or not the solar constant, S0, is, in fact, a constant is a question beyond the
scope of this write up. Certainly, ongoing observations suggest that any variations are relatively small.) The
formulas for the geometrical parameters are straightforward but computationally intensive if done by hand.
In short, the top of the atmosphere shortwave
radiation can be calculated precisely for any location on earth for any time if the following are known: day of
year, latitude, longitude, time of day, and offset from Greenwich Mean Time. The portion of the top of the
atmosphere shortwave radiation that actually reaches the surface of the earth depends on the conditions of
the atmosphere, - primarily the presence of clouds. The earth’s atmosphere is not perfectly transparent to all
the shortwave radiation even on a cloud free day when some radiation will be scattered, absorbed by gases
and water vapor, and scattered by aerosol particles. These reductions in solar radiation on cloud-free days
will tend to be relatively small unless the atmosphere is particularly turbid.
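The sketch below evaluates the top-of-the-atmosphere shortwave radiation from the solar geometry; the approximations used for the solar declination and the Earth-Sun distance factor are common textbook forms and are assumptions here, not necessarily the expressions used by HEC-HMS.

```python
import math

SOLAR_CONSTANT = 1369.3  # W/m^2 at the mean Earth-Sun distance (value quoted in the text)

def toa_shortwave(day_of_year: int, latitude_deg: float, hour_angle_deg: float) -> float:
    """Instantaneous top-of-atmosphere shortwave radiation, W/m^2.

    hour_angle_deg is 0 at local solar noon; declination and distance factor are
    common approximations assumed for illustration."""
    lat = math.radians(latitude_deg)
    decl = 0.409 * math.sin(2.0 * math.pi * day_of_year / 365.0 - 1.39)       # solar declination (rad)
    dist_factor = 1.0 + 0.033 * math.cos(2.0 * math.pi * day_of_year / 365.0)  # (r0/r)^2 approximation
    h = math.radians(hour_angle_deg)
    cos_zenith = (math.sin(lat) * math.sin(decl)
                  + math.cos(lat) * math.cos(decl) * math.cos(h))
    return max(0.0, SOLAR_CONSTANT * dist_factor * cos_zenith)

# Solar noon at 45 N: winter solstice vs. summer solstice
print(toa_shortwave(day_of_year=355, latitude_deg=45.0, hour_angle_deg=0.0))
print(toa_shortwave(day_of_year=172, latitude_deg=45.0, hour_angle_deg=0.0))
```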
Clouds have a major impact on the sunlight reaching the earth. The impact will vary depending on the
location of clouds relative to the position of the sun, the type of clouds, and the percentage of the sky covered
by clouds. The shortwave radiation reaching the snow surface, QSW↓, can be expressed in terms of an
attenuation factor, at, due to dust, scattering, and absorption by the atmosphere (at < 1); the sky cloud factor,
clf (0 < clf < 1); the snow albedo, α; and the top of the atmosphere shortwave radiation, I0↓. If the snow surface
is not horizontal, corrections can be made based on the slope and aspect of the immediate topography.
Shadows from surrounding terrain can also impact the downwelling shortwave radiation.
The top of the atmosphere shortwave radiation arriving at any location follows a seasonal cycle. In the
Northern Hemisphere, the minimum top of the atmosphere radiation occurs at the winter solstice (December
21st). The value of the daily average shortwave radiation at the winter solstice decreases from south to
north. North of the Arctic Circle (66° 33′ 47.3”), the daily average shortwave radiation is zero at the winter
solstice because north of the Arctic Circle is continually dark at that time of year. As the season progresses
in time the solar radiation increases at every latitude in the northern hemisphere reaching a maximum on the
summer solstice (June 21st). The relative change from winter minimum to summer maximum is greatest in
the northern latitudes and less in the southern. The further south a position is located the earlier in the year it
will reach a given level of solar radiation above its minimum. At the summer solstice the daily average
shortwave radiation is remarkably uniform from the North Pole to the equator. However, the length of the
sunlit portion of the day also varies, from the North Pole, where there are 24 hours of continuous daylight, to a
minimum at the equator, where there are 12 daylight hours. This means that the instantaneous or hourly
radiation is less in the north because the daily average is spread over more hours of daylight.
Daily average broadband downwelling shortwave radiation measured at three SNOTEL sites located in the same region of Idaho. Each
day has been averaged over all years in the period of record.
Snowfall.
The precipitation is falling as snow when Ta ≤ TPX. The sensible heat that arrives at the surface of the
snowpack due to snowfall is
where St = the snowfall rate in terms of the snow water equivalent (depth/time); and Ta = the air temperature.
Note that it is assumed that the temperature of the snowfall is the same as the air temperature, Ta. The
snowfall sensible heat may or may not have an impact on the average snowpack temperature. This can be
determined by restating the energy balance equation above as
Note that the rate of change in SWE is equal to the snowfall rate, which is stated as
Substituting the second expression into the first, the change in the average snowpack temperature, T̄s, due to
snowfall is
The average snowpack temperature, T̄s, will be changed by snowfall only if the air temperature and T̄s are
different.
The above equation can also be stated in terms of the Cold Content, Cc. First, the definition of Cold Content,
as written earlier, is restated in terms of SWE
The change of cold content with time can be found by taking the derivative of this equation with respect to time
and substituting in the expression for the rate of change of the average snowpack temperature, T̄s, given
above.
The rate of change of the Cold Content is described by this expression; however, it can be stated in a more
compact form if the rate at which Cold Content arrives with snowfall is defined as
Then
The precipitation is falling as rain when Ta > TPX. Rainfall impacts the energy balance of the snowpack
through the sensible heat that it brings to the snowpack and through the possibility of phase change of
the liquid water. The sensible heat is determined by the temperature of the rain when it reaches the snow
surface. Once the liquid water has cooled to the ice/water equilibrium temperature further heat extraction
must result in phase change of the liquid water to ice. Generally, freezing of rainfall in the snowpack can only
happen if the snowpack temperature is less than the ice/water equilibrium temperature.
The sensible heat that arrives at the surface of the snowpack due to rainfall is

$$Q_{rain,sens} = \rho_w \, C_{p,water} \, P_t \left( T_a - T_m \right)$$

where Pt = the rainfall rate (depth/time); Cp_water = the heat capacity of liquid water; and Ta = the air
temperature. Note that it is assumed that the temperature of the rainfall is the same as the air temperature, Ta.
Also note that the water can only be cooled to Tm, the ice/water equilibrium temperature (32°F (0°C)).
Once the liquid water has reached the ice/water equilibrium temperature further cooling must result in phase
change of the liquid water to ice. Generally, freezing of rainfall in the snowpack can only happen if the
snowpack temperature is less than the ice/water equilibrium temperature. Freezing of rainfall in the
snowpack is a very effective means of raising the snowpack temperature due to the latent heat released by
the liquid water when it freezes.
The potential latent heat that arrives at the surface of the snowpack due to rainfall is

$$Q_{rain,lat} = \rho_w \, \lambda \, P_t$$

Note that latent heat will be extracted from the liquid rainfall only as long as the snowpack temperature, T̄s, is
less than the ice/water equilibrium temperature (32°F (0°C)), Tm.
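A rough sketch of the two rain-on-snow heat terms described above, using standard physical constants; the formulation is a simplified illustration rather than the HEC-HMS implementation.

```python
RHO_WATER = 1000.0        # kg/m^3
CP_WATER = 4186.0         # heat capacity of liquid water, J/(kg K)
LAMBDA_FUSION = 334000.0  # latent heat of fusion, J/kg

def rain_heat_inputs(rain_rate_m_per_s: float, air_temp_c: float):
    """Heat delivered to the snowpack by rainfall, W/m^2.

    Assumes the rain arrives at the air temperature and can be cooled to 0 C (sensible
    term); the latent term is only realized if the snowpack is cold enough to freeze the rain."""
    sensible = RHO_WATER * CP_WATER * rain_rate_m_per_s * max(air_temp_c, 0.0)
    potential_latent = RHO_WATER * LAMBDA_FUSION * rain_rate_m_per_s
    return sensible, potential_latent

# Hypothetical event: 5 mm/hr of rain at 8 C
rate = 0.005 / 3600.0
print(rain_heat_inputs(rate, 8.0))  # the latent term dominates if the rain can freeze
```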
Mass Balance
A temperature index snow model simplifies the heat transfer calculations into the snowpack by estimating
the heat transfer into or out of the snowpack as a function of the difference between the surface
temperature and the air temperature.
During dry melt conditions, the program uses the following equation to compute snowmelt (assuming no
cold content in the snow pack):
where WetMeltRate is in inches/(Degree Fahrenheit-Day) or mm/(Degree Celsius-Day) and the constant term,
0.168, has units of hour/(Degree Fahrenheit-Day) or hour/(Degree Celsius-Day). The rain on snow equation is
based on equation 5-18 in Engineering Manual 1110-2-1406.
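The following sketch illustrates the general temperature index (degree-day) idea discussed in this section; it is a generic illustration with hypothetical parameter values, and is not the exact HEC-HMS dry-melt or rain-on-snow equation.

```python
def degree_day_melt(air_temp: float, base_temp: float, melt_rate: float,
                    time_step_hours: float) -> float:
    """Generic degree-day (temperature index) melt for one time step.

    melt_rate has units of depth per degree per day; this is an illustrative sketch,
    not the HEC-HMS dry-melt or rain-on-snow equation."""
    degrees_above_base = max(air_temp - base_temp, 0.0)
    return melt_rate * degrees_above_base * (time_step_hours / 24.0)

# Hypothetical: 3 mm/(deg C-day) melt rate, 5 C air over a 6-hour step
print(degree_day_melt(air_temp=5.0, base_temp=0.0, melt_rate=3.0, time_step_hours=6.0))  # 3.75 mm
```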
The snowmelt capability in HEC-HMS estimates the following snowpack properties at each time step: The
Snow Water Equivalent (SWE) accumulated in the snowpack; the snowpack temperature (actually, the
snowpack cold content but this is equivalent to the snowpack temperature); snowmelt (when appropriate);
the liquid water content of the snowpack; and finally, the runoff at the base of the snowpack.
Energy Balance
Cold Content
The rate of change of cold content with time can be approximated starting from the definition of cold content
given earlier as

$$\frac{dc_c}{dt} = \frac{Q_t}{\rho_w \, \lambda} = \frac{h \left( T_{a,t} - T_s \right)}{\rho_w \, \lambda} = c_r \left( T_{a,t} - T_s \right)$$

where Qt = the rate of heat transfer per unit area (energy per unit area per time); h = a heat transfer coefficient
(energy per unit area per time per degree air temperature); Tat = the air temperature; Ts = a representative
temperature of the snow pack; and cr = the "cold rate" that will be discussed below. Note that, following the
example of Anderson (1973) and others, the engineering approximation of heat transfer has been used.
There is a question of what the representative temperature of the snowpack should represent. To be entirely
consistent with the concept of engineering heat transfer coefficient, Ts should equal the surface temperature
of the snowpack. However, this is not very satisfactory because the surface temperature of the snow pack is
not known a priori. This is because the heat transfer from the snow pack is controlled both by the heat
transfer from the surface to the atmosphere and by the heat conduction through the snowpack itself; with
the slower of the two processes controlling the rate. To overcome this problem, the representative
temperature of the snow pack will be considered to represent some interior temperature of the snowpack. If
the snowpack is shallow, the temperature will be representative of the entire snowpack; if the snowpack is
deep, the temperature will be representative of the upper layer. This representative temperature, termed the
“Antecedent Temperature Index for Cold Content” (ATICC) will be estimated using quasi-engineering
approach to heat transfer in a somewhat similar manner as the cold content, as described below.
The cold content is found by first estimating an “Antecedent Temperature Index for Cold Content” (ATICC)
"near" the snow surface, ATICC, defined and estimated as (Anderson 1973, Corps of Engineers 1987, p 18)

$$ATICC_2 = ATICC_1 + TIPM \left( T_a - ATICC_1 \right)$$

where ATICC2 = the index temperature at the current time step; ATICC1 = the index temperature at the
previous time step; and TIPM is a non-dimensional parameter. The problem is that limited documentation
exists to describe how the parameter TIPM is related to the time step, snow material properties, or heat
transfer conditions.
In this section a consistent approach for estimating cold content is developed that is based on the approach
of estimating changes in cold content based on the temperature difference between the air temperature and
ATICC. First, an approach for estimating ATICC is developed. To do this, we turn to a simple heat budget type
analysis of the snow pack in order to gain some insight. A straightforward heat budget of the snow pack can
be written

$$\rho_s \, c_p \, d \, \frac{dT_{ATI}}{dt} = h^* \left( T_a - T_{ATI} \right)$$

where ρs = the snow density; cp = the heat capacity of the snow; d = the "depth" of the snow pack associated
with the depth of the index temperature; TATI = the snow temperature measured by the antecedent
temperature index; h* = the "effective" heat transfer coefficient from the snow surface to the atmosphere
(W m-2 °C-1); and Ta = the air temperature. Note that we are assuming that a region of the snow pack has a
uniform temperature TATI. This assumption is a bit dubious BUT it makes TATI entirely analogous to ATICC.
Note also that h* can be defined as

$$\frac{1}{h^*} = \frac{1}{h} + \frac{l_s}{k_s}$$
where h = the heat transfer coefficient from the snow surface to the atmosphere; ks = the snow thermal
conductivity; and ls = the effective snow depth through which thermal conduction occurs. h* will be
dominated by whichever process is slower: heat transfer from the snow to the atmosphere or thermal
conduction through the snow depth ls. If it is assumed that the snowpack temperature is To at time t = 0, the
solution for this equation is

$$T_{ATI} = T_a + \left( T_o - T_a \right) e^{-\frac{h^* t}{\rho_s c_p d}}$$

where To = the initial snow pack temperature; and t = time from start. Setting TATI = ATICC2 and To = ATICC1,
this solution can be restated as

$$ATICC_2 = T_a + \left( ATICC_1 - T_a \right) e^{-\frac{h^* t}{\rho_s c_p d}}$$

Comparing this with the definition of ATICC above shows that

$$TIPM = 1 - e^{-\frac{h^* t}{\rho_s c_p d}}$$
We can see that TIPM is a function of the material properties of the snow pack, ρs, cp, and d; the heat transfer
regime as indicated by h*; and the time step, t. If we assume that the material properties and heat transfer
regime of the snow pack are set by the value of TIPM corresponding to a given time step of one day (for
example, TIPM1 is the value of TIPM corresponding to a time step, t1, of one day) then the value of TIPM at
another time step with the same material and heat transfer properties can be found as

$$TIPM_2 = 1 - \left( 1 - TIPM_1 \right)^{t_2 / t_1}$$
where TIPM1 is the value of TIPM corresponding to a time step of one day; and t2/t1 is the ratio of the model
time step (t2) to one day (t1). This equation employs the value of TIPM1 calibrated from a time step of one day,
and allows it to be used in model runs of 1 hour or even 1 minute and arrive at the same results for ATICC if
the air temperature is the same.
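The time-step adjustment and the ATICC update can be sketched as follows; this follows the relationships developed above and is an illustration rather than the HEC-HMS source code.

```python
def tipm_for_time_step(tipm_daily: float, time_step_hours: float) -> float:
    """Rescale a TIPM value calibrated at a one-day time step to another time step,
    using the exponential-decay relationship developed above."""
    return 1.0 - (1.0 - tipm_daily) ** (time_step_hours / 24.0)

def update_aticc(aticc_prev: float, air_temp: float, tipm: float) -> float:
    """Antecedent Temperature Index for Cold Content update (weighted toward the previous value)."""
    return aticc_prev + tipm * (air_temp - aticc_prev)

tipm_hourly = tipm_for_time_step(tipm_daily=0.5, time_step_hours=1.0)
aticc = -10.0
for _ in range(24):                      # 24 hourly steps with constant air temperature
    aticc = update_aticc(aticc, air_temp=-2.0, tipm=tipm_hourly)
print(round(aticc, 2))                   # matches a single daily step with TIPM = 0.5
print(round(update_aticc(-10.0, -2.0, 0.5), 2))
```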
If a simple differential equation for cold content is used,

$$\frac{dc_c}{dt} = c_r \left( T_a - T_{ATI} \right)$$

where cc = cold content (inches); and cr = cold rate (in. day-1 °F-1). This equation can be integrated by again
setting TATI = ATICC2 and noting the solution for ATICC2 given above to arrive at
where log = the natural logarithm; and, as before, TIPM1 is the value of TIPM calibrated for a time step of one
day; and t2/t1 is the ratio of the model time step (t2) to one day (t1) (or, more exactly, t1 should correspond to
the units of cr).
Hybrid Snow
The Hybrid snow method is based on the Radiation-derived Temperature Index (RTI) snow model (Follum et
al., 2015[12]; Follum et al., 2019[13]). The Hybrid method improves upon the temperature index method by using
estimates of air temperature, shortwave radiation, and longwave radiation at a grid cell to derive a radiation
temperature which may better represent the energy fluxes into/out of the snow pack than air temperature
alone. The HEC-HMS Hybrid snow method is inherently gridded (there is no banded
implementation). Differences between the original RTI snow model and the Hybrid method in HEC-HMS are
noted on this page.
6)

$$S = S_0 \, K_r \, K_{atm} \, K_c \, K_v \, K_s \, K_t$$

where S is the adjusted shortwave radiation at the grid cell, S0 is the incident shortwave radiation, and Kr, Katm,
Kc, Kv, Ks, and Kt are reduction factors for the distance from the earth to the sun, atmospheric scattering,
absorption by clouds, vegetation, slope and aspect of the terrain, and topographic shading, respectively. In the
original RTI model, the incident shortwave radiation for each grid cell is adjusted
by reduction factors for the distance from the earth to the sun, atmospheric scattering, absorption by clouds,
vegetation, slope/aspect of the terrain, and topographic shading.
12 https://www.researchgate.net/publication/281358823_A_Radiation-Derived_Temperature-
Index_Snow_Routine_for_the_GSSHA_Hydrologic_Model
13 https://www.researchgate.net/publication/334133077_A_Comparison_of_Snowmelt-Derived_Streamflow_from_Temperature-
Index_and_Modified-Temperature-Index_Snow_Models
The reduction in shortwave radiation due to atmospheric thickness, aerosols, and moisture is computed for
each cell based on its elevation (Allen et al., 2005):
7)
8)
The snow surface temperature is computed using the Stefan-Boltzmann Law to relate radiated energy to
temperature:
9)
where εs is the emissivity of snow (assumed to be 0.99) and σ is the Stefan-Boltzmann constant
(5.6703728287 × 10-8 kg s-3 K-4, equivalent to W m-2 K-4).
Precipitation is partitioned into rain and snow using the Rain Threshold Air Temperature and
the Snow Threshold Air Temperature. When the air temperature is greater than or equal to the Rain
Threshold Air Temperature, any precipitation is assumed to be rain. When the air temperature is less than or
equal to the Snow Threshold Air Temperature, any precipitation is assumed to be snow. When the air
temperature is between the two threshold temperatures, the amount of precipitation is partitioned between
snowfall and rainfall based on the air temperature. The fractions of precipitation in the form of rain and snow
are computed as:
10)

$$f_{rain} = \frac{T_a - T_{snow\,threshold}}{T_{rain\,threshold} - T_{snow\,threshold}}$$

11)

$$f_{snow} = 1 - f_{rain}$$
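A sketch of the linear rain/snow partition described above; the threshold temperature defaults are placeholder values, not HEC-HMS defaults.

```python
def precip_fractions(air_temp: float, rain_threshold: float = 3.0,
                     snow_threshold: float = 0.0):
    """Fractions of precipitation falling as rain and snow (linear partition between
    the two threshold temperatures, as described above). Temperatures in deg C."""
    if air_temp >= rain_threshold:
        rain_fraction = 1.0
    elif air_temp <= snow_threshold:
        rain_fraction = 0.0
    else:
        rain_fraction = (air_temp - snow_threshold) / (rain_threshold - snow_threshold)
    return rain_fraction, 1.0 - rain_fraction

print(precip_fractions(1.5))   # mixed precipitation: (0.5, 0.5)
print(precip_fractions(-2.0))  # all snow: (0.0, 1.0)
```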
Melt occurs when the energy input into the snowpack overcomes the heat deficit. The change in heat deficit
within the snowpack due to differences between the air and snow surface temperatures is calculated
as:
12)
13)
where NMFmax is the Maximum Negative Melt Factor, Mf is the Melt Factor, and Mf,max is the maximum melt
factor.
The original RTI snow model does not include a precipitation intensity condition (i.e., the
algorithm only checks if precipitation has exceeded 1.5 mm in 6 hours).
When at least 1.5 mm of precipitation occurs during the previous 6 hours and the average hourly
precipitation exceeds 0.25 mm/hr, an energy balance is used to calculate the amount of snow melt (M) with
the assumption that snow surface temperature is 0°C, incoming solar radiation is negligible, and incoming
longwave radiation is equal to black body radiation:
14)
where fu is the Wind Function, rh is the relative humidity, Pa is atmospheric pressure, and esat is the saturated
vapor pressure.
When the precipitation accumulation and intensity conditions are not met, potential snow melt is computed
as:
15)
In the original RTI model, the melt factor is computed from a minimum and maximum melt factor
and parameters that account for seasonal melt variation. In the HEC-HMS implementation, the
melt factor is a user-specified parameter.
The snowpack heat deficit is updated and the actual snow melt is calculated based on one of three conditions.
14 https://www.weather.gov/media/owp/oh/hrl/docs/22snow17.pdf
Required Parameters
The Rain and Snow Threshold Air Temperatures are used to differentiate between precipitation falling as
rain and snow, respectively. In particular, precipitation that falls at an air temperature above the Rain
Threshold Temperature will occur purely as rain while precipitation that falls at an air temperature below the
Snow Threshold Temperature will occur purely as snow. The Rain Threshold Air Temperature must always be
greater than or equal to the Snow Threshold Air Temperature. Decreasing the Rain Threshold Air
Temperature will cause more precipitation to fall purely as rain while increasing the Rain Threshold Air
Temperature will cause less precipitation to fall purely as rain. Conversely, decreasing the Snow Threshold
Air Temperature will cause less precipitation to fall purely as snow while increasing the Snow Threshold Air
Temperature will cause more precipitation to fall purely as snow. These two parameters can be equivalent or
differ by up to a few degrees.
The Base Temperature is the temperature above which snow begins to melt. This parameter typically has a
value around the freezing temperature, but can vary by a few degrees. Decreasing the Base Temperature will
cause snow melt to occur at colder temperatures while increasing the Base Temperature will require higher
temperatures to cause snow melt.
The Melt Factor is a coefficient used to calculate snow melt. As a result, it impacts the rate of snow melt.
Increasing the Melt Factor will increase the rate of snow melt while decreasing the Melt Factor will decrease
the rate of snow melt.
The Maximum Negative Melt Factor is a coefficient used to calculate the heat deficit. This parameter has a
positive value despite its name. In order for snow melt to occur, the amount of energy in the snowpack has
overcome the heat deficit. Therefore, increasing the Maximum Negative Melt Factor will increase the heat
deficit and delay the initiation of snow melt. Decreasing the Maximum Negative Melt Factor will decrease the
heat deficit and cause snow melt to initiate sooner.
As in the Temperature Index method, the ATI Coefficient is used to weight the previous time step's ATI in the
computation of the current time step's ATI. Increasing the ATI Coefficient will apply more weight to the
previous time step's ATI.
The Wind Function is used to calculate the impediment of flow of vapor when the air temperature is warmer
than the snowpack surface temperature. Increasing the Wind Function will increase snow melt and
decreasing the Wind Function will decrease snow melt.
The following table presents units, a summary description, allowable values within HEC-HMS, and a
recommended range for each of the aforementioned parameters.

| Parameter | Units | Description | Allowable Range | Recommended Range |
|---|---|---|---|---|
| Rain Threshold Air Temperature | deg F / deg C | Temperature above which precipitation falls as rain | -58.0 to 113.0 deg F / -50.0 to 45.0 deg C | 32.0 to 40.0 deg F / 0.0 to 4.4 deg C |
| Snow Threshold Air Temperature | deg F / deg C | Temperature below which precipitation falls as snow | -58.0 to 113.0 deg F / -50.0 to 45.0 deg C | 30.0 to 35.0 deg F / -1.1 to 1.7 deg C |
| Base Temperature | deg F / deg C | Temperature above which snow begins to melt | -148.0 to 113.0 deg F / -100.0 to 45.0 deg C | 30.0 to 35.0 deg F / -1.1 to 1.7 deg C |
| Melt Factor | in/deg F-6 hr / mm/deg C-6 hr | Coefficient used to calculate snow melt | 2.19E-5 to 0.052 in/deg F-6 hr / 0.001 to 2.4 mm/deg C-6 hr | 0.001 to 0.01 in/deg F-6 hr / 0.046 to 0.46 mm/deg C-6 hr |
| Maximum Negative Melt Factor | in/deg F-6 hr / mm/deg C-6 hr | Coefficient used to calculate heat deficit | 2.19E-5 to 0.052 in/deg F-6 hr / 0.001 to 2.4 mm/deg C-6 hr | 0.001 to 0.05 in/deg F-6 hr / 0.046 to 2.28 mm/deg C-6 hr |
| ATI Coefficient | unitless | Controls how much weight is put on temperatures from previous time intervals when computing ATI | 0.001 to 1.0 | 0.5 to 0.99 |
| Wind Function | in/in Hg-6 hr / mm/mb-6 hr | Used to calculate wind scour from snowpack | 0.0013 to 1.33 in/in Hg-6 hr / 0.001 to 1.0 mm/mb-6 hr | 0.5 to 0.75 in/in Hg-6 hr / 0.37 to 0.56 mm/mb-6 hr |
Basic Concepts
The Energy Budget Method is based on the Utah Energy Balance (UEB) model (Tarboton and Luce, 1996[19];
Luce, 2000[20]; Tarboton and Luce, 2001[21]; You, 2004[22]). The UEB snowmelt model is a physically-based
energy and mass balance model. Energy is exchanged between the snowpack, the air above, and the soil
below.
15 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/
Calibrating+Gridded+Snowmelt%3A+Upper+Truckee+River%2C+California
16 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/
Calibrating+Point+Snowmelt%3A+Swamp+Angel+Study+Plot%2C+Colorado
17 https://www.hec.usace.army.mil/confluence/pages/viewpage.action?pageId=133991602
18 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsum/4.12/subbasin-elements/selecting-a-snowmelt-
method#id-.SelectingaSnowmeltMethodv4.12-GriddedHybrid
19 https://hydrology.usu.edu/dtarb/snow/snowrep.pdf
20 https://hydrology.usu.edu/dtarb/luce_dissertation.pdf
21 https://hydrology.usu.edu/dtarb/wsc2001.pdf
22 https://hydrology.usu.edu/dtarb/yjs_dissertation.pdf
The current implementation of the Energy Budget method within HEC-HMS does not include
reduction factors for absorption by clouds or forest cover.
Surface heat conduction describes the exchange of heat from the snow surface into the snowpack. Snow
surface heating varies dramatically over the course of a day and over longer time periods, resulting in a
nonlinear temperature profile. Nonlinearity in the snowpack temperature profile is largely caused by daily
temperature fluctuations at the surface, which have a sinusoidal pattern. The governing quantities include the
soil thermal diffusivity, the frequency of the low-frequency temperature variation, the daily average surface
temperature, and the daily average, depth-averaged snowpack temperature.
The shallow snow correction involves computation of an effective thermal depth of combined snowpack and
ground and a weighted thermal conductivity when the thermal damping depth extends into the ground. The
shallow snowpack correction is applied when the snow depth is less than the effective depth.
Required Parameters
The Rain and Snow Threshold Air Temperatures are used to differentiate between precipitation falling as
rain and snow, respectively. In particular, precipitation that falls at an air temperature above the Rain
Threshold Temperature will occur purely as rain while precipitation that falls at an air temperature below the
Snow Threshold Temperature will occur purely as snow. The Rain Threshold Air Temperature must always be
greater than or equal to the Snow Threshold Air Temperature. Decreasing the Rain Threshold Air
Temperature will cause more precipitation to fall purely as rain while increasing the Rain Threshold Air
Temperature will cause less precipitation to fall purely as rain. Conversely, decreasing the Snow Threshold
Air Temperature will cause less precipitation to fall purely as snow while increasing the Snow Threshold Air
Temperature will cause more precipitation to fall purely as snow. These two parameters can be equivalent or
differ by up to a few degrees.
The following table presents units, a summary description, allowable values within HEC-HMS, and a
recommended range for each of the aforementioned parameters.
| Parameter | Units | Description | Allowable Range | Recommended Range |
|---|---|---|---|---|
| Rain Threshold Air Temperature | deg F / deg C | Temperature above which precipitation falls as rain | -58.0 to 113.0 deg F / -50.0 to 45.0 deg C | 32.0 to 40.0 deg F / 0.0 to 4.4 deg C |
| Snow Threshold Air Temperature | deg F / deg C | Temperature below which precipitation falls as snow | -58.0 to 113.0 deg F / -50.0 to 45.0 deg C | 30.0 to 35.0 deg F / -1.1 to 1.7 deg C |
| New Snow Albedo | unitless | Albedo of fresh snow | 0.0 to 1.0 | 0.6 to 0.95 |
| Ground Effective Depth | ft / m | Soil depth that is included within the energy budget computations | 3.28E-03 to 3.281 ft / 0.001 to 1.0 m | 3.28E-03 to 3.281 ft / 0.001 to 1.0 m |
| Capillary Retention Fraction | unitless | Amount of water that can be held within the snowpack | 0.001 to 0.25 | 0.03 to 0.05 |
| Snow Hydraulic Conductivity | ft/s / m/s | Describes how efficiently liquid water moves throughout the snowpack | 0.328 to 1.076 ft/s / 0.0001 to 0.1 m/s | 0.328 to 1.076 ft/s / 0.0001 to 0.1 m/s |
| Snow Thermal Conductivity | Btu/ft/deg C/hr / J/m/deg C/hr | Describes how efficiently heat is transferred within the snowpack | 0.017 to 0.17 Btu/ft/deg C/hr / 100 to 1100 J/m/deg C/hr | 0.02 to 0.1 Btu/ft/deg C/hr / 125 to 1000 J/m/deg C/hr |
Temperature • "Mature" method that has been • May be too simple for some situations
Index used successfully in thousands of
• Limited snowpack outputs compared to
studies throughout the U.S.
other methods
• Easy to set up and use
• Only requires precipitation and air
temperature boundary conditions
• More parsimonious than other
methods
Snowmelt References
Allen, R. G., Walter, I. A., Elliott, R., Howell, T., Itenfisu, D., and Jensen, M. (2005). "The ASCE Standardized
Reference Evapotranspiration Equation." ASCE, Reston, VA.
Anderson, E. (2006). "Snow accumulation and ablation model - SNOW-17, NWSRFS User Documentation."
U.S. National Weather Service, Silver Spring, MD.
Follum, M. L., Downer, C. W., Niemann, J. D., Roylance, S. M., and Vuyovich, C. M. (2015). "A radiation-derived
temperature-index snow routine for the GSSHA hydrologic model." Journal of Hydrology, 529, 723-736.
Follum, M. L., Niemann, J. D., and Fassnacht, S. R. (2019). "A comparison of snowmelt-derived streamflow
from temperature-index and modified-temperature-index snow models." Hydrological Processes, 33,
3030-3045.
Luce, C. H. (2000). "Scale Influences on the Representation of Snowpack Processes." [Doctoral dissertation -
Utah State University].
The fundamental water balance relationship that a continuous simulation model must satisfy to accurately
represent the hydrologic cycle is:

$$P = ET + Q + \Delta S$$

where P is precipitation, ET is evapotranspiration, Q is runoff, and ΔS is the change in water stored within the
watershed.
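A trivial check of this balance, assuming the common form P = ET + Q + ΔS with all terms as depths over the watershed and hypothetical annual totals:

```python
def storage_change(precip: float, evapotranspiration: float, runoff: float) -> float:
    """Change in watershed storage implied by the water balance P = ET + Q + dS."""
    return precip - evapotranspiration - runoff

# Hypothetical annual totals in mm
print(storage_change(precip=900.0, evapotranspiration=550.0, runoff=320.0))  # +30 mm of storage gained
```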
Evaporation
Evaporation is the process of converting water from the liquid to the gaseous state. The process happens
throughout a watershed. Water evaporates from the surface of lakes, reservoirs, and streams. Water also
evaporates from small depressions on the ground surface that fill with precipitation during a storm.
23 http://dx.doi.org/10.1175/1520-0442(1995)008%3c1261:ASSCCS%3e2.0.CO;2
Transpiration
Transpiration is the process of plants removing water from the soil and expelling it to the atmosphere. The
water is extracted by the roots, travels through the plant vascular system, and exits through structures called
stomata on the underside of the leaves. Some of the soil water is retained for the biological processes of the
plant, while the process of evaporation that happens in the stomata cools the plant.
Root-Water Uptake
Water uptake does not begin with the roots; it begins within the stomata which are usually found on the
underside of leaves. The stomata are tiny chambers with an opening to the air that can be regulated by the
plant. The stomata are opened or closed in response to many different environmental and physiological
factors. When the stomata are open, water vapor leaves through the opening as long as the relative humidity
is less than 100%. The source of the vapor is water that evaporates inside the stomata, where it is found in
the space between the cells that form the walls of the stomata. The evaporated water causes a meniscus to
form in the space between the cells and a consequent capillary force is transferred to the vascular system of
the plant. The capillary force is transmitted through the water in the vascular system from the leaves down to
the roots. Microscopic hairs on the roots keep them in contact with the moist soil. Water is thus drawn into
the roots due to the transmitted capillary force. The water then moves throughout the vascular system of the plant.
Combined Evapotranspiration
It can be relatively straightforward to measure evaporation from an open water body such as a lake.
However, measuring transpiration separately from evaporation over vegetation is very difficult. Consider that
the trees, grass, or crops will be transpiring during daylight hours. Also, any water on the ground surface
between the plants will be evaporating. Any measurement techniques will record the sum of evaporation and
transpiration. In most cases it is not important in the context of hydrologic simulation to be able to separate
the two distinct processes. The important component of the hydrologic cycle is the water that is removed
from the soil and returned to the atmosphere. Therefore, an inability to measure the evaporation and
transpiration separately is not a limitation in hydrologic simulation. It is almost always the case that
evaporation and transpiration are combined and termed evapotranspiration.
• Specified Evapotranspiration
• Annual Evapotranspiration
The following sections detail their unique concepts and uses.
The monthly average method is designed to work with data collected using evaporation pans. Pans are a
simple but effective technique for estimating evaporation. There is a long history of using them and data is
widely available throughout the United States and other regions.
Estimating Parameters
There are many sources of data for pan evaporation rates. The source could be the monthly evaporation
normals for a site where data is collected with an evaporation pan. More often the data are presented within
the context of a regional analysis utilizing multiple measure sites, for example Roderick and Farquhar (2004).
Within the United States the National Weather Service has estimated average evaporation as shown in Figure
25.
Figure 25.Calculated evaporation climatology in millimeters for the month of January using data from 1971 to
2000.
Daily and monthly average pan evaporation rates for CONUS can be visualized here: https://
www.cpc.ncep.noaa.gov/products/Soilmst_Monitoring/US/Evap/Evap_clim.shtml
The measured pan evaporation rates overestimate evapotranspiration. The usual practice is to multiply the
pan evaporation rate by a reduction ratio in order to approximate the evapotranspiration. The ratio typically
ranges from 0.5 to 0.85 with the specific value depending on how the evaporation pan is sited and the
atmospheric conditions. The ratio is larger when the relative humidity is higher. The ratio decreases as the
windspeed increases. Typical values taken from the United Nations Food and Agriculture Organization (FAO,
1998) are shown in Table 14 and Table 15. The values are for a Class A evaporation pan and depend on
how the pan is located relative to vegetation, as shown in Figure 26.
Table 15. Pan coefficients for Class A pan sited on bare ground with a bare ground fetch but measuring
transpiration over grass, and different levels of mean relative humidity (RH: Low < 40%, Medium 40–70%,
High > 70%) and windspeed.
Hamon Method
16)
where c is a coefficient, N is the number of daylight hours, and Pt is the saturated water vapor density at the
daily mean temperature.
The number of daylight hours N is computed as (Allen et al., 1998):
17)
24 https://ascelibrary.org/doi/epdf/10.1061/TACEAT.0008673
18)
19)
20)
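As an illustration of the daylight-hours computation referenced above, the sketch below uses Allen et al. (1998)-style relationships for the solar declination and the sunset hour angle; whether HEC-HMS uses exactly these expressions is not confirmed here.

```python
import math

def daylight_hours(day_of_year: int, latitude_deg: float) -> float:
    """Number of daylight hours N from the sunset hour angle (Allen et al., 1998 style)."""
    lat = math.radians(latitude_deg)
    decl = 0.409 * math.sin(2.0 * math.pi * day_of_year / 365.0 - 1.39)   # solar declination (rad)
    sunset_angle = math.acos(max(-1.0, min(1.0, -math.tan(lat) * math.tan(decl))))
    return (24.0 / math.pi) * sunset_angle

print(round(daylight_hours(172, 45.0), 1))  # near the summer solstice at 45 N: about 15.4 h
print(round(daylight_hours(355, 45.0), 1))  # near the winter solstice at 45 N: about 8.6 h
```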
Required Parameters
The only parameter required to utilize this method within HEC-HMS is the coefficient [in/g/m3 or mm/g/m3].
In addition, air temperature must be specified as a meteorologic boundary condition.
A tutorial using the Gridded Hamon method in an event simulation can be found here: Gridded
Precipitation Method25.
A tutorial using the Gridded Hamon method in a continuous simulation can be found here:
Advanced Applications of HEC-HMS Final Project26.
25 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/meteorologic-models-for-historical-precipitation/gridded-
precipitation-method
26 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/hec-hms-example-applications/advanced-applications-of-hec-hms-
final-project
Hargreaves Method
21)
22)
where KRS is a coefficient, Ra is extraterrestrial radiation, and Tmax and Tmin are the daily maximum and
minimum air temperature, respectively.
When the Hargreaves Evapotranspiration Method is used in combination with the Hargreaves Shortwave
Radiation Method, the computed Hargreaves evapotranspiration form is equivalent to Hargreaves and Allen
(2003) Eq. 8:
23)

$$ET_o = 0.0023 \, R_a \, (T + 17.8) \sqrt{T_{max} - T_{min}}$$

where ETo is the reference evapotranspiration, T is the mean daily air temperature (deg C), and Ra is
expressed in equivalent evaporation units.
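A sketch evaluating the Hargreaves and Allen (2003) Eq. 8 form referenced above; Ra must be supplied in equivalent evaporation units (mm/day), and the example inputs are hypothetical.

```python
import math

def hargreaves_eto(t_mean_c: float, t_max_c: float, t_min_c: float, ra_mm_day: float) -> float:
    """Reference evapotranspiration (mm/day) in the Hargreaves and Allen (2003) Eq. 8 form.

    ra_mm_day is extraterrestrial radiation expressed as equivalent evaporation (mm/day)."""
    return 0.0023 * ra_mm_day * (t_mean_c + 17.8) * math.sqrt(t_max_c - t_min_c)

# Hypothetical mid-summer day: Ra ~ 16.5 mm/day, mean 24 C, max 31 C, min 17 C
print(round(hargreaves_eto(24.0, 31.0, 17.0, 16.5), 1))  # about 5.9 mm/day
```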
Required Parameters
The only parameter required to utilize this method within HEC-HMS is the coefficient [deg C-1]. In addition, air
temperature must be specified as a meteorologic boundary condition.
The Priestley-Taylor relationship computes the latent heat flux as

$$LE = \alpha \, \frac{\Delta}{\Delta + \gamma} \left( R - G \right)$$

where α is the dryness coefficient, R is the net incoming radiation, G is the heat flux into the ground (R - G =
LE + H, where H is sensible heat and LE is latent heat), Δ is the slope of the saturation vapour pressure
curve, and γ is the psychrometric constant.
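A sketch of the Priestley-Taylor form implied by these definitions; the default dryness coefficient of 1.26 is a commonly cited value, and the example slope and psychrometric constant are illustrative, not HEC-HMS defaults.

```python
def priestley_taylor_le(net_radiation: float, ground_heat_flux: float,
                        slope_svp: float, psychrometric: float,
                        dryness_coefficient: float = 1.26) -> float:
    """Latent heat flux LE (same units as R and G) in the Priestley-Taylor form:
    LE = alpha * (slope / (slope + gamma)) * (R - G)."""
    return dryness_coefficient * slope_svp / (slope_svp + psychrometric) * (
        net_radiation - ground_heat_flux)

# Hypothetical values: R - G = 400 W/m^2, slope ~ 0.145 kPa/C at about 20 C, gamma ~ 0.066 kPa/C
print(round(priestley_taylor_le(400.0, 0.0, 0.145, 0.066), 1))  # about 346 W/m^2
```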
Required Parameters
The only parameter required to utilize this method within HEC-HMS is the dryness coefficient. In addition, air
temperature and net radiation must be specified as a meteorologic boundary condition. Net radiation should
be computed, entered in the program as a radiation time-series gage, and selected as the shortwave
radiation method.
27 https://journals.ametsoc.org/view/journals/mwre/100/2/1520-0493_1972_100_0081_otaosh_2_3_co_2.xml
28 https://royalsocietypublishing.org/doi/10.1098/rspa.1948.0037
29 https://repository.rothamsted.ac.uk/item/8v5v7/evaporation-and-environment
The Penman-Monteith combination equation is

$$\lambda ET = \frac{\Delta \left( R_n - G \right) + \rho_a \, c_p \, (e_s - e_a) / r_a}{\Delta + \gamma \left( 1 + r_s / r_a \right)}$$

where λET is the latent heat flux, Rn is the net radiation at the crop surface, G is the soil heat flux, ρa is the
mean air density at constant pressure, cp is the specific heat of air, es is the saturation vapour pressure, ea is
the actual vapour pressure, es - ea is the vapour pressure deficit, Δ is the slope of the saturation vapour
pressure temperature relationship, γ is the psychrometric constant, and rs and ra are the (bulk) surface and
aerodynamic resistances, respectively.
The bulk surface resistance accounts for the resistance of vapour flow through the transpiring crop
(stomata, leaves) and evaporating soil surface. The aerodynamic resistance describes the upward resistance
from vegetation resulting from the friction from air flowing over vegetated surfaces.
While a large number of empirical evapotranspiration methods have been developed worldwide, some have
been calibrated locally leading to limited global validity. The FAO Penman Monteith method uses the concept
of a reference surface, removing the need to define parameters for each crop and stage of growth.
Evapotranspiration rates of different crops are related to the evapotranspiration rate from the reference
surface through the use of crop coefficients. A hypothetical grass reference was selected to avoid the need
for local calibration. According to FAO (Allen et al., 1998):
The reference surface closely resembles an extensive surface of green grass of uniform height, actively
growing, completely shading the ground and with adequate water. The requirements that the grass surface
should be extensive and uniform result from the assumption that all fluxes are one-dimensional upwards.
The reference crop is defined as a hypothetical crop with a height of 0.12 m, a surface resistance of 70 s/m,
and an albedo of 0.23. The FAO's simplified equation for reference evapotranspiration is (Allen et al., 1998):
25)

$$ET_o = \frac{0.408 \, \Delta \left( R_n - G \right) + \gamma \, \dfrac{900}{T + 273} \, u_2 \left( e_s - e_a \right)}{\Delta + \gamma \left( 1 + 0.34 \, u_2 \right)}$$
where ETo is the reference evapotranspiration, Rn is the net radiation at the crop surface, G is the soil heat
flux density, T is the mean daily air temperature at 2 m height, u2 is the wind speed at 2 m height, es is the
saturation vapour pressure, ea is the actual vapour pressure, es - ea is the vapour pressure deficit, Δ is the
slope of the saturation vapour pressure curve, and γ is the psychrometric constant.
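A sketch of the FAO-56 reference evapotranspiration calculation; the vapour-pressure handling here (mean relative humidity applied to the saturation pressure at the mean temperature) is a simplification of the full FAO-56 procedure, and the example inputs are hypothetical.

```python
import math

def fao56_reference_et(t_mean_c: float, rn: float, g: float, u2: float,
                       rh_mean_pct: float, pressure_kpa: float = 101.3) -> float:
    """Daily reference evapotranspiration (mm/day) from the FAO-56 Penman-Monteith
    equation (Allen et al., 1998). Rn and G in MJ/m^2/day, u2 in m/s."""
    # Saturation vapour pressure (kPa) and slope of the curve (kPa/deg C)
    es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))
    slope = 4098.0 * es / (t_mean_c + 237.3) ** 2
    ea = es * rh_mean_pct / 100.0                    # simplified actual vapour pressure (kPa)
    gamma = 0.000665 * pressure_kpa                  # psychrometric constant (kPa/deg C)
    numerator = 0.408 * slope * (rn - g) + gamma * (900.0 / (t_mean_c + 273.0)) * u2 * (es - ea)
    denominator = slope + gamma * (1.0 + 0.34 * u2)
    return numerator / denominator

# Hypothetical summer day: Rn = 15 MJ/m^2/day, G ~ 0, 25 C mean temperature, 2 m/s wind, 50% RH
print(round(fao56_reference_et(25.0, 15.0, 0.0, 2.0, 50.0), 1))  # about 6 mm/day
```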
Required Parameters
The parameterization is entirely dependent on the atmospheric conditions: solar radiation, air temperature,
humidity, and wind speed measurements. Weather measurements should be made at 2 m above the ground
surface (or converted to that height).
Priestley Taylor
Interception
Many watersheds have some type of vegetation growing on the land surface. The vegetation in a natural
watershed could be grass, shrubs, or forest. Agricultural watersheds could have field crops such as wheat or
row crops such as tomatoes. Even urban watersheds often have vegetation with some cities maintaining
extensive urban forests. Falling precipitation first impacts the leaves and other surfaces of the vegetation.
Some of the precipitation will remain on the plant while the remainder will eventually reach the ground. The
portion of precipitation that remains on the plant is called interception.
Precipitation that is intercepted can return to the atmosphere through evaporation. Evaporation is
significantly reduced during a precipitation event because the vapor pressure gradient is reduced by the high
humidity associated with precipitation. However, after the precipitation event is over, the humidity will usually
drop and restore the vapor pressure gradient. This allows evaporation to increase and intercepted
precipitation will return to the atmosphere.
The amount of interception is a function of the species of plant and the life stage of the plant. In general,
forests have the highest potential for interception with evergreen species collecting more precipitation than
deciduous types. Shrubs often have an intermediate about of interception capability with grasses and crops
showing the least ability to capture precipitation. Life stage is also important. Young plants are usually
smaller and consequently capture less precipitation. Deciduous trees can capture a significant amount of
precipitation during summer months when the canopy is full, but collect almost no precipitation in the winter
when the leaves have fallen off.
Water that impacts on vegetation and does not remain as intercepted precipitation can reach the ground
through two primary routes: throughfall or stemflow.
Throughfall refers to precipitation that initially lands on the vegetation surface, and then falls off the
vegetation to reach the ground. The leaves of a particular plant species have a limited capacity for holding
water in tension. Water beyond this capacity cannot remain on the vegetation for very long and will
eventually fall off the leaf. It is possible for the amount of water that can be held on a leaf to be affected by
atmospheric conditions such as windspeed.
Surface Depressions
Precipitation can arrive on the ground surface through a variety of pathways. The precipitation lands directly
on the ground when there is no vegetation in the watershed, or the precipitation can pass through gaps in the
vegetation cover. Precipitation may also arrive on the surface as throughfall or stemflow. The water on the
ground will collect in depressions. The capacity of depressions to hold water varies according to the land
use. For example, a typical asphalt parking lot has a very small capacity for storing surface water.
Conversely, conservation agriculture practices use tillage techniques designed to increase the capture of
water in surface depressions.
Water captured in surface depressions can infiltrate into to the soil after precipitation has stopped. The
amount of depression storage can control the partitioning of precipitation between infiltration and surface
runoff. Watersheds with a small depression storage capacity will capture very little precipitation and
infiltration will occur only during storm events. Watersheds with substantial depression storage will capture
precipitation and infiltration it during the storm event, and water in depressions at the end of the event will
infiltration after the storm has stopped. Water that is not captured in surface depressions will usually flow
over the surface as direct runoff.
26)
where ν is the flow per unit area, ψ is the matric potential (a negative value), z is the spatial coordinate
(measured positive downward), and K is the hydraulic conductivity. If the soil is saturated, then K is the
saturated hydraulic conductivity and is a function of the soil properties and the water properties. For
unsaturated conditions the conductivity is still a function of soil and water properties, but is additionally a
Data Requirements
The program considers that all land and water in a watershed can be categorized as either:
• Pervious surface
Directly-connected impervious surface in a watershed is that portion of the watershed for which all
contributing precipitation runs off, with no infiltration, evaporation, or other volume losses. The infiltration
loss methods included in the program include the ability to specify the percentage of the watershed which is
impervious. Impervious surface is usually associated with urbanized areas including roads, parking lots, and
building roofs. Precipitation on the pervious surfaces is subject to losses.
Infiltration
If the moisture deficit is greater than zero, water will infiltrate into the soil layer. Until the moisture deficit has
been satisfied, no percolation out of the bottom of the soil layer will occur. After the moisture deficit has
been satisfied, the rate of infiltration into the soil layer is defined by the constant rate. The percolation rate
out of the bottom of the soil layer is also defined by the constant rate while the soil layer remains saturated.
Percolation stops as soon as the soil layer drops below saturation (moisture deficit greater than zero).
Moisture deficit increases in response to the canopy extracting soil water to meet the potential ET demand.
Since this method allows for the extraction of infiltrated water, this method can be used for both
event and continuous simulations.
Evapotranspiration
Evapotranspiration removes water from the soil layer between storm events. The potential
evapotranspiration rate is taken from the meteorologic model, where a variety of methods are available for
representing that process. The evapotranspiration rate is used as specified by the meteorologic model
without any modification. Water is removed from the soil layer at the potential rate for every time interval
when there is no precipitation. There is no further evapotranspiration after the water in the soil layer is
reduced to zero. Evapotranspiration will start again as soon as water is present in the soil layer and there is
no precipitation.
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the initial deficit [inches or
millimeters], maximum deficit [inches or millimeters], constant rate [in/hr or mm/hr], and directly connected
impervious area [percent].
The initial deficit defines the volume of water that is be required to fill the soil layer at the start of the
simulation while the maximum deficit specifies the total amount of water the soil layer can hold.
The maximum deficit is typically defined using the product of the effective soil porosity and an assumed
active layer depth, but it should be calibrated using observed data.
The initial deficit must be less than or equal to the maximum deficit. Both parameters are
specified as effective depths (e.g., inches or millimeters).
The constant rate defines the rate at which precipitation will be infiltrated into the soil layer after the initial
deficit has been satisfied in addition to the rate at which percolation occurs once the soil layer is saturated.
Typically, this parameter is equated with the saturated hydraulic conductivity of the soil.
Finally, the percentage of the subbasin which is directly connected impervious area can be specified.
Directly connected impervious areas are surfaces where runoff is conveyed directly to a waterway or
stormwater collection system. These surfaces differ from disconnected impervious areas where runoff
encounters permeable areas which may infiltrate some (or all) of the runoff prior to reaching a waterway or
stormwater collection system. No loss calculations are carried out on the specified percentage of the
subbasin; all precipitation that falls on that portion of the subbasin becomes excess precipitation and
subject to direct runoff.
The values presented here are meant as initial estimates. This is the same for all sources of similar data
including Engineer Manual 1110-2-1417 Flood-Runoff Analysis32 and the Introduction to Loss Rate
Tutorials33. Regardless of the source, these initial estimates must be calibrated and validated.
30 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-loss-methods-within-hec-hms/applying-the-deficit-and-
constant-loss-method
31 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-loss-methods-within-hec-hms/formatting-gssurgo-data-
for-use-within-hec-hms
32 https://www.publications.usace.army.mil/Portals/76/Publications/EngineerManuals/EM_1110-2-1417.pdf?ver=VFC-
A5m2Q18fxZsnv19U8g%3d%3d
33 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-loss-methods-within-hec-hms/introduction-to-the-loss-
rate-tutorials
27)
28)
29)
where pt = precipitation rate [in/hr or mm/hr] at time t, ERAIN = precipitation exponent, AK = loss rate
coefficient at the beginning of the time interval, DLTK = incremental increase in the loss rate coefficient
during the first DLTKR [in or mm] of accumulated loss, Ft. When Ft is greater than DLTKR, DLTK = 0. Note
that there is no direct conversion between metric and English units for the coefficients used by this method.
Consequently, separate calibrations excesses are required to derive site-specific coefficient for both unit
systems.
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the initial range DLTKR, [in or
mm], the initial coefficient STRKR, the coefficient ratio, the precipitation exponent ERAIN, and directly
connected impervious area [percent]. The initial range (DLTKR) is the amount of initial accumulated
infiltration during which the loss rate is increased. This parameter is considered to be a function primarily of
antecedent soil moisture deficiency and is usually storm dependent. The initial coefficient (STRKR) specifies
the starting loss rate coefficient on the exponential infiltration curve. It is assumed to be a function of
infiltration characteristics and consequently may be correlated with soil type, land use, vegetation cover, and
other properties of a subbasin. The coefficient ratio indicates the rate at which the exponential decrease in
infiltration capability proceeds. It may be considered a function of the ability of the surface of a subbasin to
absorb precipitation and should be reasonably constant for large, homogeneous areas. The precipitation
exponent reflects the influence of precipitation rate on subbasin-average loss characteristics. It reflects the
manner in which storms occur within an area and may be considered a characteristic of a particular region;
this parameter varies between 0.0 and 1.0.
The values presented here are meant as initial estimates. This is the same for all sources of similar data
including Engineer Manual 1110-2-1417 Flood-Runoff Analysis34 and the Introduction to Loss Rate
Tutorials35. Regardless of the source, these initial estimates must be calibrated and validated.
34 https://www.publications.usace.army.mil/Portals/76/Publications/EngineerManuals/EM_1110-2-1417.pdf?ver=VFC-
A5m2Q18fxZsnv19U8g%3d%3d
35 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-loss-methods-within-hec-hms/introduction-to-the-loss-
rate-tutorials
According to EM 1110-2-1417
…the transport of infiltrated rainfall through the soil profile and the infiltration capacity of the soil is
governed by Richards' equation…[which is] derived by combining an unsaturated flow form of Darcy's law
with the requirements of mass conservation.
EM 1110-2-1417 describes in detail how the Green and Ampt model combines and solves these equations. In
summary, the model computes the precipitation loss on the pervious area in a time interval as:
in which ft = loss during period t, K = saturated hydraulic conductivity, ( - ) = volume moisture deficit, Sf
= wetting front suction, and Ft = cumulative loss at time t. The precipitation excess on the pervious area is
the difference in the MAP during the period and the loss computed with the equation shown above. As
implemented, the Green and Ampt model also includes an initial abstraction. This initial condition represents
interception in the canopy or surface depressions not otherwise included in the model. This interception is
separate from the time to ponding that is an integral part of the model. The solution method used follows
that of Li et al. (1976)36.
Since no means for extracting infiltrated water is included, this method should only be used for
event simulation.
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the initial moisture content or
deficit [in/in or mm/mm], wetting front suction head [in or mm], saturated hydraulic conductivity [in/hr or
mm/hr], and directly connected impervious area [percent].
36 https://ascelibrary.org/doi/pdf/10.1061/JRCEA4.0001092
The initial moisture content or deficit defines the starting saturation of the soil layer at the start of the
simulation. This parameter is a function of the watershed moisture at the beginning of the simulation. It may
be estimated in the same manner as the initial abstraction for other loss models.
The wetting front suction head describes the movement of water downwards through the soil column.
The saturated hydraulic conductivity defines the minimum rate at which precipitation will be infiltrated into
the soil layer after the soil column is fully saturated.
Finally, the percentage of the subbasin which is directly connected impervious area can be specified.
Directly connected impervious areas are surfaces where runoff is conveyed directly to a waterway or
stormwater collection system. These surfaces differ from disconnected impervious areas where runoff
encounters permeable areas which may infiltrate some (or all) of the runoff prior to reaching a waterway or
stormwater collection system. No loss calculations are carried out on the specified percentage of the
subbasin; all precipitation that falls on that portion of the subbasin becomes excess precipitation and
subject to direct runoff.
The values presented here are meant as initial estimates. This is the same for all sources of similar data
including Engineer Manual 1110-2-1417 Flood-Runoff Analysis39 and the Introduction to Loss Rate
Tutorials40. Regardless of the source, these initial estimates must be calibrated and validated.
Since no means for extracting infiltrated water is included, this method should only be used for
event simulation.
The underlying concept of the initial and constant-rate loss model is that the maximum potential rate of
precipitation loss, , is constant throughout an event. Thus, if is the MAP depth during a time interval t
to t+ , the excess, , during the interval is given by:
37 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-loss-methods-within-hec-hms/applying-the-green-and-
ampt-loss-method
38 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-loss-methods-within-hec-hms/formatting-gssurgo-data-
for-use-within-hec-hms
39 https://www.publications.usace.army.mil/Portals/76/Publications/EngineerManuals/EM_1110-2-1417.pdf?ver=VFC-
A5m2Q18fxZsnv19U8g%3d%3d
40 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Introduction+to+the+Loss+Rate+Tutorials
An initial loss, , is added to the model to represent interception and depression storage. Interception
storage is a consequence of absorption of precipitation by surface cover, including plants in the watershed.
Depression storage is a consequence of depressions in the watershed topography; water is stored in these
and eventually infiltrates or evaporates. This loss occurs prior to the onset of runoff. Until the accumulated
precipitation on the pervious area exceeds the initial loss volume, no runoff occurs. Thus, the excess is given
by:
31)
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the initial loss [inches or
millimeters], constant rate [in/hr or mm/hr], and directly connected impervious area [percent].
A tutorial describing an example application of this loss method, including parameter estimation
and calibration, can be found here: Applying the Initial and Constant Loss Method41.
A tutorial describing how gSSURGO data can be formatted for use within HEC-HMS can be found
here: Formatting gSSURGO Data for Use within HEC-HMS42.
The initial loss defines the volume of water that is required to fill the soil layer at the start of the simulation.
This parameter is typically defined using the product of the soil moisture state at the start of the simulation
and an assumed active layer depth, but it should be calibrated using observed data. If the watershed is in a
saturated condition, Ia will approach zero. If the watershed is dry, then Ia will increase to represent the
maximum precipitation depth that can fall on the watershed with no runoff; this will depend on the watershed
terrain, land use, soil types, and soil treatment. Table 6-1 of EM 1110-2-1417 suggests that this ranges from
10-20% of the total rainfall for forested areas to 0.1-0.2 inches for urban areas.
The constant rate defines the rate at which precipitation will be infiltrated into the soil layer after the initial
loss volume has been satisfied. Typically, this parameter is equated with the saturated hydraulic
conductivity of the soil. The SCS (1986) classified soils on the basis of this infiltration capacity, and Skaggs
and Khaleel (1982) have published estimates of infiltration rates for those soils, as shown in the following
table. These may be used in the absence of better information. Because the model parameter is not a
measured parameter, it and the initial condition are best determined by calibration. Chapter 9 of this manual
describes the program's calibration capability.
Finally, the percentage of the subbasin which is directly connected impervious area can be specified.
Directly connected impervious areas are surfaces where runoff is conveyed directly to a waterway or
stormwater collection system. These surfaces differ from disconnected impervious areas where runoff
encounters permeable areas which may infiltrate some (or all) of the runoff prior to reaching a waterway or
stormwater collection system. No loss calculations are carried out on the specified percentage of the
subbasin; all precipitation that falls on that portion of the subbasin becomes excess precipitation and
subject to direct runoff.
41 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-loss-methods-within-hec-hms/applying-the-initial-and-
constant-loss-method
42 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-loss-methods-within-hec-hms/formatting-gssurgo-data-
for-use-within-hec-hms
The values presented here are meant as initial estimates. This is the same for all sources of similar data
including Engineer Manual 1110-2-1417 Flood-Runoff Analysis43 and the Introduction to Loss Rate
Tutorials44. Regardless of the source, these initial estimates must be calibrated and validated.
43 https://www.publications.usace.army.mil/Portals/76/Publications/EngineerManuals/EM_1110-2-1417.pdf?ver=VFC-
A5m2Q18fxZsnv19U8g%3d%3d
44 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Introduction+to+the+Loss+Rate+Tutorials
The layered Green Ampt loss method uses two layers to represent the dynamics of water movement in the
soil. Surface water infiltrates into the upper layer, called layer 1. Layer 1 produces seepage to the lower
layer, called layer 2. Both layers are functionally identical but may have separate and distinct parameters.
Separate parameters can be used to represent layered soil profiles and also allows for better representation
of stratified soil drying between storms. Each layer is described using a bulk depth and water content values
for saturation, field capacity, and wilting point. Soil water in layer 2 can percolate out of the soil profile. The
layered Green and Ampt method is intended to be used in combination with the linear reservoir baseflow
method. When used in this manner, the percolated water can be split between baseflow and deep aquifer
recharge.
First, precipitation fills the canopy storage. Precipitation that exceeds the canopy storage will overflow onto
the land surface. The new precipitation is added to any water already in surface storage. The infiltration rate
from the surface into layer 1 is calculated with the Green and Ampt equation so long as layer 1 is below
saturation. The infiltration rate changes to the current seepage rate when layer 1 reaches saturation.
Infiltration water is added to the storage in layer 1. Seepage out of layer 1 only occurs when the storage
exceeds field capacity. Maximum seepage occurs when layer 1 is at saturation and declines to zero at field
capacity. The seepage rate changes to the percolation rate when layer 2 is saturated. Seepage is added to
the storage in layer 2. Percolation out of layer 2 only occurs when the storage exceeds field capacity.
Maximum percolation occurs when layer 2 is at saturation and declines to zero at field capacity. Most soils
observe decreasing hydraulic conductivity rates at greater depths below the surface. This means that
typically the seepage rate is reduced to the percolation rate when layer 2 saturates, and the infiltration rate is
reduced to the seepage rate when layer 1 saturates. The infiltration rate will change to the percolation rate if
both layers 1 and 2 are saturated. Both convergence control and adaptive time stepping are used to
accurately resolve the saturation of each layer.
The canopy extracts water from soil storage to meet the potential ET demand. First, soil water is extracted
from layer 1 at the full ET rate. This extraction from layer 1 continues until half of the available water has
been taken to meet the ET demand. The available water is defined as the saturation content minus the
wilting point content, multiplied by the bulk layer thickness. Second, soil water is extracted from layer 2 at
the full ET rate. This extraction from layer 2 also continues until half of the available water has been taken.
Third, the ET demand is applied equally to both layers until one of them reaches wilting point content.
Finally, the ET demand is applied to the remaining layer until it also reaches wilting point content. Soil water
below the wilting point content is never used for ET.
Since this method allows for the extraction of infiltrated water, this method can be used for both
event and continuous simulations.
Required Parameters
Parameters that are required to utilize the layered Green and Ampt method include those which were
previously mentioned for the “standard” Green and Ampt method in addition to: Layer 1 and 2 thicknesses [in
or mm], field capacity [in/in or mm/mm], wilting point [in/in or mm/mm], layer 1 maximum seepage rate [in/
hr or mm/hr], layer 2 maximum percolation rate [in/hr or mm/hr], and dry duration [days].
The layer 1 and 2 thicknesses define the bulk depth of soil and are typically estimated using soil maps.
The field capacity content specifies the point where the soil naturally stops seeping under gravity while the
wilting point content specifies the amount of water remaining in the soil when plants are no longer capable
of transpiring infiltrated water. Both parameters are typically estimated using predominant soil texture and
literature values.
The dry duration sets the amount of time that must pass after a storm event in order to recalculate the initial
condition. An initial estimate of 12 hours has been found to be reasonable for the dry duration. However,
this parameter should be calibrated using observed data.
Finally, the percentage of the subbasin which is directly connected impervious area can be specified.
Directly connected impervious areas are surfaces where runoff is conveyed directly to a waterway or
stormwater collection system. These surfaces differ from disconnected impervious areas where runoff
encounters permeable areas which may infiltrate some (or all) of the runoff prior to reaching a waterway or
stormwater collection system. No loss calculations are carried out on the specified percentage of the
subbasin; all precipitation that falls on that portion of the subbasin becomes excess precipitation and
subject to direct runoff.
The values presented here are meant as initial estimates. This is the same for all sources of similar data
including Engineer Manual 1110-2-1417 Flood-Runoff Analysis45 and the Introduction to Loss Rate
Tutorials46. Regardless of the source, these initial estimates must be calibrated and validated.
45 https://www.publications.usace.army.mil/Portals/76/Publications/EngineerManuals/EM_1110-2-1417.pdf?ver=VFC-
A5m2Q18fxZsnv19U8g%3d%3d
46 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Introduction+to+the+Loss+Rate+Tutorials
47 https://doi.org/10.1016/j.jhydrol.2021.126490
where ft (mm/hr or in/hr) is the potential infiltration rate at time t, Ft (mm or in) is the cumulative infiltration
at time t, Fc (mm or in) is the initial deficit, m (1/hr) is the infiltration rate decay factor with respect to
cumulative infiltration, and Keff (mm/hr or in/hr) is the constant infiltration rate or effective hydraulic
conductivity.
In HEC-HMS, the LC method was further modified to allow removal of soil moisture via ET in the same way
as in the deficit and constant method (see page 132). Between precipitation events, the soil layer will lose
moisture as the canopy extracts infiltrated water. Unless a canopy method is selected, no soil water
extraction will occur. This method may also be used in combination with a surface method that will hold
water on the land surface. The water in surface storage can infiltrate into the soil layer and/or be removed
through ET.
Infiltration
The LC model lets the potential infiltration rate f start at an initial value f0 (mm/hr or in/hr) and decrease
linearly as a function of cumulative infiltration (Ft) until reaching a constant rate Keff when cumulative
infiltration is equal to initial deficit Fc. Due to the linear relationship, only m and Fc need to be defined in
addition to Keff. Compared to other simple loss methods such as the initial and constant or curve number
model, the LC method has the advantage that it does not use an initial abstraction term and will simulate
runoff from the start of a rainfall event if precipitation intensity for a given time step exceeds potential
infiltration rate.
Event-Based Simulation
The LC model accounts for a single, hypothetical soil layer, hereafter referred to as the active soil layer. The
soil layer has a maximum capacity to hold water. Figure 1 below shows a conceptual representation of the
linear deficit and constant loss method when the active soil layer is not completely saturated, i.e. the layer
contains less water than the maximum storage capacity. The deficit, measured in mm or in, is the amount of
water required at any point in time to bring the active layer to saturation. During event-based simulation
(Figure 1, left), water will infiltrate into the soil at a rate determined by the initial deficit, decay factor, and
cumulative infiltration since the onset of the storm. If at any point in time the precipitation rate exceeds the
potential infiltration rate, the difference (infiltration excess) will become runoff. If the precipitation rate at a
given time is equal to or less than the potential infiltration rate, all rainfall infiltrates into the soil.
Continuous Simulation
The LC method also allows for continuous simulation (see Figure 1, right) when used in combination with a
canopy method that allows extraction of water from the soil due to evapotranspiration. Continuous
simulation requires the specification of another loss parameter, the maximum deficit. This value can be
interpreted as the porosity multiplied by the thickness of the active layer and is measured in millimeters or
inches.
For continuous simulation, the modeler must select a canopy method (under subbasin elements) and specify
an evapotranspiration (ET) method (under meteorologic models). ET removes water from the active soil layer
between and, depending on user setting, during storm events. The potential evapotranspiration rate is taken
from the meteorologic model, where a variety of methods are available for representing that process. The ET
rate is used as specified by the meteorologic model without any modification. There is no further
evapotranspiration after the water in the soil layer is reduced to zero. ET will start again as soon as water is
present in the soil layer. Unless a canopy and ET method are selected, no soil water extraction will occur.
The canopy method also allows the modeler to simulate interception, the portion of precipitation intercepted
by vegetation that never reaches the ground.
Percolation
Once the active layer has saturated (the deficit is equal to zero), the potential infiltration rate becomes equal
to the constant rate. Water will percolate out of the bottom of the active soil layer at a rate equal to the actual
infiltration rate (see Figure 2). Percolation water is lost from the system. Percolation will continue as long as
the soil layer is at maximum storage capacity, and precipitation continues. The linear deficit and constant
method should therefore not be used for systems were:
• The water table is close to the surface, and the vadose zone could saturate completely during the
analysis period; or
• An impermeable layer is present at a depth sufficiently shallow that that a perched aquifer could form
during the analysis period.
In both cases, there would be no percolation once the active layer is saturated, and all additional precipitation
would become runoff.
Required Parameters
Parameters required to utilize this method within HEC-HMS include the initial deficit (mm or in), maximum
deficit (mm or in), constant rate (mm/hr or in/hr), decay factor (1/hr) and directly connected impervious area
(percent).
The initial deficit is the soil moisture deficit of the active soil layer at the onset of a storm event. The
potential infiltration rate decreases linearly with cumulative infiltration until the initial deficit is satisfied.
Once satisfied, the potential infiltration rate becomes constant (see constant rate below).
Table 1: Proposed values for initial deficit, sand, loamy sand, and sandy loam texture classes.
0.02 48 (36-72)
0.06 36 (23-66)
0.10 23 (10-54)
0.14 10 (0-41)
0.18 0 (0-28)
0.22 * 0 (0-16)
Sand 31 (19-44)
The decay factor is the rate of infiltration potential decay with respect to cumulative infiltration. The default
value is -3, but users can change the decay factor values within the range of 0 to -8. Sensitivity analysis has
shown that the LC loss method is substantially more sensitive to changes in the initial deficit parameter
compared to decay factor, and that the latter may be held constant at -3 (paper under review).
Finally, the percentage of the subbasin comprised of directly connected impervious area can be specified.
Directly connected impervious areas are surfaces where runoff is conveyed directly to a waterway or
stormwater collection system. These surfaces differ from disconnected impervious areas where runoff
encounters permeable areas which may infiltrate some (or all) of the runoff prior to reaching a waterway or
stormwater collection system. No loss calculations are carried out on the specified percentage of the
subbasin; all precipitation that falls on that portion of the subbasin becomes excess precipitation and
subject to direct runoff.
The Soil Conservation Service (SCS) Curve Number (CN) model estimates precipitation excess as a function
of cumulative precipitation, soil cover, land use, and antecedent moisture, using the following equation:
33)
where Pe = accumulated precipitation excess at time t; P = accumulated rainfall depth at time t; Ia = the
initial abstraction (initial loss); and S = potential maximum retention, a measure of the ability of a watershed
to abstract and retain storm precipitation. Until the accumulated rainfall exceeds the initial abstraction, the
precipitation excess, and hence the runoff, will be zero. From analysis of results from many small
experimental watersheds, the SCS developed an empirical relationship of Ia and S:
34)
35)
48 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Applying+the+Linear+Deficit+and+Constant+Loss+Method
36)
CN values range from 100 (for water bodies) to approximately 30 for permeable soils with high infiltration
rates. Publications from the Soil Conservation Service (1971, 1986) provide further background and details
on use of the CN model.
Since no means for extracting infiltrated water is included, this method should only be used for
event simulation.
Required Parameters
Parameters that are required to utilize the SCS curve number method include a curve number and directly
connected impervious area [percent]. Optionally, Ia [in or mm] can be entered as well.
The curve number that is entered should be a “composite” curve number that represents all of the different
soil group and land use combinations in the subbasin. This value should not include any impervious area
that will be specified separately as the percentage of impervious area. Typically, curve numbers are derived
from soils maps. Ia defines the amount of precipitation that must fall before excess precipitation results. If
this value is not entered, it will be automatically calculated using:
37)
The CN for a watershed can be estimated as a function of land use, soil type, and antecedent watershed
moisture, using tables published by the SCS. For convenience, Appendix A of this document includes CN
tables developed by the SCS and published in Technical Report 55 (commonly referred to as TR-55). With
these tables and knowledge of the soil type and land use, the single-valued CN can be found. For example,
for a watershed that consists of a tomato field on sandy loam near Davis, CA, the CN shown in Table 2-2b of
the TR-55 tables is 78. (This is the entry for straight row crop, good hydrologic condition, B hydrologic soil
group.) This CN is entered directly in the appropriate input form. For a watershed that consists of several
soil types and land uses, a composite CN is calculated as:
38)
in which CNcomposite = the composite CN used for runoff volume computations; i = an index of watersheds
subdivisions of uniform land use and soil type; CNi = the CN for subdivision i; and Ai = the drainage area of
subdivision i.
Finally, the percentage of the subbasin which is directly connected impervious area can be specified.
Directly connected impervious areas are surfaces where runoff is conveyed directly to a waterway or
stormwater collection system. These surfaces differ from disconnected impervious areas where runoff
encounters permeable areas which may infiltrate some (or all) of the runoff prior to reaching a waterway or
stormwater collection system. No loss calculations are carried out on the specified percentage of the
subbasin; all precipitation that falls on that portion of the subbasin becomes excess precipitation and
subject to direct runoff.
The values presented here are meant as initial estimates. This is the same for all sources of similar data
including Engineer Manual 1110-2-1417 Flood-Runoff Analysis50 and the Introduction to Loss Rate
Tutorials51. Regardless of the source, these initial estimates must be calibrated and validated.
Required Parameters
Parameters that are required to utilize the Smith Parlange method include the initial water content [in/in or
mm/mm], the residual water content [in/in or mm/mm], the saturated water content [in/in or mm/mm],
bubbling pressure [in or mm], the pore size distribution, the saturated hydraulic conductivity [in/hr or mm/hr],
and directly connected impervious area [percent]. An optional temperature time series may be specified. If a
temperature time series is selected, a beta zero parameter must also be specified.
The initial water content refers to the initial saturation of the soil at the beginning of a simulation and should
be determined through model calibration. The residual water content specifies the amount of water
remaining in the soil after all drainage by gravity has ceased. It should be specified in terms of volume ratio
and is commonly estimated using the predominant soil texture. The saturated water content specifies the
maximum water holding capacity in terms of volume ratio and is often assumed to be equivalent to the total
porosity of the soil.
49 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/cn-tables
50 https://www.publications.usace.army.mil/Portals/76/Publications/EngineerManuals/EM_1110-2-1417.pdf?ver=VFC-
A5m2Q18fxZsnv19U8g%3d%3d
51 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Introduction+to+the+Loss+Rate+Tutorials
The values presented here are meant as initial estimates. This is the same for all sources of similar data
including Engineer Manual 1110-2-1417 Flood-Runoff Analysis52 and the Introduction to Loss Rate
Tutorials53. Regardless of the source, these initial estimates must be calibrated and validated.
52 https://www.publications.usace.army.mil/Portals/76/Publications/EngineerManuals/EM_1110-2-1417.pdf?ver=VFC-
A5m2Q18fxZsnv19U8g%3d%3d
53 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-loss-methods-within-hec-hms/introduction-to-the-loss-
rate-tutorials
The SMA model represents the watershed with a series of storage layers, as illustrated above. Rates of
inflow to, outflow from, and capacities of the layers control the volume of water lost or added to each of
these storage components. Current storage contents are calculated during the simulation and vary
continuously both during and between storms. The different storage layers in the SMA model are:
• Surface-interception storage. Surface depression storage is the volume of water held in shallow
surface depressions. Inflows to this storage come from precipitation not captured by canopy
interception and in excess of the infiltration rate. Outflows from this storage can be due to infiltration
and to ET. Any contents in surface depression storage at the beginning of the time step are available
for infiltration. If the water available for infiltration exceeds the infiltration rate, surface interception
storage is filled. Once the volume of surface interception is exceeded, this excess water contributes
to surface runoff.
• Soil-profile storage. The soil profile storage represents water stored in the top layer of the soil. Inflow
is infiltration from the surface. Outflows include percolation to a groundwater layer and ET. The soil
profile zone is divided into two regions, the upper zone and the tension zone. The upper zone is
defined as the portion of the soil profile that will lose water to ET and/or percolation. The tension
zone is defined as the area that will lose water to ET only. The upper zone represents water held in
the pores of the soil. The tension zone represents water attached to soil particles. ET occurs from the
upper zone first and tension zone last. Furthermore, ET is reduced below the potential rate occurring
from the tension zone, as shown in the figure below. This represents the natural increasing resistance
in removing water attached to soil particles. ET can also be limited to the volume available in the
upper zone during specified winter months, depicting the end of transpiration by annual plants.
• Groundwater storage. Groundwater layers in the SMA represent horizontal interflow processes. The
SMA model can include either one or two such layers. Water percolates into groundwater storage
from the soil profile. The percolation rate is a function of a user-specified maximum percolation rate
and the current storage in the layers between which the water flows. Losses from a groundwater
Flow Component
The SMA model computes flow into, out of, and between the storage volumes. This flow can take the form
of:
• Precipitation, which is an input to the system of storages. Precipitation first contributes to the canopy
interception storage. If the canopy storage fills, the excess amount is then available for infiltration.
• Infiltration, which refers to the water that enters the soil profile from the ground surface. Water
available for infiltration during a time step comes from precipitation that passes through canopy
interception, plus water already in surface storage.
The volume of infiltration during a time interval is a function of the volume of water available for infiltration,
the state (fraction of capacity) of the soil profile, and the maximum infiltration rate specified by the model
user. For each interval in the analysis, the SMA model computes the potential infiltration volume, PotSoilInfl,
as:
39)
where MaxSoilInfl = the maximum infiltration rate; CurSoilStore = the volume in the soil storage at the
beginning of the time step; and MaxSoilStore = the maximum volume of the soil storage. The actual
infiltration rate, ActInfil, is the minimum of PotSoilInfil and the volume of water available for infiltration. If the
water available for infiltration exceeds this calculated infiltration rate, the excess then contributes to surface
interception storage.
The above figure illustrates the relationship of these, using an example with MaxSoilInfil = 0.5 in/hr and
MaxSoilStore = 1.5 in. As illustrated, when the soil profile storage is empty, potential infiltration equals the
maximum infiltration rate, and when the soil profile is full, potential infiltration is zero.
• Percolation, which refers to the movement of water downward from the soil profile, through the
groundwater layers, and into a deep aquifer.
In the SMA model, the rate of percolation between the soil-profile storage and a groundwater layer or
between two groundwater layers depends on the volume in the source and receiving layers. The rate is
greatest when the source layer is nearly full and the receiving layer is nearly empty. Conversely, when the
40)
where PotSoilPerc = the potential soil percolation rate; MaxSoilPerc = a user-specified maximum percolation
rate; CurSoilStore = the calculated soil storage at the beginning of the time step; MaxSoilStore = a user-
specified maximum storage for the soil profile; CurGwStore = the calculated groundwater storage for the
upper groundwater layer at the beginning of the time step; and MaxGwStore = a user-specified maximum
groundwater storage for groundwater layer 1.
The potential percolation rate computed with Equation 22 is multiplied by the time step to compute a
potential percolation volume. The available water for percolation is equal the initial soil storage plus
infiltration. The minimum of the potential volume and the available volume percolates to groundwater layer
1.
A similar equation is used to compute PotGwPerc, the potential percolation from groundwater layer 1 to layer
2:
41)
where MaxPercGw = a user-specified maximum percolation rate; CurGwStore = the calculated groundwater
storage for the groundwater layer 2; and MaxGwStore = a user-specified maximum groundwater storage for
layer 2. The actual volume of percolation is computed as described above.
For percolation directly from the soil profile to the deep aquifer in the absence of groundwater layers, for
percolation from layer 1 when layer 2 is not used, or percolation from layer 2, the rate depends only on the
storage volume in the source layer. In those cases, percolation rates are computed as
42)
and
43)
• Surface runoff, which is the water that exceeds the infiltration rate and overflows the surface storage.
This volume of water is direct runoff.
• Groundwater flow, which is the sum of the volumes of groundwater flow from each groundwater layer
at the end of the time interval. The rate of flow is computed as:
44)
where GwFlowt and GwFlowt+1 = groundwater flow rate at beginning of the time interval t and t+1,
respectively; ActSoilPerc = actual percolation from the soil profile to the groundwater layer; PotGwiPerc =
potential percolation from groundwater layer i; RoutGwiStore = groundwater flow routing coefficient from
groundwater storage i; TimeStep = the simulation time step; and other terms are as defined previously. The
volume of groundwater flow that the watershed releases, GwVolume, is the integral of the rate over the
model time interval. This is computed as
This volume may be treated as inflow to a linear reservoir model to simulate baseflow, as described in
the Linear Reservoir Model54 section.
• Evapotranspiration (ET), which is the loss of water from the canopy interception, surface depression,
and soil profile storages. In the SMA model, potential ET demand currently is computed from monthly
pan evaporation depths, multiplied by monthly-varying pan correction coefficients, and scaled to the
time interval.
The potential ET volume is satisfied first from canopy interception, then from surface interception, and finally
from the soil profile. Within the soil profile, potential ET is first fulfilled from the upper zone, then the tension
zone. If potential ET is not completely satisfied from one storage in a time interval, the unsatisfied potential
ET volume is filled from the next available storage.
When ET is from interception storage, surface storage, or the upper zone of the soil profile, actual ET is
equivalent to potential ET. When potential ET is drawn from the tension zone, the actual ET is a percentage
of the potential, computed as:
46)
where ActEvapSoil = the calculated ET from soil storage; PotEvapSoil = the calculated maximum potential
ET; and MaxTenStore = the user specified maximum storage in the tension zone of soil storage. The
function, f(·), in Equation 8 is defined as follows:
• As long as the current storage in the soil profile exceeds the maximum tension zone storage
(CurSoilStore/MaxTenStore > 1), water is removed from the upper zone at a onetoone rate,
the same as losses from canopy and surface interception.
• Once the volume of water in the soil profile zone reaches the tension zone, f(·) is determined
similar to percolation. This represents the decreasing rate of ET loss from the soil profile as
the amount of water in storage (and therefore the capillary force) decreases.
Flow into and out of storage layers is computed for each time step in the SMA model. The order of
computations in each time step depends upon occurrence of precipitation or ET, as follows:
• If precipitation occurs during the interval, ET is not modeled. Precipitation contributes first to canopy-
interception storage. Precipitation in excess of canopy-interception storage, combined with water
already in surface storage, is available for infiltration. If the volume available is greater than the
available soil storage, or if the calculated potential infiltration rate is not sufficient to deplete this
volume in the determined time step, the excess goes to surface-depression storage. When surface-
depression storage is full, any excess is surface runoff.
Infiltrated water enters soil storage, with the tension zone filling first. Water in the soil profile, but not in the
tension zone, percolates to the first groundwater layer. Groundwater flow is routed from the groundwater
layer 1, and then any remaining water may percolate to the groundwater layer 2. Percolation from layer 2 is to
a deep aquifer and is lost to the model.
• If no precipitation occurs, ET is modeled. Potential ET is satisfied first from canopy storage, then
from surface storage. Finally, if the potential ET is still not satisfied from surface sources, water is
54 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/baseflow/linear-reservoir-model
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the amounts of storage within
the soil, groundwater 1, and groundwater 2 layers that are initially filled [percent], the maximum infiltration
rate [in/hr or mm/hr], directly connected impervious area [percent], the maximum soil storage [in or mm],
tension storage [in or mm], the maximum soil percolation rate [in/hr or mm/hr], the maximum groundwater
layer 1 storage [in], the maximum groundwater layer 1 percolation rate [in/hr or mm/hr], the groundwater
layer 1 coefficient [hr], the maximum groundwater layer 2 storage [in], the maximum groundwater layer 2
percolation rate [in/hr or mm/hr], and the groundwater 2 layer coefficient [hr].
The amount of initial storage refers to the initial saturation of each layer at the beginning of a simulation and
should be determined through model calibration. The maximum infiltration rate sets the upper bound on
infiltration from the surface storage into the soil. This is the upper bound on infiltration; the actual infiltration
in a particular time interval is a linear function of the surface and soil storage, if a surface method is
selected. Without a selected surface method, water will always infiltrate at the maximum rate. Soil storage
represents the total storage available in the soil layer. Tension storage specifies the amount of water
storage in the soil that does not drain under the effects of gravity. Percolation from the soil layer to the
upper groundwater layer will occur whenever the current soil storage exceeds the tension storage. Water in
tension storage is only removed by ET. By definition, tension storage must be less than soil storage. The
soil percolation sets the upper bound on percolation from the soil storage into the upper groundwater layer.
The actual percolation rate is a linear function of the current storage in the soil and the current storage in the
upper groundwater layer. The maximum groundwater layer 1 storage represents the total storage in the
upper groundwater layer. The groundwater layer 1 percolation rate sets the upper bound on percolation from
the upper groundwater into the lower groundwater layer. The groundwater layer 2 layer percolation rate sets
the upper bound on deep percolation out of the system. The aforementioned parameters are typically
estimated using the predominant soil texture and literature values. The groundwater layer 1 and
groundwater layer 2 coefficients are used as the time lag on a linear reservoir for transforming water in
storage to lateral outflow.
The values presented here are meant as initial estimates. This is the same for all sources of similar data
including Engineer Manual 1110-2-1417 Flood-Runoff Analysis55 and the Introduction to Loss Rate
Tutorials56. Regardless of the source, these initial estimates must be calibrated and validated.
55 https://www.publications.usace.army.mil/Portals/76/Publications/EngineerManuals/EM_1110-2-1417.pdf?ver=VFC-
A5m2Q18fxZsnv19U8g%3d%3d
56 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-loss-methods-within-hec-hms/introduction-to-the-loss-
rate-tutorials
Initial and Constant • "Mature" method that has been • Difficult to apply to ungaged
used successfully in thousands of areas due to lack of direct
studies throughout the U.S. physical relationship of
parameters and watershed
• Easy to set up and use.
properties.
• Parameters can be related to
• Method may be too simple to
predominant soil textures and
predict losses within event, even
estimated using multiple
if it does predict total losses well.
literature sources.
• Does not allow for continuous
• Method is parsimonious; it
simulation.
includes only a few parameters
necessary to explain the variation • Does not allow for surface
of runoff volume. storage to occur prior to soil
saturation.
Deficit and Constant • Similar to advantages of the Initial • Similar to disadvantages of the
and Constant method. Initial and Constant method.
• Method is scalable in that it
allows for continuous simulation
(but is not required for use).
Green and Ampt • Parameters can be related to • Not widely used, so less mature.
predominant soil textures and
• Not as much experience in
estimated using multiple
professional community as
literature sources.
simpler methods.
• Predicted values are in
• Less parsimonious than simpler
accordance with classical
methods.
unsaturated flow theory (good for
ungaged watersheds). • Does not allow for continuous
simulation.
• Allows for surface storage to
occur prior to soil saturation.
SCS Curve Number • Simple, predictable, and stable • Predicted values not in
method. accordance with classical
unsaturated flow theory
• Relies on only one parameter,
(infiltration rate will approach
which varies as a function of soil
zero during a storm of long
group, land use and treatment,
duration rather than a constant
surface condition, and antecedent
rate).
moisture condition.
• Developed with data from small
• Features readily understood and
agricultural watersheds in
well-documented.
midwestern U.S., so applicability
• Well established method widely elsewhere is uncertain.
accepted for use in U.S. and
• Default initial abstraction (0.2*S)
abroad.
does not depend upon storm
• Parameters can be related to characteristics or timing.
predominant soil group/land use
• Rainfall intensity is not
and estimated using multiple
considered when computing
literature sources.
losses (i.e., the same loss volume
will be calculated for 1 in rainfall
distribution over 1 hour or 1 day).
• Does not allow for continuous
simulation.
• Does not allow for surface
storage to occur prior to soil
saturation.
Soil Moisture • Parameters can be estimated for • Not widely used, so less mature,
Accounting ungaged watersheds from not as much experience in
information about soils. professional community.
• Predicted values are in • Features not widely understood.
accordance with classical
• Less parsimonious than simple
unsaturated flow theory (good for
empirical methods.
ungaged watersheds).
• Allows for continuous simulation.
• Allows for surface storage to
occur prior to soil saturation.
Canopy Interception
• Dynamic Canopy
• Simple Canopy
With each method, canopy interception is found for each computation time interval and is then subtracted
from the precipitation depth for that interval. The remaining depth is referred to as canopy overflow. This
depth is considered uniformly distributed over a subbasin or grid cell, depending upon the chosen method, so
it represents a volume of runoff. This runoff volume is then passed onto the surface and/or loss (infiltration)
methods.
Required Parameters
Simple Canopy
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the initial storage [%] or depth [in
or mm], crop coefficient [unitless], maximum storage [in or mm], evapotranspiration Coincidence method,
and uptake method.
The initial storage or depth defines the starting saturation of the canopy. This parameter is a function of the
antecedent canopy moisture content at the beginning of the simulation. It may be estimated in the same
manner as the initial abstraction for loss models.
The crop coefficient is a ratio applied to the potential evapotranspiration (computed in the Meteorologic
Model) when computing the amount of water to actually extract from the soil. This canopy is typically
initially estimated using land use estimates but it should be calibrated using observed data.
The maximum storage defines the maximum volume of water that can be held within the canopy. This value
is typically initially estimated using land use estimates but it should be calibrated using observed data.
The initial storage must be less than or equal to the maximum storage. Both parameters are
specified as effective depths (e.g., inches or millimeters).
The Evapotranspiration Coincidence method defines when infiltrated water will be extracted from the surface
and/or soil. The Only Dry Periods method will result in evapotranspiration only occurring during time steps
with no precipitation and/or snowmelt. The Wet and Dry Periods method will allow for evapotranspiration
during periods of both precipitation/snowmelt and no precipitation/snowmelt.
The Wet and Dry Periods method can improve simulated results when using a long
computational time interval (e.g., 1-day) or during a snowmelt simulation.
The Uptake method defines if and how water will be extracted from the surface and/or soil. The Simple
method extracts water at a rate equivalent to the potential evapotranspiration and can be used with
the Deficit Constant or Soil Moisture Accounting loss methods. The Tension Reduction method can be used
with the Soil Moisture Accounting Method and extracts water at the potential evapotranspiration rate from
the gravity zone but reduces the rate when extracting from the tension zone.
No water is extracted from the soil unless the Simple or Tension Reduction method is selected.
Dynamic Canopy
Simple Canopy
Surface Storage
• Dynamic Surface
• Simple Surface
With each method, surface storage is considered uniformly distributed over a subbasin or grid cell,
depending upon the chosen method, so it represents an equivalent depth (or volume).
Gridded implementations of both the Dynamic Surface and Simple Surface methods are also included within
the program. These methods presume a subbasin is composed of regularly spaced cells with uniform length
and width. These methods permit the user to specify initial conditions and parameters for each grid cell
separate from the neighboring cells. All other surface methods simulate the entire subbasin with one set of
initial conditions and parameters.
Required Parameters
Dynamic Surface
Simple Surface
Transform
This chapter describes the models that simulate the process of direct runoff of excess precipitation on a
watershed. This process refers to the "transformation" of precipitation excess into point runoff. The program
provides two options for these transform methods:
• Empirical models (also referred to as system theoretic models). These are the "traditional" unit
hydrograph models. The system theoretic models attempt to establish a causal linkage between
runoff and excess precipitation without detailed consideration of the internal processes. The
equations and the parameters of the model have limited physical significance. Instead, they are
selected through optimization of some goodness-of-fit criterion.
• A conceptual model. The conceptual models included in the program are the Kinematic Wave model
of overland flow and the Two-Dimensional (2D) Diffusion Wave model. They represent, to the extent
possible, all physical mechanisms that govern the movement of the excess precipitation over the
watershed land surface (and in small collector channels in the watershed, in the case of the
Kinematic Wave transform).
where Qn = storm hydrograph ordinate at time n; Pm = rainfall excess depth in time interval m to
(m+1); M = total number of discrete rainfall pulses; and Un-m+1 = unit hydrograph ordinate at time (n-
m+1). Qn and Pm are expressed as flow rate and depth respectively, and Un-m+1 has dimensions of
flow rate per unit depth. Use of this equation requires the implicit assumptions:
1. The excess precipitation is distributed uniformly spatially and is of constant intensity throughout a
time interval Δt.
2. The ordinates of the direct runoff hydrograph are directly proportional to the volume of excess
precipitation; this is the assumption of linearity (proportionality and superposition).
3. The direct runoff hydrograph resulting from a given increment of excess is independent of the time of
occurrence of the excess and of the antecedent precipitation. This is the assumption of time-
invariance.
4. Precipitation excesses of equal duration are assumed to produce hydrographs with equivalent time
bases regardless of the intensity of the precipitation.
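A minimal sketch of the convolution Qn = sum of Pm × Un-m+1 described above, with illustrative array names (not the program's implementation):

```python
def convolve_uh(excess, uh):
    """Discrete unit hydrograph convolution: Qn = sum over m of Pm * U(n-m+1).

    excess : excess-precipitation depths per interval
    uh     : unit hydrograph ordinates (flow rate per unit depth)
    returns: direct-runoff hydrograph ordinates
    """
    n_out = len(excess) + len(uh) - 1
    q = [0.0] * n_out
    for m, p in enumerate(excess):          # each precipitation pulse
        for k, u in enumerate(uh):          # lag its response by m intervals
            q[m + k] += p * u
    return q


# Example: two pulses of excess applied to a three-ordinate unit hydrograph
print(convolve_uh([0.5, 1.0], [10.0, 30.0, 5.0]))
```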
A synthetic unit hydrograph relates the parameters of a parametric unit hydrograph model to watershed
characteristics. By using the relationships, it is possible to develop a unit hydrograph for watersheds or
conditions other than the watershed and conditions originally used as the source of data to derive the UH.
For example, a synthetic unit hydrograph model may relate the unit hydrograph peak of the simple triangular
unit hydrograph to the drainage area of the watershed. With the relationship, an estimate of the unit
hydrograph peak for any watershed can be made given an estimate of the drainage area. If the time of unit
hydrograph peak and total time base of the unit hydrograph is estimated in a similar manner, the unit
hydrograph can be defined "synthetically" for any watershed. That is, the unit hydrograph can be defined in
the absence of the precipitation and runoff data necessary to derive the UH. Chow, Maidment, and Mays
(1988) suggest that synthetic unit hydrographs fall into three categories:
1. Those that relate unit hydrograph characteristics (such as unit hydrograph peak and peak time) to
watershed characteristics. The Snyder unit hydrograph is such a synthetic UH.
2. Those that are based upon a dimensionless UH. The SCS unit hydrograph is such a synthetic UH.
3. Those that are based upon models of watershed storage. The Clark unit hydrograph is such a synthetic UH.
48)
When using this method, all ordinates of the unit hydrograph are explicitly defined by the user. Modifications
to the effective duration of the specified unit hydrograph are required when the rate at which precipitation is
applied (e.g. 15-minutes) differs from the effective duration of the derived unit hydrograph (e.g. 6-hours). S-
curves (or S-graphs) are used to change the effective duration of unit hydrographs (Morgan & Hullinghorst,
1939). Within HEC-HMS, a cubic spline (along with a number of passes) can be used to smooth the S-curve.
1. Collect data for an appropriate observed storm runoff hydrograph and the causal precipitation. The
selected storm should result in approximately one unit of excess, should be uniformly distributed
over the watershed, should be uniform in intensity throughout its entire duration, and should be of a
duration sufficient to ensure that the entire watershed is responding. This duration, T, is the duration
of the unit hydrograph that will be found.
2. Estimate losses and subtract these from the precipitation. Estimate baseflow and separate this from
the runoff.
3. Calculate the total volume of direct runoff and convert this to equivalent uniform depth over the
watershed area.
4. Divide the direct runoff ordinates by the equivalent uniform depth. The result is the unit hydrograph.
Chow, Maidment, and Mays (1988) present matrix algebra, linear regression, and linear programming
alternatives to this approach. With any of these approaches, the unit hydrograph derived is appropriate only
for analysis of other storms of duration T. To apply the unit hydrograph to storms of different duration, the
unit hydrograph for these other durations must be derived. If the other durations are integral multiples of T,
the new unit hydrograph can be computed by lagging the original unit hydrograph, summing the results, and
dividing the ordinates to yield a hydrograph with volume equal one unit. Otherwise, the S-hydrograph method
can be used. This is described in detail in texts by Chow, Maidment, and Mays (1988), Linsley, Kohler, and
Paulhus (1982), Bedient and Huber (1992), and others.
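A sketch of the lag-and-sum procedure for an integer multiple of T, with illustrative names; it does not include the S-curve or cubic-spline smoothing used within HEC-HMS:

```python
def lengthen_uh(uh, multiple):
    """Convert a T-duration unit hydrograph to one of duration multiple*T by
    lagging, summing, and rescaling the ordinates to a unit volume."""
    n = len(uh) + multiple - 1
    summed = [0.0] * n
    for lag in range(multiple):                 # lag the UH by 0, 1, ... intervals
        for i, u in enumerate(uh):
            summed[lag + i] += u
    return [q / multiple for q in summed]       # divide so the volume equals one unit


# Example: convert a 1-hour unit hydrograph to a 3-hour unit hydrograph
print(lengthen_uh([10.0, 40.0, 20.0, 5.0], multiple=3))
```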
49)
where C = a coefficient that describes the hydraulic efficiency of the stream channel, L = length along the
stream centerline from the outlet to the watershed boundary [miles or kilometers], Lca = length along the
stream centerline from the outlet to a point on the stream nearest the centroid of the watershed [mi or km],
and S = slope of the main watercourse [feet/mile or m/km]. The exponents, m and p, must be derived from
gaged data within the region of interest. Computationally, the S-curve is scaled by tlag and successive
differences are taken along the curve to compute a unit hydrograph. The resultant unit hydrograph is then
used to route excess precipitation to the watershed outlet.
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the percentage curve (which is a
paired data object), a lag time [hours] (when using the Standard method), and/or the aforementioned C, L,
Lca, and S coefficients and m and p exponents (when using the Regression method).
• Time of concentration (Tc), which is equivalent to the time it takes for excess precipitation to travel
from the hydraulically-most remote point of the watershed to the outlet,
• Watershed storage coefficient (R), which is equivalent to attenuation due to storage effects
throughout the watershed (Kull & Feldman, 1998), and
50)  At / A = 1.414 (t / Tc)^1.5 for 0 ≤ t/Tc ≤ 0.5;  At / A = 1 - 1.414 (1 - t/Tc)^1.5 for 0.5 ≤ t/Tc ≤ 1
where At = cumulative watershed area contributing at time t and A = total watershed area. This typical time-
area relationship was derived from an elliptically-shaped watershed. Through the use of this simplified time-
area histogram, only Tc and R are required to completely define the instantaneous unit hydrograph for a
watershed. As described within Modified Clark Unit Hydrograph, through the use of GIS and the Modified
Clark method, watershed-specific time-area histograms can be efficiently created and used.
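For illustration, the default time-area relationship shown above can be evaluated with a few lines of code (a sketch; names are illustrative):

```python
def default_time_area(t_frac):
    """Cumulative contributing-area fraction At/A for the default dimensionless
    time-area relationship, where t_frac = t / Tc."""
    t = min(max(t_frac, 0.0), 1.0)
    if t <= 0.5:
        return 1.414 * t ** 1.5
    return 1.0 - 1.414 * (1.0 - t) ** 1.5


# Cumulative area fractions at 10-percent increments of Tc
print([round(default_time_area(i / 10), 3) for i in range(11)])
```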
After translation, attenuation is incorporated using a linear reservoir model which begins with the continuity
equation :
51)  dS/dt = It - Ot
where dS/dt = time rate of change of water in storage at time t; It = average inflow to storage at time t; and Ot
= outflow from storage at time t. With the linear reservoir model, storage at time t can be related to outflow :
52)  St = R Ot
where R = a constant linear reservoir parameter. Combining and solving Equation 2 and Equation 3 using a
finite difference approximation yields :
53)  Ot = CA It + CB Ot-1
where CA and CB = routing coefficients. The coefficients are then calculated according to :
54)  CA = Δt / (R + 0.5 Δt)  and  CB = 1 - CA
55)  Ōt = (Ot-1 + Ot) / 2
where Ot-1 = outflow from the previous time step. Solving Equation 4 and Equation 6 recursively yields values
of Ōt. However, if the inflow ordinates in Equation 4 are runoff from a unit of excess, the values of Ōt are,
in fact, a unit hydrograph. As the solution is recursive, outflow will theoretically continue for an infinite
duration. Within HEC-HMS, computation of the unit hydrograph ordinates continues until the volume of the
outflow exceeds 0.995 inches or mm. The unit hydrograph ordinates are then adjusted using a depth-
weighted consideration to produce a unit hydrograph with a volume exactly equal to one unit of depth.
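The translation and recursive routing described above can be sketched as follows. The routing-coefficient form (CA = Δt/(R + 0.5Δt), CB = 1 - CA), the 0.995 volume cutoff, and the final rescaling follow the description above; other details, such as averaging of outflow within a step, are simplified.

```python
def clark_uh(tc, r, dt, time_area):
    """Sketch of a Clark unit hydrograph (ordinates as volume fractions per step).

    tc, r, dt : time of concentration, storage coefficient, time step (same units)
    time_area : function returning the cumulative area fraction At/A for t/Tc in [0, 1]
    """
    ca = dt / (r + 0.5 * dt)              # routing coefficients (assumed form)
    cb = 1.0 - ca

    # Translation: incremental contributing volume in each time step
    n_trans = int(tc / dt) + 1
    inflow, prev = [], 0.0
    for i in range(1, n_trans + 1):
        cum = time_area(min(i * dt / tc, 1.0))
        inflow.append(cum - prev)
        prev = cum

    # Attenuation: route through the linear reservoir until 99.5% of the
    # volume has appeared, then rescale to exactly one unit of volume.
    uh, out, volume, step = [], 0.0, 0.0, 0
    while volume < 0.995:
        i_t = inflow[step] if step < len(inflow) else 0.0
        out = ca * i_t + cb * out
        uh.append(out)
        volume += out
        step += 1
    return [o / volume for o in uh]


# Example: Tc = 3 hr, R = 2 hr, 1-hour step, linear time-area curve for brevity
print(clark_uh(3.0, 2.0, 1.0, lambda f: f))
```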
Additional discussion regarding this method, including means by which Variable Clark
parameters can be estimated for ungaged locations in California, can be found here: Bartles and
Meyersohn (2023)57. A tutorial demonstrating the usage of Variable Tc and R relationships can
be found here: Applying the Variable Clark Unit Hydrograph Method58.
56)
57)
where L is the length of the hydraulically longest flow path [mi or km], S is the watercourse slope of the
longest flow path [ft/mi or m/km], Kb is the resistance coefficient, and i is the average excess precipitation
intensity [in/hr or mm/hr]. This approach should only be utilized for simulations within Maricopa County, AZ.
Required Parameters
Parameters that are required to utilize the “Standard” Clark method within HEC-HMS include Tc [hours], R
[hours], and an optional time-area curve. Tc and R are commonly estimated using watershed characteristics
and regression equations. If the “Variable Parameter” method is chosen, an index excess precipitation
[inches], an excess precipitation vs. Tc relationship, and an excess precipitation vs. R relationship are required.
57 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstr/files/76908661/139730202/1/1684268892597/
Bartles_Meyersohn_CA_Unit_Hydrograph_Regression_Equations.pdf
58 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/hec-hms-example-applications/hec-hms-examples-for-typical-dsod-
applications/w6-applying-the-variable-clark-unit-hydrograph-method
Estimating Parameters
The Clark method employs several parameters including Tc, R, and a time-area histogram. Some of these
parameters have physical meaning while others do not. Due to that fact, differing methods have been used
in practice to initially estimate these parameters. Also, modifications to these parameters have been found
to be warranted for use within extreme event simulations to account for non-linear routing phenomena that
have been observed. Initial parameter estimates are typically made using GIS and various terrain, land use,
hydrography, and watershed boundary layers.
The most commonly used terrain data is distributed by the USGS as the National Elevation Dataset (NED).
The NED includes multiple layers at varying horizontal resolutions. Typically, a horizontal resolution of 1/3rd
arc second (approximately 10 meters) is appropriate for use within hydrologic applications that encompass
100 to 1,000 square miles. However, the most appropriate horizontal resolution for each application is
specific to the study and/or watershed physical characteristics.
The most commonly used hydrography and watershed boundary data is also distributed by the USGS as the
National Hydrography Dataset (NHD) and the Watershed Boundary Dataset (WBD). The NHD and WBD also
include multiple versions at varying resolutions. The NED, NHD, and WBD can be accessed through the
USGS National Map: “https://viewer.nationalmap.gov/basic/”.
Time of Concentration
Tc can be estimated for a watershed in multiple ways. The most commonly used methods include:
• Using regional regression equations which were developed from observed data in a similar region.
Travel time, Tt, refers to the amount of time necessary for runoff to move from one location to another within
a watershed and can be computed using the following:
58)  Tt = L / (3600 V)
where Tt = travel time [hours], L = flow length [feet], 3600 = conversion factor from seconds to hours, and V =
average velocity [feet/second]. An average velocity can be estimated using multiple approaches including
simplistic nomographs (Natural Resources Conservation Service, 1999), Kinematic Wave Theory (Hydrologic
Engineering Center, 1993), and/or Manning’s equation.
However, as runoff moves down gradient, predominant flow regimes tend to change due to factors like
channel shape, slope, roughness, and contributing drainage area. For instance, runoff that begins as sheet
flow typically transitions to shallow concentrated flow and then to open channel flow; Tc can then be
computed as the sum of the travel times for each flow regime:
59 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/estimating-clark-unit-hydrograph-parameters
59)  Tc = Tsheet + Tshallow + Tchannel
where Tsheet = sum of travel time in sheet flow segments, Tshallow = sum of travel time in shallow flow
segments (e.g. streets, gutters, shallow rills, etc), Tchannel = sum of travel time in open channel flow
segments. Equation 7 assumes that Tc is derived from the longest flow path within the watershed.
Sheet flow can be conceptualized as flow that moves over planar surfaces with flow depths that are less
than 0.1 feet (Natural Resources Conservation Service, 1999). An approximation of the Kinematic Wave
equations can be used to estimate sheet flow travel time [hours]:
60)  Tsheet = 0.007 (N L)^0.8 / (P2-10^0.5 Sf^0.4)
where N = overland flow roughness coefficient, L = flow length [feet], P2-10 = ½ annual exceedance probability
24-hour duration rainfall [inches], and Sf = friction slope [ft/ft]. The overland flow roughness coefficient is
typically much larger than an equivalent Manning’s roughness coefficient for the same land use/channel
material. Typical overland flow roughness coefficients for multiple land uses are presented in Technical
Document 10 (Hydrologic Engineering Center, 1993) and (Natural Resources Conservation Service, 1999).
After a short distance, sheet flow transitions to shallow concentrated flow. The distance over which sheet
flow occurs is commonly assumed to be less than 300 feet but can vary depending upon site-specific
conditions (Natural Resources Conservation Service, 1999). Relationships presented in TR-55 can be used to
estimate the average velocity for shallow concentrated flow:
61)  Vshallow = 16.1345 (S)^0.5 for unpaved surfaces, or Vshallow = 20.3282 (S)^0.5 for paved surfaces
where Vshallow = average velocity in shallow flow segments [ft/s] and S = watercourse slope [ft/ft]. Once
Vshallow has been calculated, the travel time of shallow concentrated flow can be estimated. The point at
which shallow concentrated flow transitions to open channel flow is typically assumed to exist where
evidence of channels can be obtained from field surveys, maps, or aerial photographs. However, this
transition can vary depending upon site-specific conditions.
The average velocity for open channel flow can be estimated using Manning's equation and a normal depth
assumption. Manning’s roughness coefficients for common channel and overbank materials are presented
in numerous sources including Chow (1959). Once Vchannel has been calculated, the travel time of channel
flow can also be estimated. Finally, once all travel times have been computed, Tc can be estimated. An
example application of this methodology is shown in Figure below.
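A sketch of the segment approach, assuming the TR-55 sheet-flow relationship and the TR-55 velocity for unpaved shallow concentrated flow; the channel velocity is supplied directly (for example, from Manning's equation). Names and example values are illustrative.

```python
def time_of_concentration(sheet, shallow, channel):
    """Tc [hr] = sum of sheet, shallow-concentrated, and channel travel times.

    sheet   : (N, L_ft, P2_in, slope)   -- TR-55 sheet-flow segment
    shallow : (L_ft, slope)             -- unpaved shallow concentrated flow
    channel : (L_ft, V_ft_per_s)        -- open channel (velocity from Manning)
    """
    n, l_sheet, p2, s_sheet = sheet
    t_sheet = 0.007 * (n * l_sheet) ** 0.8 / (p2 ** 0.5 * s_sheet ** 0.4)

    l_shallow, s_shallow = shallow
    v_shallow = 16.1345 * s_shallow ** 0.5          # TR-55 unpaved relationship
    t_shallow = l_shallow / (3600.0 * v_shallow)    # Tt = L / (3600 V)

    l_channel, v_channel = channel
    t_channel = l_channel / (3600.0 * v_channel)

    return t_sheet + t_shallow + t_channel


# Example: 150 ft of dense-grass sheet flow, 1,000 ft of shallow flow,
# and 8,000 ft of channel flow at 4 ft/s
print(time_of_concentration((0.24, 150, 3.0, 0.02), (1000, 0.01), (8000, 4.0)))
```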
Finally, similar to the Snyder method, Tc can also be estimated through multi-linear regression analyses using
various watershed physical characteristics and combinations. An example of this approach is shown within
Figure below. The use of the results from regression analyses has the added advantage of allowing
parameter estimation within ungaged watersheds. Typical relationships derived to estimate Tc [hours] follow
the form:
62)  Tc = C (L Lca / S10-85^0.5)^X
where C and X are parameters derived from gaged data in the same region, L = length along the stream
centerline from the outlet to the watershed boundary [miles], Lca = length along the stream centerline from
the outlet to a point on the stream nearest the centroid of the watershed [miles], and S10-85 = stream slope
between points at 10- and 85-percent of the total distance [ft/mi].
When large standing bodies of water exist within the watershed of interest, flood waves may move
downstream more rapidly than predicted within the aforementioned approach. As such, the application of
the aforementioned approach may result in an over prediction of Tc. This potential over prediction can be
addressed in one or more ways including: 1) refinement of initial estimates through model calibration and
validation, 2) adding one or more additional flow regime transitions where flood waves are expected to travel
through the standing bodies of water at a faster rate, 3) reducing the length over which open channel flow is
assumed to occur, and/or 4) further discretizing the watershed (i.e. subdividing).
Though R has units of time, there is only a qualitative meaning for it in a physical sense. Clark indicated that
R can be computed as the flow at the inflection point on the falling limb of the hydrograph divided by the time
derivative of flow (Clark, 1945). This parameter is commonly estimated through multi-linear regression
analyses in conjunction with Tc using the following:
63)  R / (Tc + R) = X
where X = a coefficient that is determined through regional analyses. Smaller values of X result in short,
steeply rising unit hydrographs and may be representative of urban watersheds. Larger values of X result in
broad, slowly rising unit hydrographs and may be representative of flat, swampy watersheds. Values for X
have been shown to vary due to factors like predominant channel shape, slope, and roughness; however, this
coefficient has been found to be fairly constant on a regional basis (Hydrologic Engineering Center, 1988),
(U.S. Army Corps of Engineers, 1994), and (Hydrologic Engineering Center, 2001). Regional regression
equations for estimating watershed storage coefficients for the multiple hydrologic regions have been
developed for California.
Time-Area Histogram
Studies at HEC have shown that a smooth function fitted to a typical time-area relationship can oftentimes
adequately represent the temporal distribution of flow (i.e. translation) for most watersheds. A default time-
area relationship is included within HEC-HMS and is shown in Equation 1. If a site-specific time-area
histogram is required, it can be developed by demarcating lines of equal travel time, which are called
“isochrones”, to divide the watershed. Then, the watershed area encompassed between successive isochrones
can be measured and expressed as a fraction of the total watershed area to construct the histogram.
As previously mentioned, when large standing bodies of water exist within the watershed of interest, flood
waves may move downstream more rapidly than predicted within the aforementioned approach. As such,
the temporal distribution of flow may be over predicted. This potential over prediction can be addressed in
one or more ways including: 1) refinement of initial parameter estimates through model calibration and
validation, 2) further discretizing the watershed (i.e. subdividing), and/or 3) modifying the time-area
histogram to reduce the travel time through the standing body of water.
ModClark Model
A distributed parameter model is one in which spatial variability of characteristics and processes are
considered explicitly. The modified Clark (ModClark) model is such a model (Kull and Feldman, 1998; Peters
and Easton, 1996). This model accounts explicitly for variations in travel time to the watershed outlet from all
regions of a watershed.
64)  tcell = Tc (dcell / dmax)
where tcell = travel time for a grid cell, Tc = time of concentration for the watershed, dcell = travel distance
from the cell to the subbasin outlet, and dmax = travel distance for the cell that is most distant from the outlet.
Required Parameters
Parameters that are required to utilize the ModClark method within HEC-HMS include Tc [hours], R [hours],
and a gridded representation of the watershed. This gridded representation must use one of the following
systems:
• Hydrologic Rainfall Analysis Project (HRAP), which is based on the Polar Stereographic map
projection and is most commonly used by the National Weather Service
• Standard Hydrologic Grid (SHG), which is defined for the conterminous United States and is based on
the Albers Equal Area Conic map projection
• SHG grid resolutions of 10, 20, 50, 100, 200, 500, 1000, 2000, 5000, and 10000 meters are
supported.
• Universal Transverse Mercator (UTM), which uses a transverse Mercator projection and divides the
Earth into 60 zones, each being six degrees longitude in width (Hydrologic Engineering Center, 2013)
The gridded representation of the watershed must contain the following four items:
• Area
HEC-HMS version 4.4 (or later) should be used to create this gridded file. An example of this
process can be found here: Applying Gridded Precipitation to a Non-Georeferenced Project -
Structured Discretization60.
An example of this gridded data, using the SHG system with 2000 meter x 2000 meter grid cells, is shown
below.
60 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/working-with-gridded-boundary-condition-data/applying-gridded-
precipitation-to-a-non-georeferenced-project-structured-discretization
65)
Conceptually, tp is the difference in time between the centroid of excess precipitation and the time of qp, as
shown below.
61 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstr/files/76908661/139730202/1/1684268892597/
Bartles_Meyersohn_CA_Unit_Hydrograph_Regression_Equations.pdf
62 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/hec-hms-example-applications/hec-hms-examples-for-typical-dsod-
applications/w6-applying-the-variable-clark-unit-hydrograph-method
66)
where tR = lag of the desired unit hydrograph [hours]. For the standard case, Snyder found that qp can be
computed as:
67)
where Cp = a coefficient derived from gaged data in the same region. For the non-standard case, the peak
discharge per unit area of a desired unit hydrograph, qpR, can be computed as:
68)
In the case of a standard unit hydrograph, Equation 1 and Equation 3 can then be solved to determine tp and
qp. In the non-standard case, Equation 2 and Equation 4 can be used to determine tp and qp. As a final step,
a curve with a unit depth of runoff must be fit through the previously computed ordinates. Snyder proposed
a relationship with which the total time base of the unit hydrograph could be defined.
Instead of this relationship, within HEC-HMS, an equivalent Clark synthetic unit hydrograph is
determined through an optimization routine and utilized in subsequent precipitation-runoff
computations.
Estimating Parameters
Snyder collected rainfall and runoff data from gaged watersheds, derived the unit hydrograph as described
earlier, parameterized these unit hydrographs, and related the parameters to measurable watershed
characteristics. For the unit hydrograph lag, he proposed:
69)  tp = C Ct (L Lc)^0.3
where Ct = basin coefficient; L = length of the main stream from the outlet to the divide; Lc = length along the
main stream from the outlet to a point nearest the watershed centroid; and C = a conversion constant (0.75
for SI and 1.00 for foot-pound system). The parameters Ct and Cp are best
found via calibration, as they are not physically-based parameters. Bedient and Huber (1992) report that Ct
typically ranges from 1.8 to 2.2, although it has been found to vary from 0.4 in mountainous areas to 8.0
along the Gulf of Mexico. They report also that Cp ranges from 0.4 to 0.8, where larger values of Cp are
associated with smaller values of Ct.
Alternative forms of the parameter predictive equations have been proposed. For example, the Los Angeles
District, USACE (1944) has proposed to estimate tp as:
70)
where S = overall slope of longest watercourse from point of concentration to the boundary of drainage
basin; and N = an exponent, commonly taken as 0.33. Others have proposed estimating tp as a function of
tc , the watershed time of concentration (Cudworth, 1989; USACE, 1987). Time of concentration is the time of
flow from the most hydraulically remote point in the watershed to the watershed outlet, and may be
estimated with simple models of the hydraulic processes. Various studies estimate tp as 50-75% of tc.
71)  Tp = tr / 2 + tp
in which tr = duration of excess precipitation (or computational time step) and tp = the basin lag which is
defined as the time difference between the center of mass of excess precipitation and the peak of the unit
hydrograph. Furthermore, the peak discharge of the unit hydrograph, Qp [cubic feet / second] can be related
to the watershed area, A [square miles], and Tp [hours] using the following relationship:
72)  Qp = PRF A / Tp
where PRF is a constant which is usually termed the “peak rate factor”. Given tp, the first of the preceding
equations can be solved to determine Tp. Then, given a PRF, the second can be solved to find Qp. The entire
unit hydrograph can then be found by scaling the dimensionless curvilinear form by Qp and Tp.
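A sketch of the two relationships above (Tp = tr/2 + tp and Qp = PRF A / Tp), with illustrative names:

```python
def scs_uh_peak(lag_hr, excess_duration_hr, area_sq_mi, prf=484.0):
    """Time to peak [hr] and peak discharge [cfs] of an SCS unit hydrograph."""
    time_to_peak = excess_duration_hr / 2.0 + lag_hr      # Tp = tr/2 + tp
    qp = prf * area_sq_mi / time_to_peak                  # Qp = PRF * A / Tp
    return time_to_peak, qp


# Example: 3-hour lag, 30-minute excess duration, 25 square miles, standard PRF
print(scs_uh_peak(lag_hr=3.0, excess_duration_hr=0.5, area_sq_mi=25.0))
```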
The standard dimensionless SCS curvilinear unit hydrograph is created by setting the PRF equal to
approximately 484. However, the PRF constant has been shown to vary from about 600 in steep terrain to
100 or less in flat areas. Various dimensionless unit hydrographs with predefined peak rate factors are
presented in the NRCS National Engineering Handbook (2007). A change in the peak rate factor causes a
change in the percent of runoff occurring before Tp, which is typically not uniform across all watersheds
because it depends on flow length, ground slope, and other properties of the watershed. By changing PRF,
alternate unit hydrographs can be computed for watersheds with varying topography and other conditions
that affect runoff.
Required Parameters
Parameters that are required to utilize the SCS method within HEC-HMS include a PRF and a lag time
[minutes]. Research has shown that tp can be related to the watershed time of concentration, Tc, using
(Natural Resources Conservation Service, 1999):
73)  tp = 0.6 Tc
Estimating Parameters
The SCS UH lag can be estimated via calibration for gaged headwater subwatersheds. Time of
concentration is a quasi-physically based parameter that can be estimated as:
74)  tc = tsheet + tshallow + tchannel
where tsheet = sum of travel time in sheet flow segments over the watershed land surface; tshallow = sum of
travel time in shallow flow segments, down streets, in gutters, or in shallow rills and rivulets; and tchannel =
sum of travel time in channel segments. Identify open channels where cross section information is available.
75)  V = (C / n) R^(2/3) S^(1/2)
where V = average velocity; R = the hydraulic radius (defined as the ratio of channel cross-section area to
wetted perimeter); S = slope of the energy grade line (often approximated as channel bed slope); and C =
conversion constant (1.00 for SI and 1.49 for foot-pound system.) Values of n, which is commonly known as
Manning's roughness coefficient, can be estimated from textbook tables, such as that in Chaudhry (1993).
Once velocity is thus estimated, channel travel time is computed as:
76)  tchannel = L / V
where L = channel length. Sheet flow is flow over the watershed land surface, before water reaches a
channel. Distances are short—on the order of 10-100 meters (30-300 feet). The SCS suggests that sheet-flow
travel time can be estimated as:
77)  Tsheet = 0.007 (N L)^0.8 / (P2^0.5 S^0.4)
in which N = an overland-flow roughness coefficient; L = flow length; P2 = 2-year, 24-hour rainfall depth, in
inches; and S = slope of hydraulic grade line, which may be approximated by the land slope. This estimate is
based upon an approximate solution of the kinematic wave equations, which are described later in this
chapter. The table below shows values of N for various surfaces. Sheet flow usually turns to shallow
concentrated flow after 100 meters. The average velocity for shallow concentrated flow can be estimated as:
78)
From this, the travel time can be estimated with Equation above.
Surface Description                                                           N
Cultivated soils:
Grass:
   Dense grasses, including species such as weeping love grass, bluegrass,    0.24
   buffalo grass, and native grass mixtures
   Bermudagrass                                                               0.41
Range                                                                         0.13
Woods¹
Notes:
¹ When selecting N, consider cover to a height of about 0.1 ft. This is the only part of the plant cover that will
obstruct sheet flow.
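For illustration, Manning's equation and the travel-time relationship above can be evaluated as follows (a sketch with illustrative names; foot-pound units unless noted):

```python
def manning_velocity(n, hydraulic_radius, slope, si_units=False):
    """Average velocity from Manning's equation, V = (C/n) R^(2/3) S^(1/2)."""
    c = 1.00 if si_units else 1.49
    return (c / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5


def channel_travel_time_hr(length_ft, velocity_ft_s):
    """Channel travel time [hr] = L / (3600 V)."""
    return length_ft / (3600.0 * velocity_ft_s)


# Example: n = 0.035, R = 2 ft, S = 0.002 ft/ft, 10,000 ft reach
v = manning_velocity(0.035, 2.0, 0.002)
print(v, channel_travel_time_hr(10000.0, v))
```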
79)  Sf = S0 - ∂y/∂x - (V/g)(∂V/∂x) - (1/g)(∂V/∂t)
where Sf = energy gradient (or friction slope) [ft/ft or m/m], S0 = channel slope [ft/ft or m/m], V = velocity [ft/
sec or m/sec], y = hydraulic depth [ft or m], x = distance along the flow path [ft or m], t = time [sec], g =
acceleration due to gravity [ft/sec² or m/sec²], ∂y/∂x = pressure gradient [ft/ft or m/m], (V/g)(∂V/∂x) =
convective acceleration [ft/sec² or m/sec²], and (1/g)(∂V/∂t) = local acceleration [ft/sec² or m/sec²]. Sf can
be approximated using Manning’s equation:
80)
81)
82)  A(∂V/∂x) + VB(∂y/∂x) + B(∂y/∂t) = q
where B = top width [ft or m], A(∂V/∂x) = prism storage, VB(∂y/∂x) = wedge storage, B(∂y/∂t) = rate of rise,
and q = lateral inflow per unit length [ft³/sec/ft or m³/sec/m]. Simplifying to shallow flow over a planar surface
reduces Equation 4 to:
83)  ∂A/∂t + ∂Q/∂x = q
84)  Q = α A^m
Equation 6 represents the kinematic wave approximation of the equations of motion. HEC-HMS represents
the overland flow component as a wide, rectangular channel of unit width such that m = 5/3, and:
85)  α = 1.486 S^(1/2) / N
The kinematic wave method allows for the representation of variable scales of complexity within a
watershed. A simple representation is shown in Figure below where one or two overland planes and a
channel are included.
A complex representation is shown in Figure below where overland planes, subcollectors, collectors, and a
channel are included. For a detailed discussion of this method, the reader is directed to HEC (1993).
The availability of a circular channel shape here does not imply that HEC-HMS can be used for
analysis of pressure flow in a pipe system; it cannot. Note also that the circular channel shape
only approximates the storage characteristics of a pipe or culvert. Because flow depths greater
than the diameter of the circular channel shape can be computed with the kinematic-wave
model, the user must verify that the results are appropriate.
• The resulting algebraic equations are solved to find unknown hydrograph ordinates.
The overland-flow plane initial condition sets A, the area in Equation 6, equal to zero, with no inflow at the
upstream boundary of the plane. The initial and boundary conditions for the kinematic wave channel model
are based on the upstream hydrograph. Boundary conditions, either precipitation excess or lateral inflows,
are constant within a time step and uniformly distributed along the element.
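The following is a minimal explicit sketch of the kinematic wave pair (continuity with Q = αA^m) for a single overland flow plane; it is illustrative only and does not reproduce the program's standard- and conservation-form schemes or its stability checks.

```python
def kinematic_wave_plane(alpha, m, length, n_cells, dt, n_steps, lateral_inflow):
    """Explicit upwind sketch of dA/dt + dQ/dx = q with Q = alpha * A**m.

    lateral_inflow(j) returns the excess inflow rate per unit length of plane
    during step j (for a unit-width plane this equals the excess intensity).
    Returns the outflow per unit width at the downstream end for each step.
    """
    dx = length / n_cells
    a = [0.0] * (n_cells + 1)            # initial condition: dry plane (A = 0)
    outflow = []
    for j in range(n_steps):
        q_lat = lateral_inflow(j)
        q = [alpha * ai ** m for ai in a]
        new_a = a[:]
        for i in range(1, n_cells + 1):  # upstream boundary (a[0]) stays zero
            new_a[i] = max(a[i] - dt / dx * (q[i] - q[i - 1]) + q_lat * dt, 0.0)
        a = new_a
        outflow.append(alpha * a[-1] ** m)
    return outflow


# Example: 30 minutes of constant excess on a 500 ft overland flow plane
hydro = kinematic_wave_plane(alpha=1.0, m=5.0 / 3.0, length=500.0, n_cells=10,
                             dt=10.0, n_steps=360,
                             lateral_inflow=lambda j: 2e-6 if j < 180 else 0.0)
print(max(hydro))
```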
Kinematic wave parameters for various channel shapes (USACE, 1998)
Circular Section
Triangular Section
Square Section
Rectangular Section
Trapezoidal Section
86)
Equation 8 is the so-called standard form of the finite-difference approximation. The indices of the
approximation refer to positions on a space-time grid, as shown in Figure below. That grid provides a
convenient way to visualize the manner in which the solution scheme solves for unknown values of A at
various locations and times. The index i indicates the current location at which A is to be found along the
length, L, of the channel or overland flow plane. The index j indicates the current time step of the solution
scheme. Indices i-1 and j-1 indicate, respectively, positions and times removed a distance Δx and a time Δt
from the current location and time in the solution scheme.
With the solution scheme proposed, the only unknown value in Equation 8 is the current value at a given
location, Ai,j. All other values of A are known from either a solution of the equation at a previous location
and time, or from an initial or boundary condition. The program solves for the unknown Ai,j as:
87)
88)
This standard form of the finite difference equation is applied when the following stability factor, R, is less
than 1.00 (see Alley and Smith, 1987):
89)
or
90)
91)
where the value at the current location and time step is the only unknown. This is referred to as the
conservation form. Solving for the unknown
yields:
92)
93)
• Subcollector channels: these are small feeder pipes or channels, with principal dimension generally
less than 18 inches, that convey water from street surfaces, rooftops, lawns, and so on. They might
service a portion of a city block or housing tract, with an area of 10 acres. Flow is assumed to enter the
channel uniformly along its length. The average contributing area for each subcollector channel must
be specified. Column 2 of the table below shows information that must be provided about the
subcollector channels.
• Collector channels: these are channels, with principal dimension generally 18-24 inches, which collect
flows from subcollector channels and convey it to the main channel. Collector channels might service
an entire city block or a housing tract, with flow entering laterally along the length of the channel. As
with the subcollectors, the average contributing area for each collector channel is required. Column 2
of the table below shows information that must be provided about the collector channels.
• The main channel: this channel conveys flow from upstream subwatersheds and flows that enter
from the collector channels or overland flow planes. Column 3 of the table below shows information
that must be provided about the main channel.
The choice of elements to describe any watershed depends upon the configuration of the drainage system.
The minimum configuration is one overland flow plane and the main channel, while the most complex would
include two planes, subcollectors, collectors, and the main channel. The planes and channels are described
by representative slopes, lengths, shapes, and contributing areas. Publications from HEC (USACE, 1979;
USACE, 1998) provide guidance on how to choose values and give examples. The roughness coefficients for
both overland flow planes and channels commonly are estimated as a function of surface cover, using, for
example, Table 17, for overland flow planes and the tables in Chow (1959) and other texts for channel n
values.
Required Parameters
Parameters that are required to utilize the kinematic wave method within HEC-HMS include length [ft or m],
slope [ft/ft or m/m], an overland flow roughness coefficient, percentage of the total area, and number of
routing steps for each overland plane. Optional parameters for subcollectors, collectors, and channel
elements may be used as well.
Basic Concepts
The 2D Diffusion Wave Transform method explicitly routes excess precipitation throughout a subbasin
element using a combination of the continuity and momentum equations. Unlike unit hydrograph transform
methods, this transform method can be used to simulate the non-linear movement of water throughout a
subbasin when exposed to large amounts of excess precipitation (Minshall, 1960). This Transform Method
can be combined with all Canopy, Surface, and Loss methods that are currently within HEC-HMS.
However, only the None, Linear Reservoir, and Constant Monthly Baseflow Methods can be used
with this transform method.
2D Mesh
The 2D Diffusion Wave Method represents the subbasin using a 2D mesh which is comprised of both grid
cells and cell faces. Grid cells do not have to have a flat bottom and cell faces do not have to be straight
lines with a single elevation. Instead, each grid cell and cell face is comprised of hydraulic property tables
that are developed using the details of the underlying terrain. This type of model is often referred to as a
“high resolution subgrid model” (Casulli, 2008). The term “subgrid” implies the use of a detailed underlying
terrain (subgrid) to develop the geometric and hydraulic property tables that represent the grid cells and the
cell faces. Currently, users must create a 2D mesh (and any associated connections) within HEC-RAS
(version 5.0.7 or newer) and then import to HEC-HMS63. In the future, users will be able to create and modify
both 2D meshes and boundary conditions entirely within HEC-HMS. The 2D mesh preprocessor within HEC-
RAS creates: 1) an elevation-volume relationship for each grid cell and 2) cross sectional information (e.g.
elevation-wetted perimeter, area, roughness, etc) for each cell face. The net effects of using a subgrid model
such as this are fewer computations, faster run times, greater stability, and improved accuracy. For more
information related to the development of a 2D mesh, users are referred to the HEC-RAS 2D Modeling User's
Manual64.
The 2D Diffusion Wave Transform can only be used with Unstructured or File-Specified Discretizations. An
Unstructured Discretization can be created by importing a 2D mesh from an HEC-RAS Unsteady Plan HDF file
using the File | Import | HEC-RAS HDF File option. Unsteady Plan HDF files have extensions of ".p##.hdf"
where "p##" corresponds to the specific plan of interest. When importing a 2D mesh from an HEC-RAS
Unsteady Plan HDF file, any accompanying boundary conditions for the selected 2D mesh (except for
precipitation time series) will be imported and used to create new 2D Connections with the same
parameterization. If a File-Specified Discretization is used, the backing file must be in an HDF 5 format and
created using either HEC-RAS or HEC-HMS.
2D Engine
HEC’s 2D engine solves the St. Venant Equations using physically measurable characteristics to route water
on the overland surface (U.S. Army Corps of Engineers, 2022). This engine makes use of an implicit finite
volume algorithm which allows for advantages such as:
63 https://www.hec.usace.army.mil/confluence/display/HMSUM/.Importing+HEC-RAS+HDF+Files+v4.8
64 https://www.hec.usace.army.mil/software/hec-ras/documentation/HEC-RAS%205.0%202D%20Modeling%20Users%20Manual.pdf
Required Parameters
Parameters that are required to utilize the 2D Diffusion Wave method within HEC-HMS include implicit
weighting factor, water surface tolerance [ft or m], volume tolerance [ft or m], maximum iterations, time step
method, use warm up period, and number of cores. If the Adaptive Time Step method is selected, additional
parameters are required including the maximum Courant number and maximum time step [seconds]. If the
Fixed Time Step method is selected, the maximum time step [sec] is also required. If the warm up
period option is enabled, additional parameters are required including the warm up period [hours] and warm
up period fraction.
A tutorial describing a simple example application of this transform method, including parameter
estimation and calibration, can be found here: Creating a Simple 2D Flow Model within HEC-
HMS66.
A tutorial describing a complex example application of this transform method, including
parameter estimation and calibration, can be found here: Creating a Complex 2D Flow model
within HEC-HMS67.
65 https://www.hec.usace.army.mil/confluence/rasdocs
66 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/using-2d-flow-within-hec-hms/creating-a-simple-2d-flow-model-
within-hec-hms
67 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/using-2d-flow-within-hec-hms/creating-a-complex-2d-flow-model-
within-hec-hms
User-Specified Unit Hydrograph
Benefits:
• Well established and documented method.
Drawbacks:
• Requires the use of observed data to derive an empirical unit hydrograph.
• Difficult to calibrate derived unit hydrographs.
• Difficult to apply to ungaged areas due to lack of direct physical relationship of parameters and watershed properties.
• Shortening the duration of excess precipitation is difficult and can lead to numerical oscillations.
Clark
Benefits:
• "Mature" method that has been used successfully in thousands of studies throughout the U.S.
• Well established, widely accepted for use, easy to set up and use.
• Parameters can be regionalized and related to measurable basin characteristics.
• Parameters can be varied with excess-precipitation rate for use within extreme event simulation when using the Variable Clark option.
Drawbacks:
• Default time-area histogram may be inappropriate for use within some watersheds (though, a user-specified time-area histogram can be used).
• Cannot be used with gridded snowmelt processes.
• When using the Variable Clark option, requires development of Variable Tc and R curves.
Transform References
Alley, W.M. and Smith, P.E. (1987). Distributed routing rainfall-runoff model, Open file report 82-344. U.S.
Geological Survey, Reston, VA.
Bartles, M. (2017). Improved Applications of Unit Hydrograph Theory within HEC-HMS. World Environmental
and Water Resources Congress. Sacramento, CA: ASCE.
Bedient, P.B., and Huber, W.C. (1992). Hydrology and floodplain analysis. Addison-Wesley, New York, NY.
Casulli, V. (2008). A high-resolution wetting and drying algorithm for free-surface hydrodynamics. Numerical
Methods in Fluids, 391-408.
Chaudhry, H.C. (1993). Open-channel hydraulics. Prentice Hall, NJ.
Chow, V.T. (1959). Open channel flow. McGraw-Hill, New York, NY.
Chow, V.T., Maidment, D.R., and Mays, L.W. (1988). Applied hydrology. McGraw-Hill, New York, NY.
Clark, C.O. (1945). "Storage and the unit hydrograph." Transactions, ASCE, 110, 1419-1446.
Cudworth, A.G. (1989). Flood hydrology manual. US Department of the Interior, Bureau of Reclamation,
Washington, DC.
Dooge, J.C.I. (1959). "A general theory of the unit hydrograph." Journal of Geophysical Research, 64(2),
241-256.
Institute of Hydrology. (1999). Flood Estimation Handbook (FEH) - Procedures for Flood Frequency Estimation.
Wallingford, Oxfordshire, United Kingdom: Institute of Hydrology.
Kull, D., and Feldman, A. (1998). "Evolution of Clark's unit graph method to spatially distributed runoff."
Journal of Hydrologic Engineering, ASCE, 3(1), 9-19.
Leclerc, G. and Schaake, J.C. (1973). Methodology for assessing the potential impact of urban development
on urban runoff and the relative efficiency of runoff control alternatives, Ralph M. Parsons Lab. Report 167.
Massachusetts Institute of Technology, Cambridge, MA.
Linsley, R.K., Kohler, M.A., and Paulhus, J.L.H. (1982). Hydrology for engineers. McGraw-Hill, New York, NY.
Peters, J. and Easton, D. (1996). "Runoff simulation using radar rainfall data." Water Resources Bulletin,
AWRA, 32(4), 753-760.
Ponce, V.M. (1991). "The kinematic wave controversy." Journal of Hydraulic Engineering, ASCE, 117(4),
511-525.
Baseflow
As water is infiltrated to the subsurface, some volume can be lost to deep aquifer storage. However, some
volume is only temporarily stored and will return relatively quickly to the surface. The combination of this
baseflow and direct runoff results in a total runoff hydrograph.
• Constant Monthly68
• Recession69
• Bounded Recession70
• Linear Reservoir71
• Nonlinear Boussinesq72
68 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/baseflow/constant-monthly-model
69 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/baseflow/recession-model
70 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/baseflow/bounded-recession-model
71 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/baseflow/linear-reservoir-model
72 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/baseflow/nonlinear-boussinesq-model
Unless parameters are carefully chosen, this method is not guaranteed to conserve mass (e.g.,
precipitation losses < baseflow volume).
This method is primarily intended for continuous simulation in subbasins where the baseflow is easily
approximated by a constant flow for each month.
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the rate of baseflow for all
twelve months throughout the year [ft3/sec or m3/sec]. These constant monthly baseflow rates are best
estimated empirically using measurements of channel flow when storm runoff is not occurring. In the
absence of such records, field inspection may help establish the average flow. For large watersheds with
contribution from groundwater flow and for watersheds with year-round precipitation, the contribution may
be significant and should not be ignored. On the other hand, for most urban channels and for smaller
streams in the western and southwestern US, baseflow contributions may be negligible.
Recession Model
73 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-baseflow-methods-in-hec-hms/applying-the-constant-
monthly-baseflow-method
94)  Qt = Q0 k^t
where Q0 = initial baseflow at time zero [ft3/sec or m3/sec] and k = recession constant. Within HEC-HMS, k is
defined as the ratio of the baseflow at time t to the baseflow one day earlier and must be positive and less
than one.
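A sketch of the recession relationship, with t expressed in days so that k is the one-day ratio described above (names are illustrative):

```python
def recession_baseflow(q0, k, hours, dt_hours=1.0):
    """Recession baseflow Qt = Q0 * k**t, with t in days and k the ratio of
    the baseflow to the baseflow one day earlier (0 < k < 1)."""
    n = int(hours / dt_hours) + 1
    return [q0 * k ** (i * dt_hours / 24.0) for i in range(n)]


# Example: 500 cfs initial baseflow, k = 0.85, three days at a 6-hour step
print(recession_baseflow(500.0, 0.85, hours=72, dt_hours=6.0))
```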
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the initial baseflow type and
value, recession constant, and threshold type and value. The initial discharge type can be specified as either
a discharge rate [ft3/sec or m3/sec] or a discharge rate per area [ft3/sec/mi2 or m3/sec/km2]. The discharge
rate method is most appropriate when there is observed streamflow data at the outlet of the subbasin for
determining the initial flow in the channel. The discharge rate per area method is better suited when regional
information is available. The threshold type can be specified as either a ratio to peak or a threshold
discharge [ft3/sec or m3/sec]. If the threshold type is set to ratio to peak, the baseflow will be reset when the
current flow divided by the peak flow falls to the specified value. If the threshold type is set to a threshold
discharge, the baseflow will be reset when the receding limb of the hydrograph falls to the specified value,
regardless of the peak flow during the previous storm event.
The recession constant, k, depends upon the source of baseflow. If k = 1.0, the baseflow contribution will be
constant, with Qt = Q0. Otherwise, to model the exponential decay typical of natural undeveloped watersheds,
k must be less than 1.0. The following table shows typical values proposed by Pilgrim and Cordery (1992)
for basins ranging in size from 120 to 6500 square miles (300 to 16,000 square kilometers) in the U.S.,
eastern Australia, and several other regions.
Flow Component          Recession Constant (daily)
Groundwater             0.95
Interflow               0.8-0.9
The recession constant can be estimated if gaged flow data are available. Flows prior to the start of direct
runoff can be plotted and an average of ratios of ordinates spaced one day apart can be computed. This is
simplified if a logarithmic axis is used for the flows as the recession model will plot as a straight line.
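As an illustration of that procedure, the sketch below takes the geometric mean of the one-day ratios computed from pre-storm flows; a simple arithmetic mean of the ratios is an equally common choice.

```python
def estimate_recession_constant(daily_flows):
    """Estimate k as the geometric mean of ratios of flows spaced one day apart,
    using flows observed before the start of direct runoff."""
    ratios = [daily_flows[i + 1] / daily_flows[i]
              for i in range(len(daily_flows) - 1)]
    product = 1.0
    for r in ratios:
        product *= r
    return product ** (1.0 / len(ratios))


# Example: five days of receding pre-storm flow [cfs]
print(estimate_recession_constant([420.0, 361.0, 312.0, 268.0, 231.0]))
```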
74 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-baseflow-methods-in-hec-hms/applying-the-recession-
baseflow-method
Unless parameters are carefully chosen, this method is not guaranteed to conserve mass (e.g.,
precipitation losses < baseflow volume).
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include those which were previously
mentioned for the recession method in addition to limiting baseflow values for all twelve months of the year.
Unlike the other baseflow methods contained within HEC-HMS, this method is guaranteed to
conserve mass (i.e., the baseflow volume cannot exceed precipitation losses).
The volume of infiltrated water is used as inflow to the Linear Reservoir method. Inflow can be partitioned to
each layer in addition to deep aquifer recharge. As such, during periods of high infiltration, more baseflow
will be generated. Conversely, during periods of little to no infiltration, less baseflow will be generated. When
three groundwater reservoirs are used within this method, the system can be conceptualized as shown in the
following figure.
As described within the Soil Moisture Accounting Loss Method (SMA) section75, when the Linear Reservoir
method is used in conjunction with the SMA infiltration method, special behavior is produced. The lateral
outflow from the SMA groundwater layer 1 is connected as inflow to the Linear Reservoir groundwater layer
1. The lateral outflow from the SMA groundwater layer 2 is connected as inflow to the Linear Reservoir
groundwater layer 2. The percolation out of the SMA groundwater layer 2 is connected as inflow to the
Linear Reservoir groundwater layer 3. Partition fractions are not used for the Linear Reservoir groundwater 1
and 2 layers because their inflow is determined by the respective lateral outflow from the SMA groundwater
layers. However, a partition fraction should be used with the Linear Reservoir groundwater layer 3 in order to
define how much percolation from SMA groundwater layer 2 is lost to deep aquifer recharge and how much goes
towards inflow to the Linear Reservoir groundwater layer 3.
The Linear Reservoir Baseflow method can be used with any loss method; it DOES NOT require
the use of the SMA loss method.
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the number of reservoirs/layers,
the initial baseflow type and value, the partition fraction, the routing coefficient [hours], and the number of
routing steps for each layer. The initial discharge type can be specified as either a discharge rate [ft3/sec or
m3/sec] or a discharge rate per area [ft3/sec/mi2 or m3/sec/km2]. Using the discharge rate method is
appropriate when there is observed streamflow data at the outlet of the subbasin for determining the initial
flow in the channel. The discharge rate per area method is better suited when regional information is
available. However, the same method must be used for specifying the initial condition for all layers. The
partition fraction is used to determine the amount of inflow going to each layer. Each fraction must be
greater than 0.0 and the sum of the fractions must be less than or equal to 1.0. If the sum of the fractions is
less than 1.0, the remaining volume will be removed from the system (i.e. deep aquifer recharge). If the sum
of the fractions is exactly equal to 1.0, then all percolation will become baseflow and there will be no deep
aquifer recharge. The routing coefficient is the time constant for each layer. Similar to the estimation of
parameters for the Clark unit hydrograph transform, this parameter can be estimated using measurable
watershed characteristics. The number of routing steps can be used to subdivide the routing through each
layer and is related to the amount of attenuation during the routing. Minimum attenuation is achieved when
only one routing step is selected. Attenuation of baseflow increases as the number of steps increases.
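A sketch of the partitioning and routing just described. The linear-reservoir recursion and the equal splitting of the routing coefficient across routing steps are assumptions of this sketch, not the program's exact formulation.

```python
def linear_reservoir_baseflow(infiltration, fractions, coefficients, steps, dt):
    """Partition infiltrated volume to groundwater layers, route each layer
    through `steps` linear reservoirs in series, and sum the layer outflows.

    infiltration : infiltrated depth (or volume) per time step
    fractions    : partition fraction per layer (sum <= 1; remainder is deep recharge)
    coefficients : routing coefficient R per layer [same units as dt]
    steps        : number of routing steps (sub-reservoirs) per layer
    Assumes O = CA*I + CB*O_prev with CA = dt / (R + 0.5*dt).
    """
    n_layers = len(fractions)
    state = [[0.0] * steps[k] for k in range(n_layers)]   # outflow memory per sub-reservoir
    baseflow = []
    for infil in infiltration:
        total = 0.0
        for k in range(n_layers):
            r = coefficients[k] / steps[k]       # split R equally across routing steps (assumption)
            ca = dt / (r + 0.5 * dt)
            cb = 1.0 - ca
            inflow = fractions[k] * infil
            for s in range(steps[k]):            # route through the sub-reservoirs in series
                state[k][s] = ca * inflow + cb * state[k][s]
                inflow = state[k][s]
            total += inflow
        baseflow.append(total)
    return baseflow


# Example: two layers receiving 60% and 30% of infiltration (10% deep recharge)
print(linear_reservoir_baseflow([1.0, 0.5, 0.0, 0.0], [0.6, 0.3],
                                coefficients=[12.0, 48.0], steps=[1, 2], dt=1.0))
```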
75 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/infiltration-and-runoff-volume/soil-moisture-accounting-loss-model
The Nonlinear Boussinesq baseflow method is similar to the Recession baseflow method but assumes that
the channel overlies an unconfined aquifer which is itself underlain by a horizontal impermeable layer
(Szilagyi & Parlange, 1998). Through the use of the one-dimensional Boussinesq equation, an assumption
that capillarity above the water table can be neglected, and the Dupuit-Forchheimer approximation, it is
possible to parameterize the method using measurable field data. This method is intended primarily for
event simulation. However, it does have the ability to automatically reset after each storm event and
consequently may be used for continuous simulation.
Unless parameters are carefully chosen, this method is not guaranteed to conserve mass (e.g.,
precipitation losses < baseflow volume).
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the initial baseflow type and
value, threshold type and value, the characteristic subsurface flow length [ft or m], saturated hydraulic
conductivity of the aquifer [in/hr or mm/hr], and drainable porosity of the aquifer [ft/ft or m/m].
The initial discharge type can be specified as either a discharge rate [ft3/sec or m3/sec] or a discharge rate
per area [ft3/sec/mi2 or m3/sec/km2]. The discharge rate method is most appropriate when there is
observed streamflow data at the outlet of the subbasin for determining the initial flow in the channel. The
discharge rate per area method is better suited when regional information is available. The threshold type
can be specified as either a ratio to peak or a threshold discharge [ft3/sec or m3/sec]. If the threshold type is
set to ratio to peak, the baseflow will be reset when the current flow divided by the peak flow falls to the
specified value. If the threshold type is set to a threshold discharge, the baseflow will be reset when the
receding limb of the hydrograph falls to the specified value, regardless of the peak flow during the previous
storm event. The characteristic subsurface flow length corresponds to the mean distance from the subbasin
boundary to the stream, which can be estimated using GIS. The saturated hydraulic conductivity of the
aquifer can be estimated from field tests or from the predominant soil texture. An upper limit of the
drainable porosity of the aquifer corresponds to the total porosity minus the residual porosity. The actual
drainable porosity depends on site-specific conditions.
76 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-baseflow-methods-in-hec-hms/applying-the-linear-
reservoir-baseflow-method
Baseflow References
Chow, V.T., Maidment, D.R., and Mays, L.W. (1988). Applied hydrology. McGraw-Hill, New York, NY.
Linsley, R.K., Kohler, M.A., and Paulhus, J.L.H. (1982). Hydrology for engineers. McGraw-Hill, New York, NY.
Pilgrim, D.H, and Cordery, I. (1992). "Flood runoff." D.R. Maidment, ed., Handbook of hydrology, McGraw-Hill,
New York, NY.
Szilagyi, J., & Parlange, M. B. (1998). Baseflow Separation Based on Analytical Solutions of the Boussinesq
Equation. Journal of Hydrology, 251 - 260.
Channel Flow
As the total runoff from subbasins reaches defined channels, the depth of water increases and the
predominant flow regime begins to transition to open channel flow. At this point, open channel flow
approximations are used to represent translation and attenuation effects as flood waves move
downgradient. This section describes the models of channel flow that are included in the program; these are
also known as routing models. Each of these models computes a downstream hydrograph, given an
upstream hydrograph as a boundary condition. Each does so by solving the continuity and momentum
equations. This chapter presents a brief review of the fundamental equations, simplifications, and solutions
to alternative models. The routing models that are included are appropriate for many, but not all, flood runoff
studies. The latter part of this chapter describes how to pick the proper model.
95)  Sf = S0 - ∂y/∂x - (V/g)(∂V/∂x) - (1/g)(∂V/∂t)
where Sf = energy gradient (also known as the friction slope); S0 = bottom slope; V = velocity; y =
hydraulic depth; x = distance along the flow path; t = time; g = acceleration due to gravity; ∂y/∂x = pressure
gradient; (V/g)(∂V/∂x) = convective acceleration; and (1/g)(∂V/∂t) = local acceleration.
The continuity equation accounts for the volume of water in a reach of an open channel, including that
flowing into the reach, that flowing out of the reach, and that stored in the reach. In one-dimension, the
equation is:
96)  A(∂V/∂x) + VB(∂y/∂x) + B(∂y/∂t) = q
where B = water surface width; and q = lateral inflow per unit length of channel. Each of the terms in this
equation describes inflow to, outflow from, or storage in a reach of channel, a lake or pond, or a reservoir.
Henderson (1966) described the terms as A(∂V/∂x) = prism storage; VB(∂y/∂x) = wedge storage; and
B(∂y/∂t) = rate of rise.
The momentum and continuity equations are derived from basic principles, assuming:
• Velocity is constant, and the water surface is horizontal across any channel section.
• All flow is gradually varied, with hydrostatic pressure prevailing at all points in the flow. Thus vertical
accelerations can be neglected.
• Channel boundaries are fixed; erosion and deposition do not alter the shape of a channel cross
section.
• Water is of uniform density, and resistance to flow can be described by empirical formulas, such as
Manning's and Chezy's equations.
Approximations
Although the solution of the full equations is appropriate for all one-dimensional channel-flow problems, and
necessary for many, approximations of the full equations are adequate for typical flood routing needs. These
approximations typically combine the continuity equation (Equation 2) with a simplified momentum equation
that includes only relevant and significant terms. Henderson (1966) illustrates this with an example for a
steep alluvial stream with an inflow hydrograph in which the flow increased from 10,000 cfs to 150,000 cfs
and decreased again to 10,000 cfs within 24 hours. The terms of the momentum equation were evaluated for
this example; the bottom slope, S0 = 26 ft/mi, is far larger than any of the remaining terms. If all terms except
the bottom slope and the friction slope are neglected, the momentum equation reduces to:
97)  Sf = S0
If this simplified momentum equation is combined with the continuity equation, the result is the kinematic
wave approximation, which is described here: Kinematic Wave Channel Routing Model77.
• Diffusion wave approximation. This approximation is the basis of the Muskingum-Cunge routing
model, which is described here: Muskingum-Cunge Model (see page 212).
98)
99)
Solution Schemes
In HEC-HMS, the various approximations of the continuity and momentum equations are solved using the
finite difference method. In this method, finite difference equations are formulated from the original partial
differential equations. For example, ∂V/∂t from the momentum equation is approximated as ΔV/Δt, a
difference in velocity in successive time steps Δt, and ∂V/∂x is approximated as ΔV/Δx, a difference in
velocity at successive locations spaced at Δx. Substituting these approximations into the partial differential
equations yields a set of algebraic equations. Depending upon the manner in which the differences are
computed, the algebraic equations may be solved with either an explicit or an implicit scheme. With an
explicit scheme, the unknown values are found recursively for a constant time, moving from one location
along the channel to the next; with an implicit scheme, the unknown values at a given time are found simultaneously at all locations.
77 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/channel-flow/kinematic-wave-channel-routing-model
• A description of the channel. All routing models that are included in the program require a description
of the channel. In some of the models, this description is implicit in parameters of the model. In
others, the description is provided in more common terms: channel width, bed slope, cross-section
shape, or the equivalent. The 8-point cross-section configuration is one of the cross section shapes
available to describe the channel. The 8 pairs of x, y (distance, elevation) values are described
spatially in the figure below. Coordinates 3 and 6 represent the left and right banks of the channel,
respectively. Coordinates 4 and 5 are located within the channel. Coordinates 1 and 2 represent the
left overbank and coordinates 7 and 8 represent the right overbank.
• Energy-loss model parameters. All routing models incorporate some type of energy-loss model. The
physically-based routing models, such as the kinematic-wave model and the Muskingum-Cunge
model use Manning's equation and Manning's roughness coefficients (n values). Other models
represent the energy loss empirically.
• Initial conditions. All routing models require initial conditions: the flow (or stage) at the downstream
cross section of a channel prior to the first time period. For example, the initial downstream flow
could be estimated as the initial inflow, the baseflow within the channel at the start of the simulation,
or the downstream flow likely to occur during a hypothetical event.
• Boundary conditions. The boundary conditions for routing models are the upstream inflow, lateral
inflow, and tributary inflow hydrographs. These may be observed historical events, or they may be
computed with the precipitation-runoff models included in the program.
• Kinematic Wave78
78 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/channel-flow/kinematic-wave-channel-routing-model
• Lag79
• Lag and K80
• Modified Puls81
• Muskingum82
• Muskingum-Cunge83
• Normal Depth84
• Straddle Stagger85
The following sections detail their unique concepts and uses.
Kinematic Wave
Flow in each subreach is computed with a uniform-flow relationship in the form of Manning's equation,
with the bottom slope substituted for the energy gradient:

Q = (Cm/n) A R^(2/3) S0^(1/2)    (100)

where Q = discharge; Cm = unit-system constant (1.49 for U.S. customary units, 1.0 for SI); n = Manning's
roughness coefficient; A = cross-sectional flow area; R = hydraulic radius; and S0 = bottom slope.
As shown in the previous equation, the energy gradient is assumed to be equal to the bottom slope.
As such, this method is only appropriate for use in steep channels (i.e., 10 ft/mi or greater) and
does not recreate backwater effects.
Required Parameters
The parameters that are required to utilize this method within HEC-HMS are the initial condition, the reach
length [ft or m], the bottom slope [ft/ft or m/m], Manning’s n roughness coefficient, the number of
subreaches, an index method and value, and a cross-section shape and parameters/dimensions. An optional
invert can also be specified.
Two options for specifying the initial condition are included: outflow equals inflow and specified discharge
[ft3/sec or m3/sec]. The first option assumes that the initial outflow is the same as the initial inflow to the
reach from the upstream elements which is equivalent to the assumption of a steady-state initial condition.
The second option is most appropriate when there is observed streamflow data at the end of the reach.
79 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/channel-flow/lag-model
80 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/channel-flow/lag-and-k-model
81 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/channel-flow/modified-puls-model
82 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/channel-flow/muskingum-model
83 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/channel-flow/muskingum-cunge-model
84 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/channel-flow/normal-depth-model
85 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/channel-flow/straddle-stagger-model
86 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/transform/kinematic-wave-transform-model
Lag Model
This method does not include any representation of attenuation or diffusion processes.
Consequently, it is best suited to short stream segments with a predictable travel time that
doesn't vary with changing conditions.
Ot = It for t < lag, and Ot = It-lag for t ≥ lag    (101)

where Ot = outflow hydrograph ordinate at time t; It = inflow hydrograph ordinate at time t; and lag = time
by which the inflow ordinates are to be lagged.
The lag model is a special case of other models, as its results can be duplicated if parameters of
those other models are carefully chosen. For example, if X = 0.50 and K = Δt in the Muskingum
model, the computed outflow hydrograph will equal the inflow hydrograph lagged by K.
A tutorial describing an example application of this channel routing method, including parameter
estimation and calibration, can be found here: Applying the Lag Routing Method87.
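To make the lag operation concrete, the following is a minimal sketch of Equation 101 (illustrative only, not HEC-HMS source code); it assumes the inflow hydrograph is stored at a fixed time interval and that the lag is a whole number of those intervals.

```python
def lag_route(inflow, lag_steps):
    """Shift an inflow hydrograph by a whole number of time steps (Equation 101).

    inflow    : list of inflow ordinates at a fixed time interval
    lag_steps : lag expressed as an integer number of time steps
    """
    outflow = []
    for t in range(len(inflow)):
        if t < lag_steps:
            # Before one lag time has elapsed, outflow equals inflow.
            outflow.append(inflow[t])
        else:
            # Thereafter, outflow is the inflow lagged by the specified time.
            outflow.append(inflow[t - lag_steps])
    return outflow

# Example: a small triangular inflow lagged by 2 time steps
print(lag_route([0, 10, 50, 100, 60, 20, 0, 0], 2))
```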
Lag and K Model
This method combines a pure lag with a linear-reservoir (K) attenuation step. It is based on the continuity equation:

dS/dt = It − Ot    (102)

where dS/dt = time rate of change of water in storage at time t; It = average inflow to storage at time t; and Ot
= outflow from storage at time t.
The lack of wedge storage means that the method should only be used for slowly varying flood
waves. Also, this method does not account for complex flow conditions such as backwater
effects and/or hydraulic structures.
Required Parameters
The parameters that are required to utilize this method within HEC-HMS are the initial condition, a lag
method and value or function, and a K method and value or function. Two options for specifying the initial
condition are included: outflow equals inflow and specified discharge [ft3/sec or m3/sec]. The first option
assumes that the initial outflow is the same as the initial inflow to the reach from the upstream elements
which is equivalent to the assumption of a steady-state initial condition. The second option is most
appropriate when there is observed streamflow data at the end of the reach. Two options for specifying a
87 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-reach-routing-methods-within-hec-hms/applying-the-lag-
routing-method
88 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/channel-flow/muskingum-model
Modified Puls Model
The Modified Puls method, also known as storage routing or level-pool routing, begins with the continuity equation in the form:

∂Q/∂x + ∂A/∂t = 0    (103)

This simplification assumes that the lateral inflow is insignificant, and it allows width to change with respect
to location. Rearranging this equation and incorporating a finite-difference approximation for the partial
derivatives yields:
Īt − Ōt = ΔSt/Δt    (104)

where Īt = average upstream flow (inflow to reach) during a period Δt; Ōt = average downstream flow
(outflow from reach) during the same period; and ΔSt = change in storage in the reach during the period.
Using a simple backward differencing scheme and rearranging the result to isolate the unknown values
yields:
(2St/Δt) + Ot = (It-1 + It) + (2St-1/Δt) − Ot-1    (105)

in which It-1 and It = inflow hydrograph ordinates at times t-1 and t, respectively; Ot-1 and Ot = outflow
hydrograph ordinates at times t-1 and t, respectively; and St-1 and St = storage in the reach at times t-1 and t,
respectively. At time t, all terms on the right-hand side of this equation are known, and terms on the left-hand
side are to be found. Thus, the equation has two unknowns at time t: St and Ot. A functional relationship
between storage and outflow is required to solve Equation 3. Once that function is established, it is
substituted into Equation 3, reducing the equation to a nonlinear equation with a single unknown, Ot. This
equation is solved recursively by the program, using a trial-and-error procedure. Note that at the first time t,
the outflow at time t-1 must be specified to permit recursive solution of the equation; this outflow is the
initial outflow condition for the storage routing model.
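A minimal sketch of this storage-indication solution is shown below (illustrative only, not HEC-HMS source code). It assumes a single routing step and a tabulated, monotonic storage-outflow function interpolated linearly, so the trial-and-error solution reduces to a lookup on the quantity 2S/Δt + O.

```python
import numpy as np

def modified_puls(inflow, storage, outflow, dt, o_initial):
    """Route a hydrograph with storage-indication (Modified Puls) routing (Equation 105).

    inflow    : inflow hydrograph ordinates [cfs]
    storage   : storage values of the storage-outflow function [cf]
    outflow   : outflow values paired with `storage` [cfs]
    dt        : time step [s]
    o_initial : initial outflow [cfs]
    """
    storage = np.asarray(storage, dtype=float)
    outflow = np.asarray(outflow, dtype=float)
    # Storage-indication curve: 2S/dt + O tabulated against O
    indication = 2.0 * storage / dt + outflow

    s_prev = np.interp(o_initial, outflow, storage)  # initial storage from the function
    o_prev = o_initial

    routed = [o_prev]
    for i in range(1, len(inflow)):
        # Right-hand side of Equation 105: everything known from the previous step
        rhs = inflow[i - 1] + inflow[i] + (2.0 * s_prev / dt - o_prev)
        # Interpolate the new outflow from the storage-indication curve
        o_now = np.interp(rhs, indication, outflow)
        s_now = (rhs - o_now) * dt / 2.0
        routed.append(o_now)
        s_prev, o_prev = s_now, o_now
    return routed
```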
If the storage vs. discharge relationships are carefully constructed using a hydraulic model that
includes bridges and/or other hydraulic structures, this method can simulate backwater effects
and the impacts of hydraulic structures so long as the effects/impacts are fully contained within
the reach.
Water-surface profiles can be computed with a hydraulic model for a range of discharges. Hydraulic
modeling applications like HEC-RAS (USACE, 2023) include automated tools that can be used to define a
relationship of storage to flow between two channel cross sections using computed water-surface profiles.
The following figure illustrates a set of water-surface profiles between cross section A and cross section B
of a channel. These profiles were computed for a set of steady flows, Q1, Q2, Q3, and Q4. For each
profile, the volume of water in the reach, S, can be computed using solid geometry principles. In the
simplest case, if the profile is approximately planar, the volume can be computed by multiplying the average
cross-section area bounded by the water surface by the reach length. Otherwise, another numerical
integration method can be used. If each computed volume is associated with the steady flow with which the
profile is computed, the result is a set of points on the required storage-outflow relationship. This procedure
can be used with existing or with proposed channel configurations. For example, to evaluate the impact of a
proposed channel project, the channel cross sections can be modified, water surface profiles recalculated,
and a revised storage-outflow relationship developed.
Storage-outflow relationships can also be determined using historical observations of flow and stage.
Observed water surface profiles, obtained from high water marks, can be used to define the required storage-
outflow relationships, in much the same manner that computed water-surface profiles are used. Each
observed discharge-elevation pair provides information for establishing a point of the relationship. Sufficient
stage data over a range of floods is required to establish the storage-outflow relationship in this manner. If
only a limited set of observations is available, these values may be better suited to calibrate a water-surface
profile-model for the channel reach of interest. Then the calibrated model can be used to establish the
storage-outflow relationship as described above.
Finally, storage-outflow relationships can be calibrated using observed inflow and outflow hydrographs for
the reach of interest. Observed inflow and outflow hydrographs can be used to compute channel storage by
an inverse process of flood routing. When both inflow and outflow are known, the change in storage can be
computed using Equation 3. Then, the storage-outflow function can be developed empirically. Note that
tributary inflow, if any, must also be accounted for in this calculation. Inflow and outflow hydrographs also
can be used to find the storage-outflow function by trial-and-error. In that case, a candidate function is
defined and used to route the inflow hydrograph. The computed outflow hydrograph is compared with the
observed hydrograph. If the match is not adequate, the function is adjusted, and the process is repeated.
89 https://www.hec.usace.army.mil/confluence/hmsdocs/hmstrm/transform/kinematic-wave-transform-model
The number of steps affects the computed attenuation of the hydrograph. As the number of routing steps
increases, the amount of attenuation decreases. The maximum attenuation corresponds to one step; this is
used commonly for routing though ponds, lakes, wide, flat floodplains, and channels in which the flow is
heavily controlled by downstream conditions. Strelkoff (1980) suggests that for locally-controlled flow,
typical of steeper channels:
NSTPS = 2 L S0 / y0    (107)

where NSTPS = number of routing steps; L = reach length; S0 = bottom slope; and y0 = normal depth
associated with baseflow in the channel. Engineer Manual 1110-2-1417 Flood-
Runoff Analysis (U.S. Army Corps of Engineers, 1994) indicates that this parameter, however, is best
determined by calibration, using observed inflow and outflow hydrographs.
Muskingum Model
This method begins with the following form of the continuity equation:
dS/dt = It − Ot    (108)

which, approximated over a time interval Δt, becomes:

(It-1 + It)/2 − (Ot-1 + Ot)/2 = (St − St-1)/Δt    (109)

where It-1 and It = inflow to the reach at times t-1 and t, respectively, Ot-1 and Ot = outflow from the reach at
times t-1 and t, respectively, and St-1 and St = storage within the reach at times t-1 and t, respectively.
The volume of prism storage is the outflow rate, O, multiplied by the travel time through the reach, K. The
volume of wedge storage is a weighted difference between inflow and outflow multiplied by the travel time
K. Thus, the Muskingum method defines total storage as:
St = K Ot + K X (It − Ot)    (110)

St = K [X It + (1 − X) Ot]    (111)

where K = travel time of the flood wave through the routing reach; and X = dimensionless weight (0 ≤ X ≤ 0.5).
The quantity [X It + (1 − X) Ot] is a weighted discharge. If storage in the channel is
controlled by downstream conditions, such that storage and outflow are highly correlated, then X = 0.0. In
that case, Equation 4 resolves to a linear reservoir model:

St = K Ot    (112)
If X = 0.5, equal weight is given to inflow and outflow, and the result is a uniformly progressive wave that
does not attenuate as it moves through the reach.
If Equation 2 is substituted into Equation 4 and the result is rearranged to isolate the unknown values at time
t, the result is:
Ot = [(Δt − 2KX)/(2K(1 − X) + Δt)] It + [(Δt + 2KX)/(2K(1 − X) + Δt)] It-1 + [(2K(1 − X) − Δt)/(2K(1 − X) + Δt)] Ot-1    (113)
HEC-HMS solves Equation 6 recursively to compute Ot given inflow (It and It-1), an initial condition (Ot=0), K,
and X.
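The recursion in Equation 113 can be sketched as follows (an illustration, not HEC-HMS source code); K and Δt must be in the same time units, and the coefficient names C1, C2, and C3 follow the usual textbook convention.

```python
def muskingum_route(inflow, K, X, dt, o_initial):
    """Muskingum channel routing (Equation 113).

    inflow    : inflow hydrograph ordinates
    K         : travel time through the reach [same units as dt]
    X         : dimensionless weighting factor, 0 <= X <= 0.5
    dt        : simulation time step
    o_initial : initial outflow (for example, equal to the first inflow)
    """
    denom = 2.0 * K * (1.0 - X) + dt
    c1 = (dt + 2.0 * K * X) / denom          # multiplies the previous inflow
    c2 = (dt - 2.0 * K * X) / denom          # multiplies the current inflow
    c3 = (2.0 * K * (1.0 - X) - dt) / denom  # multiplies the previous outflow

    outflow = [o_initial]
    for t in range(1, len(inflow)):
        outflow.append(c1 * inflow[t - 1] + c2 * inflow[t] + c3 * outflow[t - 1])
    return outflow

# Example: K = 2 hr, X = 0.2, dt = 1 hr
print(muskingum_route([10, 50, 130, 100, 60, 30, 10, 10], K=2.0, X=0.2, dt=1.0, o_initial=10))
```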
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the initial condition, K [hours], X,
and the number of subreaches. Two options for specifying the initial condition are included: outflow equals
inflow and specified discharge [ft3/sec or m3/sec]. The first option assumes that the initial outflow is the
same as the initial inflow to the reach from the upstream elements which is equivalent to the assumption of
a steady-state initial condition. The second option is most appropriate when there is observed streamflow
data at the end of the reach. In either case, the initial storage will be computed from the first inflow to the
reach and corresponding storage vs. discharge function.
K is equivalent to the travel time through the reach. Initial estimates of this parameter can be made using
observed streamflow data or through approximations of flood wave celerity. One such approximation is
Seddon's Law (Ponce, 1983):

c = (1/B)(dQ/dy)    (114)

where c = flood wave celerity [ft/s or m/s], B = top width of the water surface [ft or m], and dQ/dy = slope of
the discharge vs. stage relationship (i.e., rating curve). K can then be estimated using:

K = L/c    (115)

where L = reach length [ft or m].
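As an illustration of Equations 114 and 115, a first estimate of K could be computed as sketched below; the top width, rating-curve slope, and reach length shown are assumed placeholder values, not data from any particular study.

```python
# Estimate Muskingum K from flood wave celerity (Equations 114 and 115).
# Assumed illustrative values:
B = 120.0               # top width of the water surface [ft]
dQ_dy = 600.0           # rating curve slope near the flow of interest [cfs per ft of stage]
reach_length = 26400.0  # reach length [ft] (5 miles)

celerity = dQ_dy / B                          # Seddon's Law: c = (1/B)(dQ/dy) [ft/s]
K_hours = reach_length / celerity / 3600.0    # travel time K = L / c, converted to hours
print(f"celerity = {celerity:.1f} ft/s, K = {K_hours:.2f} hours")
```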
A tutorial describing an example application of this channel routing method, including parameter estimation
and calibration, can be found here: Applying the Muskingum Routing Method90.
Muskingum-Cunge Model
The Muskingum-Cunge method is based on the continuity equation, including lateral inflow:

∂A/∂t + ∂Q/∂x = qL    (116)

and the diffusion form of the momentum equation:

Sf = S0 − ∂y/∂x    (117)
90 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-reach-routing-methods-within-hec-hms/applying-the-
muskingum-routing-method
Combining these equations and applying a linear approximation yields the convective diffusion equation:

∂Q/∂t + c(∂Q/∂x) = μ(∂²Q/∂x²) + c qL    (118)

where c = flood wave celerity, μ = hydraulic diffusivity, and qL = lateral inflow. Flood wave celerity and
hydraulic diffusivity can be expressed as:

c = dQ/dA    (119)

c = (1/B)(dQ/dy)    (120)

μ = Q/(2 B S0)    (121)

where B = top width of the water surface and S0 = bottom slope.
Using a finite difference approximation of the partial derivatives in Equation 3 and combining it with Equation
6 yields:

Ot = C1 It-1 + C2 It + C3 Ot-1 + C4 (qL Δx)    (122)

C1 = (Δt/K + 2X) / (Δt/K + 2(1 − X))    (123)

C2 = (Δt/K − 2X) / (Δt/K + 2(1 − X))    (124)

C3 = (2(1 − X) − Δt/K) / (Δt/K + 2(1 − X))    (125)

C4 = 2(Δt/K) / (Δt/K + 2(1 − X))    (126)
Within the previously mentioned Muskingum method, the X parameter is a dimensionless coefficient that
lacks a strong physical meaning. Cunge (1969) evaluated the numerical diffusion that is produced through
the use of Equation 6 and set this equal to the physical diffusion represented by Equation 3. This yielded the
following representations for K and X (Ponce & Yevjevich, 1978):
K = Δx/c    (127)

X = (1/2)[1 − Q/(B S0 c Δx)]    (128)
Since c, Q, and B can change during the passage of a flood wave, the coefficients C1, C2, C3, and C4 also
change. As such, the C1, C2, C3, and C4 coefficients are recomputed each time and distance step (Δt and Δx)
using the algorithm proposed by Ponce (1986).
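The following sketch shows how K, X, and the routing coefficients could be evaluated for a single time and distance step from the current reference conditions (illustrative only; HEC-HMS recomputes these internally with the Ponce (1986) algorithm, and the variable names here are assumptions).

```python
def muskingum_cunge_step(I_prev, I_now, O_prev, q_lat, dx, dt, c, B, S0, Q_ref):
    """One Muskingum-Cunge routing step (Equations 122 through 128).

    I_prev, I_now : inflow at the previous and current times
    O_prev        : outflow at the previous time
    q_lat         : lateral inflow per unit length of channel
    dx, dt        : distance and time steps
    c             : flood wave celerity
    B             : top width of the water surface
    S0            : bottom (friction) slope
    Q_ref         : reference discharge used to evaluate K and X
    """
    K = dx / c                                    # Equation 127
    X = 0.5 * (1.0 - Q_ref / (B * S0 * c * dx))   # Equation 128
    r = dt / K
    denom = r + 2.0 * (1.0 - X)
    c1 = (r + 2.0 * X) / denom
    c2 = (r - 2.0 * X) / denom
    c3 = (2.0 * (1.0 - X) - r) / denom
    c4 = 2.0 * r / denom
    return c1 * I_prev + c2 * I_now + c3 * O_prev + c4 * (q_lat * dx)
```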
Required Parameters
The parameters that are required to utilize this method within HEC-HMS are the initial condition, the reach
length [ft or m], the friction slope [ft/ft or m/m], Manning’s n roughness coefficient, a space-time interval
method and value, an index method and value, and a cross-section shape and parameters/dimensions. An
optional invert can also be specified.
Two options for specifying the initial condition are included: outflow equals inflow and specified discharge
[ft3/sec or m3/sec]. The first option assumes that the initial outflow is the same as the initial inflow to the
reach from the upstream elements which is equivalent to the assumption of a steady-state initial condition.
The second option is most appropriate when there is observed streamflow data at the end of the reach.
The reach length should be set as the total length of the reach element while the friction slope should be set
as the average friction slope for the entire reach. If the friction slope varies significantly throughout the
stream represented by the reach, it may be necessary to use multiple reaches with different slopes. If no
information is available to estimate the friction slope, the bed slope can be used as an approximation. The
Manning's n roughness coefficient should be set as the average value for the whole reach. This value can be
estimated using “reference” streams with established roughness coefficients or through calibration.
The choices of space and time steps (Δx and Δt) are critical to ensure accuracy and stability. Three options
for specifying a space-time method are provided within HEC-HMS: 1) Auto DX Auto DT, 2) Specified DX Auto
DT, and 3) Specified DX Specified DT. When the Auto DX Auto DT method is selected, space and time
intervals that attempt to maintain numerical stability will automatically be selected. Δt is selected as the
minimum of either the user-specified time step or the travel time through the reach (rounded to the nearest
multiple or divisor of the user-specified time step). Once Δt is computed, Δx is computed as:
Δx = c Δt    (129)
When the Specified DX Auto DT method is selected, the specified number of subreaches will be used while
automatically varying the time interval to take as long a time interval as possible while also maintaining
numerical stability. When the Specified DX Specified DT method is selected, the specified number of
subreaches and subintervals (rounded to the nearest multiple or divisor of the user-specified time step) will
be used throughout the entire simulation.
Upon completion of a simulation, the minimum and maximum celerity of the routed hydrograph will be
displayed as notes. Also, a reference space step, Δxref, will be computed using methodology presented in
Engineer Manual 1110-2-1417 Flood-Runoff Analysis (U.S. Army Corps of Engineers, 1994):
Δxref = (1/2)[c Δt + Q0/(B S0 c)]    (130)
where Q0 = reference flow, which is computed from the inflow hydrograph as:
Q0 = QB + (Qpeak − QB)/2    (131)

where QB = baseflow and Qpeak = peak inflow.
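For example, the reference space step of Equation 130 could be evaluated as sketched below (assumed illustrative values, not output from HEC-HMS).

```python
# Reference space step per EM 1110-2-1417 (Equations 130 and 131); illustrative values only.
Q_base, Q_peak = 200.0, 5000.0   # baseflow and peak inflow [cfs]
c = 5.0                          # flood wave celerity [ft/s]
B = 150.0                        # top width [ft]
S0 = 0.002                       # friction (bottom) slope [ft/ft]
dt = 900.0                       # time step [s]

Q0 = Q_base + 0.5 * (Q_peak - Q_base)         # Equation 131: reference flow
dx_ref = 0.5 * (c * dt + Q0 / (B * S0 * c))   # Equation 130: reference space step [ft]
print(f"Q0 = {Q0:.0f} cfs, dx_ref = {dx_ref:.0f} ft")
```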
A tutorial describing an example application of this channel routing method, including parameter estimation
and calibration, can be found here: Applying the Muskingum-Cunge Routing Method91.
Basic Concepts
The Normal Depth method is very similar to the aforementioned Modified Puls method. Specifically, storage
within a reach is assumed to be primarily dependent upon outflow. However, the Normal Depth method
automatically develops storage vs. discharge relationships using Manning’s equation, a normal depth
assumption, and user-defined channel properties. This method allows for more efficient parameterization,
but also loses the ability to simulate backwater effects and the impacts of hydraulic structures since
hydraulic simulations are no longer used to develop storage vs. discharge relationships.
91 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/applying-reach-routing-methods-within-hec-hms/applying-the-
muskingum-cunge-routing-method
Basic Concepts
The Straddle Stagger method (or progressive average lag method) uses empirical representations of
translation and attenuation processes to route water through a reach. Specifically, inflow to the reach is
lagged in time and then averaged over a specified duration to produce the final outflow.
Required Parameters
The parameters that are required to utilize this method within HEC-HMS are the initial condition, lag
[minutes], and duration [minutes]. Two options for specifying the initial condition are included: outflow
equals inflow and specified discharge [ft3/sec or m3/sec]. The first option assumes that the initial outflow is
the same as the initial inflow to the reach from the upstream elements which is equivalent to the assumption
of a steady-state initial condition. The second option is most appropriate when there is observed streamflow
data at the end of the reach. The lag parameter specifies the travel time through the reach; inflow to the
reach is delayed by this amount. The duration parameter specifies the period over which the lagged ordinates
are averaged to produce the outflow hydrograph.
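A minimal sketch of the straddle-stagger operation is shown below (illustrative, not HEC-HMS source code); it assumes the lag and the averaging duration are whole numbers of time steps, and the exact alignment of the averaging window in HEC-HMS may differ.

```python
def straddle_stagger(inflow, lag_steps, duration_steps):
    """Progressive average lag routing: lag the inflow, then average it over a duration.

    inflow         : inflow hydrograph ordinates at a fixed time step
    lag_steps      : travel time through the reach, in whole time steps
    duration_steps : number of lagged ordinates averaged for each outflow ordinate
    """
    n = len(inflow)
    # Lag the inflow by padding the start with the initial ordinate.
    lagged = [inflow[0]] * lag_steps + list(inflow)
    outflow = []
    for t in range(n):
        # Average `duration_steps` lagged ordinates straddling the current time.
        start = max(0, t - duration_steps // 2)
        window = lagged[start:start + duration_steps]
        outflow.append(sum(window) / len(window))
    return outflow
```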
Selected advantages and disadvantages of the routing methods are summarized below.

Modified Puls
• Advantages: Can simulate backwater effects and impacts of hydraulic structures.
• Disadvantages: Method is less parsimonious than simpler methods; it requires many more parameters.
Requires hydraulic simulations to derive accurate storage vs. outflow relationships; consequently, this
method can be difficult to parameterize and calibrate.

Muskingum
• Advantages: "Mature" method that has been used successfully in thousands of studies throughout the
U.S. Method is parsimonious; it requires only a few parameters.
• Disadvantages: Method may be too simple to accurately predict floodwave translation and attenuation.
Only appropriate for use in moderately steep streams (bed slopes > 2 ft/mi). Cannot simulate variable
translation and attenuation. Cannot simulate backwater effects or impacts of hydraulic structures.
Erosion Methods
A Subbasin Element represents a drainage area where precipitation induces surface runoff, influenced by
burned or unburned conditions. Within this catchment, erosion occurs due to various physical processes, notably
in post-fire scenarios. Raindrops initiate erosion by impacting the ground, dislodging soil particles, which are
carried by overland flow. This flow also imparts erosive energy to the terrain, potentially further disrupting the
topsoil layer. As overland flow intensifies, it becomes channeled into rills, concentrating erosive energy and
exacerbating surface erosion. The extent of erosion closely correlates with precipitation rate, land surface
slope, and surface condition. Occasionally, soil eroded from higher up in the catchment may settle before
reaching the subbasin outlet.
All Surface Erosion Methods for the subbasin element share certain simulation features. Each method
calculates the total sediment load transported out of the subbasin during a storm, repeating this process for
each storm within the simulation time window. The computed sediment load is then distributed into a time
series of sediment discharge. The following surface erosion methods are available:
• Modified USLE92
• Build-up Wash-off93
• LA Debris Equation 194
• LA Debris Equations 2-595
• USGS Emergency Assessment Debris Model96
• USGS Long-Term Debris Model97
• 2D Sediment Transport98
Another common feature among these methods is the treatment of Grain Size Distribution. Initially, all
methods compute the overall Sediment Discharge, encompassing all grain sizes. Subsequently, a Gradation
Curve specifies the proportion of the total sediment discharge allocated to each grain size class or subclass.
Users define and select a gradation curve for each subbasin, allowing for distinctions in erosion, deposition,
and resuspension processes within each subbasin. These processes are often collectively represented by an
Enrichment Ratio.
2D Sediment Transport
A brief description of the Finite-Volume discretization of the total-load transport equation is provided here
without any derivation or details. For a more comprehensive understanding, including information on
advection schemes, gradient operators, and related aspects, please refer to the "HEC-RAS 2D Sediment
Transport Technical Reference Manual" by Sánchez et al. (2019) (2D Sediment Manual99). This reference is
relevant because HEC-HMS shares the same 2D Sediment Transport engine as HEC-RAS. The 2D Transport
Module in both applications employs explicit and implicit Finite-Volume methods to solve generic Advection-
Diffusion equations. The final form of the discretized total-load advection-diffusion equation is given in that reference.
92 https://www.hec.usace.army.mil/confluence/display/HMSTRM/Modified+USLE?src=contextnavpagetreemode
93 https://www.hec.usace.army.mil/confluence/display/HMSTRM/Build-up+Wash-off?src=contextnavpagetreemode
94 https://www.hec.usace.army.mil/confluence/display/HMSTRM/LA+Debris+Equation+1?src=contextnavpagetreemode
95 https://www.hec.usace.army.mil/confluence/display/HMSTRM/LA+Debris+Equations+2-5?src=contextnavpagetreemode
96 https://www.hec.usace.army.mil/confluence/display/HMSTRM/USGS+Emergency+Assessment+Debris+Model?
src=contextnavpagetreemode
97 https://www.hec.usace.army.mil/confluence/display/HMSTRM/USGS+Long-Term+Debris+Model?src=contextnavpagetreemode
98 https://www.hec.usace.army.mil/confluence/display/HMSTRM/2D+Sediment+Transport?src=contextnavpagetreemode
99 https://www.hec.usace.army.mil/confluence/display/RAS/2D+Sediment+Manual
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the total load scaling factor,
critical mobility scaling factor, sheet & splash erodibility coefficient, sediment total roughness factor, and
adaptation coefficient.
A tutorial using the 2D Sediment Transport simulation can be found here: TBD.
For comprehensive details regarding input parameters, kindly consult the "HEC-RAS 2D Sediment Transport
Technical Reference Manual (2D Sediment Manual100)" authored by Sánchez et al. in 2019.
Build-up Wash-off
The Build-Up Wash-Off (BUWO) method is a hydrological modeling approach used to simulate the
accumulation and removal of pollutants, such as sediment, nutrients, and contaminants, from impervious
surfaces in urban areas during rainfall events. This method is commonly employed in the field of stormwater
management and urban water quality assessment. It helps estimate the quantity and quality of runoff from
urban areas and the subsequent pollutant loads entering receiving water bodies. Build up may be a function
of time, traffic flow, dry fallout and street sweeping. During a storm event, the material is then washed off
into the drainage system. Although the Build-up Wash-off option is conceptually appealing, the reliability and
credibility of simulation may be difficult to establish without local data for calibration and validation (Huber
and Dickinson, 1988).
The Michaelis-Menten Build-Up Equation is a mathematical model used to describe the accumulation of
pollutants on impervious surfaces in urban areas over time. In the context of urban stormwater management
and water quality modeling, the Michaelis-Menten Build-Up Equation is used to estimate how a specific
pollutant, such as sediment, heavy metals, or nutrients, accumulates on impervious surfaces (e.g., roads,
parking lots) as a function of time, especially during dry weather periods. The equation is often used in
conjunction with the Build-Up Wash-Off (BUWO) method to model the buildup and subsequent wash-off of
pollutants during rainfall events. The general form of the Michaelis-Menten Build-Up Equation is as follows
(Huber and Dickinson, 1988; Neitsch, Arnold, Kiniry, and Williams, 2009):

B(t) = Bmax · t / (thalf + t)

where B(t) = accumulated solids after t time units of dry weather; Bmax = maximum solid amount that can
accumulate; and thalf = half time, the time required to accumulate one half of Bmax.
The Huber-Dickinson equation is a commonly used mathematical model for simulating the wash-off of
pollutants from impervious surfaces in urban areas during rainfall events. It's named after its developers,
W.C. Huber and R.E. Dickinson, who introduced the model in the 1988 publication titled "Stormwater
Management Model User's Manual, Version III." The Huber-Dickinson equation is particularly associated with
the United States Environmental Protection Agency's (EPA) Storm Water Management Model (SWMM),
which is widely used for stormwater management and urban hydrology modeling. The equation estimates
the wash-off of pollutants as a function of various factors, including rainfall characteristics, land use, and the
pollutant load on impervious surfaces. The general form of the Huber-Dickinson Wash-Off equation is given
in Huber and Dickinson (1988) and in Neitsch, Arnold, Kiniry, and Williams (2009).
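As a rough illustration of how build-up and wash-off interact, the sketch below uses the Michaelis-Menten build-up form described above together with a simple exponential wash-off; the wash-off form and all coefficient values are assumptions chosen for demonstration, not the HEC-HMS formulation or default parameters.

```python
import math

def buildup_michaelis_menten(t_days, b_max, t_half):
    """Michaelis-Menten build-up: accumulated solids after t_days of dry weather."""
    return b_max * t_days / (t_half + t_days)

def washoff_exponential(buildup, runoff_rate, k, dt_hours):
    """Simple exponential wash-off (assumed form): mass removed during one time step."""
    return buildup * (1.0 - math.exp(-k * runoff_rate * dt_hours))

# 10 dry days of accumulation, then one hour of runoff at 0.5 in/hr (illustrative values)
b = buildup_michaelis_menten(10.0, b_max=50.0, t_half=5.0)
removed = washoff_exponential(b, runoff_rate=0.5, k=0.2, dt_hours=1.0)
print(f"build-up = {b:.1f} kg, washed off = {removed:.2f} kg")
```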
100 https://www.hec.usace.army.mil/confluence/display/RAS/2D+Sediment+Manual
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the initial time, half time,
maximum solid amount, density, sweeping percentage, efficiency percentage, interval, and wash-off
coefficient.
Street sweeping parameters exhibit variability and are typically tailored to local conditions, accounting for
factors like equipment types, cleaning frequencies, and the efficiency of pollutant removal during the street
sweeping process.
LA Debris Equation 1
The Los Angeles District Debris Method - Equation 1 (Gatwood, Pedersen, and Casey, 2000) is employed to
simulate events in watersheds ranging from 0.1 mi² to 3 mi² in size, where peak flow data is unavailable.
This equation was derived from a comprehensive dataset comprising 349 observations collected across 80
watersheds in Southern California. All the factors included in this equation demonstrated statistical
significance at a confidence level of 0.99. It is worth noting that the LA Debris Method Equation 1 exhibits its
highest efficacy in arid or semi-arid regions, precisely the same geographical area where it was originally
developed.
The Fire Factor (FF) can be approximated using the Fire Factor Curve (watersheds ranging from 0.1 mi² to
3 mi²) provided below, which illustrates a scenario of 100% combustion. An illustration of how to calculate
the Fire Factor in cases of partial combustion can be found in the Los Angeles Debris Method Manual
(Gatwood, Pedersen, and Casey, 2000).
A tutorial using the Los Angeles District Debris Method - Equation 1 in an event simulation can be
found here: Applying Debris Yield Methods in HEC-HMS101.
HEC-HMS initially assigns a default value of 1.0 to the Adjustment-Transposition (A-T) factor. However, it's
essential to fine-tune and verify this value by taking into account the disparities in geomorphological
characteristics between the specific watershed under consideration and the original watershed (San Gabriel
Mountains, CA) from which the regression equation was originally derived.
The Flow Rate Threshold parameter was introduced as an independent variable to segment storm events for
continuous simulation. It establishes the lower boundary for direct runoff flow rate, marking the
commencement of a debris flow event when the direct runoff exceeds this threshold. Conversely, the event
concludes when the direct runoff drops below the specified threshold. This parameter assumes particular
significance in the calibration process, especially for continuous simulations.
101 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Applying+Debris+Yield+Methods+in+HEC-HMS
LA Debris Equations 2-5
The Los Angeles District Debris Method - Equations 2 through 5 (Gatwood, Pedersen, and Casey, 2000) extend
the approach to larger watersheds. The Fire Factor (FF) can be approximated using the Fire Factor Curve
(watersheds ranging from 3.0 mi² to 200.0 mi²) provided below, which illustrates a scenario of 100%
combustion. An illustration of how to calculate the Fire Factor in cases of partial combustion can be found
in the Los Angeles Debris Method Manual (Gatwood, Pedersen, and Casey, 2000).
A tutorial using the Los Angeles District Debris Method - Equations 2 - 5 can be found
here: Hydrologic Modeling and Debris Flow Estimation for Post Wildfire Conditions102.
HEC-HMS initially assigns a default value of 1.0 to the Adjustment-Transposition (A-T) factor. However, it's
essential to fine-tune and verify this value by taking into account the disparities in geomorphological
characteristics between the specific watershed under consideration and the original watershed (San Gabriel
Mountains, CA) from which the regression equation was originally derived.
The Flow Rate Threshold parameter was introduced as an independent variable to segment storm events for
continuous simulation. It establishes the lower boundary for direct runoff flow rate, marking the
commencement of a debris flow event when the direct runoff exceeds this threshold. Conversely, the event
concludes when the direct runoff drops below the specified threshold. This parameter assumes particular
significance in the calibration process, especially for continuous simulations.
102 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/
Hydrologic+Modeling+and+Debris+Flow+Estimation+for+Post+Wildfire+Conditions
Modified USLE
The MUSLE, or Modified Universal Soil Loss Equation (Williams, 1975), is a mathematical model used in soil
science and hydrology to estimate soil erosion. It was developed as an extension and modification of the
original Universal Soil Loss Equation (USLE). The MUSLE takes into account factors such as land use,
topography, soil erodibility, and climate conditions to predict the potential erosion rate in a particular
area. The modifications to the original USLE equation changed the formulation to calculate erosion from
surface runoff instead of precipitation. The other components of the original formulation remained the same.
The method works best in agricultural environments where it was developed. However, some users have
adapted it to construction and urban environments.
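For orientation, the widely published form of the MUSLE (Williams, 1975) is sketched below; the equation and the example factor values come from the general literature and are shown for illustration only, not as the exact HEC-HMS implementation or as recommended parameter values.

```python
def musle_sediment_yield(runoff_volume_m3, peak_flow_m3s, K, LS, C, P):
    """Sediment yield [metric tons] from the commonly published MUSLE form (Williams, 1975).

    runoff_volume_m3 : surface runoff volume for the event [m^3]
    peak_flow_m3s    : peak runoff rate [m^3/s]
    K, LS, C, P      : soil erodibility, topographic, cover, and practice factors
    """
    return 11.8 * (runoff_volume_m3 * peak_flow_m3s) ** 0.56 * K * LS * C * P

# Illustrative example values only
print(musle_sediment_yield(runoff_volume_m3=25000.0, peak_flow_m3s=3.5,
                           K=0.30, LS=1.2, C=0.10, P=1.0))
```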
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the erodibility factor, topographic
factor, cover factor, and practice factor.
A tutorial using the MUSLE in an event simulation can be found here: TBD.
The Threshold parameter was introduced as an independent variable to segment storm events for
continuous simulation. It establishes the lower boundary for direct runoff flow rate, marking the
commencement of a sediment flow event when the direct runoff exceeds this threshold. Conversely, the
event concludes when the direct runoff drops below the specified threshold. This parameter assumes
particular significance in the calibration process, especially for continuous simulations.
This Fire Factor (FF) equation was designed specifically for continuous, long-term simulation, allowing the
recovery process following a fire event to be incorporated dynamically, in tandem with the influence of rainfall
events over time. The fire factor (FF) is formulated from several key parameters: the extent of the watershed
burned, the time elapsed since the fire, and the number of antecedent precipitation events exceeding a
predefined threshold value since the fire occurred.
Furthermore, it's important to note that the Fire Factor (FF) equation is applicable in conjunction with LA
Debris Equation 1 for continuous simulation. Nonetheless, an essential consideration arises when extending
its usage to LA Debris Equations 2-5, particularly when dealing with areas exceeding 3.0 mi². This is due to
the fact that the original equation was established using the Fire Factor Curve within the range of 0.1 mi² to
3.0 mi². Therefore, for reliable application beyond this range, the equation necessitates calibration with
empirically measured data.
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the maximum 1-hour
precipitation [inches or millimeters], Threshold Maximum 1-hour Rainfall Intensity (TMRI) [inches/hour or
millimeters/hour], Total Rainfall Amount per Event [inches or millimeters], Total Minimum Rainfall
Amount (TMRA) [inches or millimeters], relief ratio [ft/mi or m/km], and non-dimensional fire factor.
A tutorial using the Multi-Sequence Debris Prediction Method (MSDPM) in an event simulation
can be found here: Applying Debris Yield Methods in HEC-HMS103.
HEC-HMS initially assigns a default value of 1.0 to the Adjustment-Transposition (A-T) factor. However, it's
essential to fine-tune and verify this value by taking into account the disparities in geomorphological
characteristics between the specific watershed under consideration and the original watershed (San Gabriel
Mountains, CA) from which the regression equation was originally derived.
The Flow Rate Threshold parameter was introduced as an independent variable to segment storm events for
continuous simulation. It establishes the lower boundary for direct runoff flow rate, marking the
commencement of a debris flow event when the direct runoff exceeds this threshold. Conversely, the event
concludes when the direct runoff drops below the specified threshold.
103 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Applying+Debris+Yield+Methods+in+HEC-HMS
Total Minimum Rainfall Amount (TMRA): TMRA is directly linked to the sediment transport capacity
required to channel sediment toward the concentration point. However, it's crucial to recognize that not all
rainfall events possess the capacity to facilitate substantial sediment transport. Once sediment entrainment
occurs, an additional level of energy is necessary to transport the sediment effectively to the concentration
point. Consequently, another round of screening was conducted to identify rainfall events capable of
providing this essential energy.
The critical total rainfall amount, represented as TMRA, was established individually for each watershed by
analyzing the interplay between TMRA and TMRI, as visually depicted in the figure below. This pivotal
threshold signifies the minimal rainfall accumulation required to facilitate substantial sediment transport
within the watershed. The initial TMRA value for each watershed can be extracted directly from the TMRA
and TMRI graph below. The value can then be refined and verified through calibration using outlet flow gage data.
USGS Emergency Assessment Debris Model
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the maximum 15-minute
precipitation [inches or millimeters], watershed relief [feet or meters], and watershed area burned at
moderate and high severity [mi² or km²].
The Flow Rate Threshold parameter was introduced as an independent variable to segment storm events for
continuous simulation. It establishes the lower boundary for direct runoff flow rate, marking the
commencement of a debris flow event when the direct runoff exceeds this threshold. Conversely, the event
concludes when the direct runoff drops below the specified threshold. This parameter assumes particular
significance in the calibration process, especially for continuous simulations.
USGS Long-Term Debris Model
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the maximum 60-minute
precipitation [inches or millimeters], watershed relief [feet or meters], and watershed area burned by the most
recent wildfire [mi² or km²].
A tutorial using the USGS Long-Term Debris Method in an event simulation can be found
here: Applying Debris Yield Methods in HEC-HMS105.
104 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Applying+Debris+Yield+Methods+in+HEC-HMS
105 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Applying+Debris+Yield+Methods+in+HEC-HMS
The Flow Rate Threshold parameter was introduced as an independent variable to segment storm events for
continuous simulation. It establishes the lower boundary for direct runoff flow rate, marking the
commencement of a debris flow event when the direct runoff exceeds this threshold. Conversely, the event
concludes when the direct runoff drops below the specified threshold. This parameter assumes particular
significance in the calibration process, especially for continuous simulations.
106 https://www.hec.usace.army.mil/confluence/display/RASSED1D/Sediment+Transport+Potential
107 https://www.hec.usace.army.mil/confluence/display/RAS/Ackers+and+White
108 https://www.hec.usace.army.mil/confluence/display/RAS/Engelund-Hansen
109 https://www.hec.usace.army.mil/confluence/display/RAS/Laursen-Copeland
110 https://www.hec.usace.army.mil/confluence/pages/viewpage.action?pageId=30805747
111 https://www.hec.usace.army.mil/confluence/display/RAS/Toffaleti
112 https://www.hec.usace.army.mil/confluence/display/RAS/Wilcock-Crowe
113 https://www.hec.usace.army.mil/confluence/display/RAS/Yang?src=contextnavpagetreemode
Available transport potential functions include Ackers and White107, Engelund-Hansen108, Laursen-Copeland109,
Toffaleti111, Wilcock-Crowe112, Yang113, and a Sediment Delivery Ratio method (only for HEC-HMS; NC/CO)
developed by Pak and Lee (2012).
Notes: non-cohesive (NC) or cohesive (CO). Method is excess shear (ES), stream power (SP), or regression
(RE).
A cohesive transport potential method (Krone Parthenaides) can also be selected. When selected, transport
of cohesive sediment is computed in addition to the non-cohesive sediment. More information can be
found in the HEC-RAS documentation (Cohesive Transport115).
Transport potential functions are used to calculate the amount of sediment that can be carried by the stream flow.
For a more comprehensive understanding, including information on algorithms that translate hydrodynamics
into transport, please refer to the "HEC-RAS Sediment Manual" by Gibson and Sánchez (2020) (Sediment
Manual116). This reference is relevant because HEC-HMS shares the same Sediment Transport engine as
HEC-RAS.
114 https://www.hec.usace.army.mil/confluence/display/RAS/Krone+and+Parthenaides+Methods
115 https://www.hec.usace.army.mil/confluence/display/RAS/Cohesive+Transport
116 https://www.hec.usace.army.mil/confluence/display/RAS/Sediment+Manual
• Fisher's Dispersion
• Linear Reservoir
• Muskingum
• Uniform Equilibrium
• Volume Ratio
The following sections detail their unique concepts and uses.
Fisher's Dispersion
The Fisher's Dispersion Method is based on an analysis of advection and diffusion of sediment within a
reach (Fisher et al., 1979). This is the most detailed of the sediment routing methods and requires more data
than the other available methods. Advection and diffusion, represented as Travel and Dispersion parameters,
need to be specified for each grain size class. This permits large-grained sediments to move slower than
fine-grained sediments. For each time interval, sediment from the upstream elements is added to the
sediment already in the reach. After erosion or deposition is calculated, the remaining available sediment is
translated in the reach by a Travel Time and attenuated through a diffusion process. The advection and
diffusion of sediment are linked to the velocity of water in the reach which is calculated during the flow
routing.
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the dispersion coefficient [ft²/s
or m²/s] and the travel time [hours].
Dispersion Coefficient must be specified for each grain class (clay, silt, sand, gravel). The Dispersion
coefficient indicates the diffusion of the particles during transit through the reach and is dependent on the
channel geometry. The dispersion coefficient can vary over several orders of magnitude and often must be
adjusted during calibration. Some guidance is available for estimating the dispersion coefficient from
Kashefipour and Falconer (2002). The Travel Time must also be specified for each grain class and is often
close to the travel time for water in the reach. When the AGU 20 grain size classification is used, the same
dispersion and retention values are used for all subclasses of grain classes.
Linear Reservoir
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the retention parameters for
each grain size.
Estimating the retention parameters for each grain size typically involves a calibration process based on
observed hydrological data.
Muskingum
The Muskingum Method employs a straightforward mass conservation approach to manage sediment and
debris routing within a stream reach. During each time interval, the method calculates the available sediment,
considering both upstream sediment inputs and local erosion or deposition within the reach. This available
sediment, segmented by grain size, is then routed using Muskingum routing parameters: the Attenuation
Coefficient (X) and Travel Time (K). These parameters facilitate the movement of sediment with varying
grain sizes at different speeds and approximate its attenuation as it traverses the reach.
The Muskingum sediment routing method shares similarities with the Muskingum flow routing method
(Muskingum Model (see page 210)) in that both employ a mass conservation approach for routing. In the
context of sediment routing, this method effectively controls time lag with the travel time parameter (K) and
regulates attenuation using the dimensionless weight factor (X, typically ranging from 0 to 0.5).
Required Parameters
Parameters that are required to utilize this method within HEC-HMS include the attenuation coefficients and
travel times for each grain size.
A tutorial using the Muskingum sediment routing method can be found here: Task 4: Debris Flow
Modeling using Debris Channel Routing Method117
Estimating the Muskingum routing method's K and X parameters typically involves a calibration process
based on observed hydrological data.
Uniform Equilibrium
The Uniform Equilibrium Method operates on the assumption that sediment is instantaneously transported
through the reach without any temporal lag. It represents the simplest approach because it does not account
for any delay in the sediment's passage through the reach.
Here's how this method generally works:
1. Sediment Inflow: Sediment enters the reach from upstream elements, such as tributaries or other
contributing sources.
2. Transport Capacity Assessment: The method calculates the transport capacity for each grain size to
determine whether the stream is experiencing sediment deposition or erosion.
3. Sediment Constraints: It also considers any constraints on sediment deposition and erosion within
the reach.
4. Immediate Routing: The remaining sediment, after considering all factors, is routed instantly during
the same time interval, regardless of the flow velocity.
In essence, the Uniform Equilibrium Method simplifies sediment transport calculations by assuming that
sediment moves through the reach without delay, making it a straightforward approach within sediment
routing modeling.
Volume Ratio
The Volume Ratio sediment routing method is a technique used in hydrology and hydraulic engineering to
estimate the proportion of sediment transported from one location to another within a river or stream
network. Specifically, it calculates the sediment volume ratio between two adjacent reaches or sub-reaches.
The Volume Ratio Method directly pairs the sediment transport to the streamflow. For each time interval, the
fraction of the water volume in the reach that leaves during the interval is computed, and the same fraction
of the available sediment is routed out of the reach.
117 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/
Task+4%3A+Debris+Flow+Modeling+using+Debris+Channel+Routing+Method
An additional trap efficiency method is needed to account for reservoir volume reduction based on the
sediment siltation volume. One candidate method is the Brune’s trap efficiency method (Brune, 1953) utilized
by the Kansas City District. By adding Brune's trap efficiency method in HEC-HMS, USACE local district offices
can account for the reduction in reservoir storage caused by sediment deposition over time.
Figure 1. Trap efficiency as related to capacity-inflow ratio, type of reservoir, and method of operation
(Brune, 1953)
For an ideal rectangular settling basin, the critical settling velocity vc is:

vc = Q/A = Q/(l · b)

where:
l and b are the length and width of the settling zone, respectively;
v is the water velocity through the pond;
A is the surface area of the pond; and
Q is the in- or outflowing discharge.
Figure 1. Settling conditions in an ideal rectangular settling basin (Camp, T.R. 1945)
The critical settling velocity is therefore equal to the overflow rate of the pond. For an ideal rectangular pond,
the fraction of particles trapped with vs less than vc is given by the trap efficiency (TE):

TE = vs/vc = vs · A / Q
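For example, the trap efficiency of an ideal rectangular basin could be screened as sketched below (illustrative values only); the piecewise form simply caps the efficiency at 100% for particles that settle faster than the overflow rate.

```python
def ideal_basin_trap_efficiency(settling_velocity, discharge, surface_area):
    """Trap efficiency of an ideal rectangular settling basin (Camp, 1945).

    settling_velocity : particle settling velocity v_s [m/s]
    discharge         : flow through the basin Q [m^3/s]
    surface_area      : basin surface area A = l * b [m^2]
    """
    overflow_rate = discharge / surface_area   # critical settling velocity v_c = Q/A
    return min(1.0, settling_velocity / overflow_rate)

# A 2,000 m^2 pond passing 0.5 m^3/s traps about 40% of particles settling at 0.0001 m/s
print(ideal_basin_trap_efficiency(1.0e-4, 0.5, 2000.0))
```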
Figure 1. Reservoir Volume Reduction Process: (a) Chen's Trap Efficiency Method (b) Brune's Trap Efficiency
Method
When users choose the "Reservoir Capacity Method" within the HEC-HMS user interface, they are offered a
selection between two deposition shape options: "V-Shape" and "Elongated Taper." This choice of deposition
shape for each grain size initiates modifications to the Elevation-Storage and Elevation-Area relation curves,
and these alterations are visually depicted in Figure 2.
Figure 2. Elevation-Storage Curve after Siltation (Left Figure: Delta Deposition (coarse material (Sand &
Gravel)/large and long reservoir) Right Figure: Deposited Muddy Lake Deposition at Dead Storage Zone (fine
material (Clay & Silt / Small and V-shaped Reservoir))
However, the resulting equation is circular because fall velocity is a function of the drag coefficient CD, which
is a function of the Reynolds number, which is itself a function of fall velocity. This self-referential quality of
the force balance requires either an approximation of the drag coefficient/Reynolds number or an iterative
solution. The fall velocity options in HEC-RAS are detailed in Chapter 12, pages 12-30 to 12-32, but a few brief
comments on how each of these methods attempts to solve this equation (fall velocity dependence on fall
velocity) are given below.
Rubey assumes a Reynolds number to derive a simple, analytical function for fall velocity. Toffaleti
developed empirical fall velocity curves based on experimental data, which HEC-RAS reads and
interpolates directly. Van Rijn uses Rubey as an initial guess and then computes a new fall velocity from
experimental curves based on the Reynolds number computed from the initial guess. Finally, Report 12 is an
iterative solution that uses the same curves as Van Rijn but uses the computed fall velocity to compute a
new Reynolds number and continues to iterate until the assumed fall velocity matches the computed within
an acceptable tolerance.
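The iterative character of the force balance can be illustrated with a generic fixed-point loop like the one below; the drag-coefficient relation used here (a standard smooth-sphere correlation) is an assumption chosen for the illustration and is not the Report 12, Van Rijn, or Rubey formulation.

```python
def fall_velocity_iterative(d, nu=1.0e-6, R=1.65, g=9.81, tol=1.0e-8, max_iter=100):
    """Iteratively solve the force balance w = sqrt(4 g R d / (3 Cd)) for a sphere.

    d  : grain diameter [m]
    nu : kinematic viscosity of water [m^2/s]
    R  : submerged specific gravity (rho_s / rho_w - 1)
    """
    w = 0.01  # initial guess for fall velocity [m/s]
    for _ in range(max_iter):
        Re = max(w * d / nu, 1.0e-12)  # particle Reynolds number from the current guess
        # Smooth-sphere drag correlation (assumed for illustration)
        Cd = 24.0 / Re * (1.0 + 0.15 * Re ** 0.687) + 0.42 / (1.0 + 4.25e4 * Re ** -1.16)
        w_new = (4.0 * g * R * d / (3.0 * Cd)) ** 0.5  # balance of submerged weight and drag
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

# Fall velocity of a 0.5 mm quartz grain in 20 C water (illustrative)
print(fall_velocity_iterative(0.0005))
```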
Fall velocity is also dependent upon particle shape. The aspect ratio of a particle can cause both the driving
and resisting forces in the force balance to diverge from their simple spherical derivation. All of the equations
assume a shape factor or build one into their experimental curve. Only Report 12 is flexible enough to
compute fall velocity as a function of shape factor. Therefore, HEC-RAS exposes shape factor as a user input
variable but only uses it if the Report 12 method is selected.
For a more comprehensive understanding, including information on algorithms that translate hydrodynamics
into transport, please refer to the "Sediment Transport Capacity119". This reference is relevant because HEC-
HMS shares the same Sediment Transport engine as HEC-RAS.
Comprehensive information on fall velocity methods is provided in the HEC-RAS Hydraulic Reference Manual,
as illustrated below.
The suspension of a sediment particle is initiated once the bed-level shear velocity approaches the same
magnitude as the fall velocity of that particle. The particle will remain in suspension as long as the vertical
components of the bed-level turbulence exceed that of the fall velocity. Therefore, the determination of
suspended sediment transport relies heavily on the particle fall velocity.
Within HEC-RAS, the method for computing fall velocity can be selected by the user. Three methods are
available and they include Toffaleti (1968), Van Rijn (1993), and Rubey (1933). Additionally, the default can
be chosen in which case the fall velocity used in the development of the respective sediment transport
function will be used in RAS. Typically, the default fall velocity method should be used, to remain consistent
with the method used to develop the selected transport function.
118 https://www.hec.usace.army.mil/confluence/display/RASSED1D/Fall+Velocity
119 https://www.hec.usace.army.mil/confluence/display/RAS1DTechRef/Sediment+Transport+Capacity
Toffaleti
Toffaleti (1968) presents a table of fall velocities with a shape factor of 0.9 and specific gravity of 2.65.
Different fall velocities are given for a range of temperatures and grain sizes, broken up into American
Geophysical Union standard grain size classes from Very Fine Sand (VFS) to Medium Gravel (MG). Toffaleti's
fall velocities are presented in Table below.
Van Rijn
Van Rijn (1993) approximated the US Inter-agency Committee on Water Resources' (IACWR) curves for fall
velocity using non-spherical particles with a shape factor of 0.7 in water with a temperature of 20°C. Three
equations are used, depending on the particle size:
where
ν = kinematic viscosity [L²/T]
d = grain size [L]
d* = d (R g)^(1/3) ν^(−2/3) = dimensionless grain size [-]
R = ρs/ρw − 1 = submerged specific gravity [-]
Rubey
Rubey (1933) developed an analytical relationship between the fluid, sediment properties, and the fall
velocity based on the combination of Stoke's law (for fine particles subject only to viscous resistance) and
an impact formula (for large particles outside the Stoke's region). This equation has been shown to be
adequate for silt, sand, and gravel grains. Rubey suggested that particles of the shape of crushed quartz
grains, with a specific gravity of around 2.65, are best applicable to the equation. Some of the more cubic, or
uniformly shaped particles tested, tended to fall faster than the equation predicted. Tests were conducted in
water with a temperature of 16° Celsius.
in which
where
ν = kinematic viscosity [L²/T]
g = gravitational constant (~9.81 m/s²) [L/T²]
d = grain size diameter [L]
R = ρs/ρw − 1 = submerged specific gravity [-]
ρs = particle density [M/L³]
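A direct implementation of the Rubey (1933) relationship, as it is commonly written in sediment transport texts, is sketched below for reference; the variable names are illustrative.

```python
import math

def rubey_fall_velocity(d, nu=1.0e-6, R=1.65, g=9.81):
    """Rubey (1933) fall velocity for silt, sand, and gravel grains.

    d  : grain diameter [m]
    nu : kinematic viscosity [m^2/s]
    R  : submerged specific gravity (rho_s / rho_w - 1)
    Returns fall velocity [m/s].
    """
    # F blends Stokes' law (viscous resistance) with an impact formula (inertial resistance).
    term = 36.0 * nu ** 2 / (g * R * d ** 3)
    F = math.sqrt(2.0 / 3.0 + term) - math.sqrt(term)
    return F * math.sqrt(g * R * d)

print(rubey_fall_velocity(0.0005))  # roughly 0.06 m/s for medium sand
```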
Hindered Settling
Hindered settling is the condition in which the settling velocity of particles or flocs is reduced due to a high
concentration of particles. Hindered settling is primarily produced by particle collisions and the upward water
flow equal to the downward sediment volume flux. Hindered settling occurs for both cohesive and
noncohesive particles; however, the hindered settling correction described here only applies to noncohesive
particles. When the sediment concentration is high (approximately larger than 3,000 mg/l), the settling of
particles is reduced due to return flow, particle collisions, increased mixture viscosity, increased buoyancy,
and wake formation. This process is referred to as hindered settling.
120 https://www.hec.usace.army.mil/confluence/display/RAS2DSEDTR/Hindered+Settling
Kumbhakar
When using other particle settling velocity methods, hindered settling is considered using a modified form of
Kumbhakar (2017).
where
ωn = Hindered settling velocity in mixture
ω0 = Settling velocities of particles in clear fluid
c = Volumetric concentration of suspended sediment particle
R = Particle Reynolds number
Δp = Submerged specific weight
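The Kumbhakar (2017) formulation is not reproduced here, but the general effect of concentration on settling can be illustrated with the widely used Richardson-Zaki correction shown below; this is a stand-in for illustration, not the correction implemented in HEC-HMS or HEC-RAS.

```python
def hindered_settling_richardson_zaki(w_clear, concentration, exponent=4.65):
    """Reduce the clear-water settling velocity for a given volumetric concentration.

    w_clear       : settling velocity in clear water [m/s]
    concentration : volumetric sediment concentration [-]
    exponent      : Richardson-Zaki exponent (about 4.65 at low particle Reynolds numbers)
    """
    return w_clear * (1.0 - concentration) ** exponent

# A 5% volumetric concentration slows settling by roughly 21%
print(hindered_settling_richardson_zaki(0.06, 0.05))
```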
121 https://www.hec.usace.army.mil/confluence/download/attachments/156797234/
Gatwood_2000_Los%20Angeles%20District%20Debris%20Method.pdf?api=v2&modificationDate=1694874917182&version=1
122 https://www.hec.usace.army.mil/confluence/display/RAS1DTechRef/HEC-RAS+Hydraulic+Reference+Manual
123 https://www.hec.usace.army.mil/confluence/display/RAS/2D+Sediment+Manual
124 https://www.hec.usace.army.mil/confluence/display/RAS/Sediment+Manual
2. Soil loss methods (with the Green and Ampt method being the most commonly used)
4. Burn-to-unburn ratios that can be applied to calibrated unburned watersheds to reflect burned
conditions
• 5
2
%
b
u
r
n
e
d
at
m
o
d
er
at
e
s
e
v
er
it
y
• 3
4
%
b
u
r
n
e
d
at
lo
w
s
e
v
er
it
y
• 1
3
%
u
n
b
u
r
n
e
d
• B
u
s
h
-
1
5
m
e
a
s
u
re
m
e
nt
s
S
e
pt
e
m
b
er
2
0
2
0
Pr Los Augus Not • J 95% burned Green Di Satur Not Available S Not
ad Angel t 2009 Availa a at low (18%), And ff ated a Menti
ha es Statio ble n Medium Ampt us Hydra n oned
n, Natio n Fire 1 (42%) to high io ulic d
N. nal 8, (35%) n Cond y
R.; Fores 2 severity W uctivit L
Fl t, 0 av y o
oy Upper 1 e a
• H
d, Arroy 0 m
ig
I. o
• F h
Ev Seco
e =
en b 0.
t 2 1
B 7,
as • M
2 e
ed 0
P di
2 u
os 0
t- m
Fir =
e 0.
H 2
yd • L
rol o
og w
ic =
al 0.
3
Manni
ng's
Roug
hness
• H
ig
h
=
0.
1
5
M • M
od e
eli di
ng u
of m
th =
e 0.
U 1
pp 8
er
Ar • L
ro o
yo w
Se =
co 0.
2
W
at
er
sh
ed
in
S
ou
th
er
n
C
ali
fo
rni
a.
W
at
er
20
21
,
13
,
23
03
.
ht
tp
s:
//
do
i.o
rg
/
10
.3
39
0/
w
13
16
23
03125
125 http://doi.org/10.3390/w13162303
Ebel, Brian A., and John A. Moody. "Synthesis of Soil-hydraulic Properties and Infiltration Timescales ..."
• Location: Multiple locations within the Western U.S. and Australia
• Fires: Multiple fires analyzed
• Measurements: Multiple measurements analyzed
• Saturated Hydraulic Conductivity (mm/hr): Mean (geometric) = 51.6; Stdev (geometric) = 2.59; Coefficient of Variation (geometric) = 0.05; Mean (arithmetic) = 78.2; Stdev (arithmetic) = 82.5; Coefficient of Variation (arithmetic) = 1.05
• Sorptivity: Mean (geometric) = 5.93; Stdev (geometric) = 12.9; Coefficient of Variation (geometric) = 2.18; Mean (arithmetic) = 11
• Additional reported statistics: Coefficient of Variation (geometric) = 2.39; Mean (arithmetic) = 0.43; Stdev (arithmetic) = 0.01; Coefficient of Variation (arithmetic) = 0.03; Mean (geometric) = 0.79; Mean (arithmetic) = 0.13
• Wetting Front: Mean (geometric) = 0.02; Mean (arithmetic) = 0.03; Mean (geometric) = 1.08; Mean (arithmetic) = 1.05
• Other details: Not Available
Covington, W. W.; Sackett, S. S. 1990. Fire effects on ponderosa pine soils and their management ...
• Location: Southwestern United States
• Vegetation: Ponderosa Pine Forests
• Infiltration rate (Constant Rate, cm/hr): 2.5
• Infiltration rate (Constant rate) = 0.36
• Other details: Not Available
Diversion Modeling
A diversion is modeled in the same manner as a stream bifurcation by using a simple one-dimensional
approximation of the continuity equation. In that case:
132) $\bar{O}_{main} = \bar{I}_{main} - \bar{O}_{div}$

in which $\bar{O}_{main}$ = average flow passing downstream in the main channel during time interval t; $\bar{I}_{main}$ = average main channel flow just upstream of the diversion control structure during the interval; and $\bar{O}_{div}$ = average flow into the by-pass channel during the interval. The diverted flow is computed as a function of the main channel flow:

133) $\bar{O}_{div} = f(\bar{I}_{main})$

in which $f(\,)$ = the functional relationship of main channel flow and diversion channel flow. The relationship can be developed with historical measurements, a physical model constructed in a laboratory, or a mathematical model of the hydraulics of the structure. For example, flow over the weir in the above Figure can be computed with the weir equation:

134) $O = C L H^{3/2}$
in which O = flow rate over the weir; C = dimensional discharge coefficient that depends upon the
configuration of the weir; L = effective weir width; H = total energy head on crest. This head is the difference
in the weir crest elevation and the water-surface elevation in the channel plus the velocity head, if
appropriate. The channel water-surface elevation can be computed with a model of open channel flow, such
as HEC-RAS126. For more accurate modeling, a two-dimensional flow model can be used to develop the
relationship.
126 https://www.hec.usace.army.mil/confluence/display/RASDOCS
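As a minimal illustration of developing such a relationship, the sketch below evaluates the weir equation from above against a hypothetical main-channel stage-discharge rating; the coefficient, crest elevation, and rating values are invented for the example and are not HEC-HMS defaults.

def weir_flow(C, L, H):
    """Flow over a weir: O = C * L * H**1.5 (returns 0 if the head is negative)."""
    return C * L * max(H, 0.0) ** 1.5

# Hypothetical main-channel rating (stage in ft vs. flow in cfs) and weir geometry.
channel_rating = [(2.0, 150.0), (3.0, 400.0), (4.0, 800.0), (5.0, 1400.0)]
crest_elev = 3.5   # ft, lateral weir crest elevation (assumed)
C, L = 3.1, 20.0   # discharge coefficient and effective weir width (assumed)

# Tabulate the diversion relationship: main-channel flow -> diverted flow.
for stage, q_main in channel_rating:
    q_div = weir_flow(C, L, stage - crest_elev)
    print(f"main flow {q_main:7.1f} cfs -> diverted {q_div:6.1f} cfs")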
Modeling Reservoirs
This chapter describes how the reservoir element in HEC-HMS is used for modeling reservoirs or other types
of water storage features, such as detention or retention ponds or natural lakes. Reservoirs have many uses,
including flood mitigation, water supply, hydropower, and recreation. Reservoirs functionally change the
hydrograph on a stream, allowing for inflowing water to be stored and then released at altered times and
rates. Some reservoirs are operable and releases can be controlled, while others, such as detention ponds,
may use an uncontrolled culvert or weir as a release structure. The primary objective of modeling reservoirs
in HEC-HMS is to include how the reservoir changes the hydrologic response in a watershed.
128 https://www.publications.usace.army.mil/portals/76/publications/engineermanuals/em_1110-2-1603.pdf
129 https://efotg.sc.egov.usda.gov/references/public/TX/tr60amend1.pdf
130 https://www.usbr.gov/tsc/techreferences/mands/mands-pdfs/SmallDams.pdf
131 https://www.hec.usace.army.mil/software/hec-ras/
132 https://www.hec.usace.army.mil/software/hec-ressim/
133 https://www.usbr.gov/tsc/techreferences/mands/mands-pdfs/SmallDams.pdf
134 https://efotg.sc.egov.usda.gov/references/public/TX/tr60amend1.pdf
135 https://www.publications.usace.army.mil/portals/76/publications/engineermanuals/em_1110-2-1603.pdf
136 https://www.hec.usace.army.mil/confluence/display/RASUM/HEC-RAS+User%27s+Manual
The reservoir outlet may consist of a single culvert, as shown in Figure 2. It may also consist of separate
conduits of various sizes or several inlets to a chamber or manifold that leads to a single outlet pipe or
conduit. The rate of release from the reservoir through the outlet and over the spillway depends on the
characteristics of the outlet (in this case, a culvert), the geometric characteristics of the inlet, the
characteristics of the spillway, and the tailwater condition. The reservoir can also have an auxiliary spillway
that releases to a different stream.
Defining Routing
Outflow from an impoundment that has a horizontal water surface can be computed with the so-called level-
pool routing model (also known as the Modified Puls routing model). That model discretizes time, breaking
the total analysis period into equal intervals of duration Δt. It then recursively solves the following one-
dimensional approximation of the continuity equation:

$\bar{I}_t - \bar{O}_t = \frac{\Delta S}{\Delta t}$

in which $\bar{I}_t$ is the average inflow during the time interval; $\bar{O}_t$ is the average outflow during the time interval; and $\Delta S$ is the storage change. With a finite difference approximation, this can be written as:

$\frac{I_t + I_{t+1}}{2} - \frac{O_t + O_{t+1}}{2} = \frac{S_{t+1} - S_t}{\Delta t}$

in which $t$ is the index of the time interval; $I_t$ and $I_{t+1}$ are the inflow values at the beginning and end of the time interval, respectively; $O_t$ and $O_{t+1}$ are the corresponding outflow values; and $S_t$ and $S_{t+1}$ are the corresponding storage values. This equation can be rearranged as follows:

$\left( \frac{2 S_{t+1}}{\Delta t} + O_{t+1} \right) = \left( I_t + I_{t+1} \right) + \left( \frac{2 S_t}{\Delta t} - O_t \right)$

All terms on the right-hand side are known. The values of $I_t$ and $I_{t+1}$ are the inflow hydrograph ordinates, perhaps computed with models described earlier in the manual. The values of $O_t$ and $S_t$ are known at the $t^{th}$ time interval. At $t = 0$, these are the initial conditions, and at each subsequent interval, they are known from calculation in the previous interval. Thus, the quantity $\left( \frac{2 S_{t+1}}{\Delta t} + O_{t+1} \right)$ can be calculated with the equation above. For an impoundment, storage and outflow are related, and with this storage-outflow relationship, the corresponding values of $O_{t+1}$ and $S_{t+1}$ can be found. The computations can be repeated for successive intervals, yielding values $O_1$, $O_2$, $O_3$, ..., the required outflow hydrograph ordinates.
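A minimal sketch of this recursion, assuming a tabulated storage-outflow relationship and hypothetical inflow values (an illustration of the method, not the HEC-HMS implementation):

import numpy as np

def level_pool_route(inflow, storage, outflow, dt, S0=0.0, O0=0.0):
    """Route an inflow hydrograph through a reservoir with level-pool (Modified Puls) routing.

    inflow  : inflow hydrograph ordinates I_t (m3/s)
    storage : storage values of the storage-outflow relationship (m3)
    outflow : outflow values of the storage-outflow relationship (m3/s)
    dt      : time step (s); S0, O0 : initial storage and outflow
    """
    storage = np.asarray(storage, dtype=float)
    outflow = np.asarray(outflow, dtype=float)
    # Storage-indication curve (2S/dt + O) versus O, used for the lookup.
    indication = 2.0 * storage / dt + outflow

    S, O = S0, O0
    routed = [O0]
    for t in range(len(inflow) - 1):
        # Right-hand side of the rearranged continuity equation.
        rhs = (inflow[t] + inflow[t + 1]) + (2.0 * S / dt - O)
        # Interpolate O_{t+1}, then back out S_{t+1}.
        O_next = float(np.interp(rhs, indication, outflow))
        S_next = (rhs - O_next) * dt / 2.0
        routed.append(O_next)
        S, O = S_next, O_next
    return routed

# Hypothetical storage-outflow relationship and a simple inflow hydrograph.
stor = [0.0, 1.0e5, 3.0e5, 6.0e5]     # m3
outq = [0.0, 5.0, 15.0, 40.0]         # m3/s
inflow = [0, 10, 30, 50, 35, 20, 10, 5, 0]
print(level_pool_route(inflow, stor, outq, dt=3600.0))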
Figure 3(a) is the pond outlet-rating function; this relates outflow to the water-surface elevation in the pond.
The relationship is determined with appropriate weir, orifice, or pipe formulas, depending on the design of the
outlet. In the case of the configuration of Figure 2, the outflow is approximately equal to the inflow until the
capacity of the culvert is exceeded. Then water is stored and the outflow depends on the head. When the
outlet is fully submerged, the outflow can be computed with the orifice equation:

$O = K A \sqrt{2 g H}$
in which O = flow rate; K = dimensional discharge coefficient that depends upon the configuration of the
opening to the culvert; A = the cross-sectional area of the culvert, normal to the direction of flow; H = total
energy head on outlet; and g is the gravitational constant. This head is the difference in the downstream
water-surface elevation and the upstream (pond) water-surface elevation.
Figure 3(b) is the spillway rating function. In the simplest case, this function can be developed with the weir
equation. For more complex spillways, refer to EM 1110-2-1603 (1965), to publications of the Soil
Conservation Service (1985), and to publications of the Bureau of Reclamation (1977) for appropriate rating
procedures.
Figure 3(a) and (b) are combined to yield Figure 3(c), which represents the total outflow when the reservoir
reaches a selected elevation.
where $\Delta S$ is the incremental storage between two reservoir elevations ($elev_1$ and $elev_2$) whose respective surface areas are $A_1$ and $A_2$. A limitation to this approach is that it does not work well for very large reservoirs, where the level pool assumption is not realistic.
The reservoir can have one or more inflows and computed outflows through one or more outlets.
Assumptions include a level pool.
Additional Release
In most situations a dam can be properly configured by defining outlet structures such as spillways,
uncontrolled outlets, etc. The total outflow from the reservoir can be calculated automatically using the
physical properties entered for each of the included structures. However, some reservoirs may have an
additional release beyond what is represented by the various physical structures. In many cases this
additional release is a schedule of managed releases achieved by operating spillway gates.
Currently the only method for making an additional release is the Gage Release method. The modeler can specify the additional releases based on a gage reading (a time series of discharge). This release is subtracted from the reservoir pool during the iterative calculation so that the controlled release is accounted for when the releases from the other outlet structures are determined.
Dam Break
Sometimes it is of interest to model a scenario in which there is a dam failure. Two types of dam failure can
be modeled in HEC-HMS, overtop and piping. For both types of breach methods, a trigger method,
development time, and progression method are used to define when the failure initiates, how long it takes to
attain maximum breach opening, and how the breach develops during the development time. Typically, dam breaks are modeled in HEC-HMS only for screening-level or periodic assessments. For larger reservoirs, HEC-RAS is generally more appropriate because it can represent a non-level pool.
In order to model a dam break, the Outflow Structures or Rule-Based Operations routing method must be
used. Only one dam break can be included in the reservoir.
Trigger Method
There are three methods for triggering the initiation of the failure: elevation, duration at elevation, and
specific time. For the Elevation method, the breach will begin forming as soon as the reservoir reaches a
specified elevation. For the Duration at Elevation method, the reservoir elevation must remain at or above a
specified elevation for a specified length of time in order to initiate the breach. For the Specific Time method,
the breach will begin opening at the specified time regardless of the reservoir pool elevation.
Once the breach has been triggered, the development of the breach is determined using a selected
progression method over the development time. The development time defines the total time (in hours) for
the breach to form, from initiation to reaching the maximum breach size.
Overtop Breach
The overtop dam break is designed to represent failures caused by overtopping of the dam. These failures
are most common in earthen dams but may also occur in concrete arch, concrete gravity, or roller compacted dams. The failure begins when appreciable amounts of water begin flowing over or
around the dam face. The flowing water will begin to erode the face of the dam.
The method begins the failure at a point on the top of the dam (or below the top) and expands it in a
trapezoidal shape until it reaches the maximum size. The maximum breach size is defined using the top and
bottom elevation, bottom width, and side slopes.
The bottom elevation defines the elevation of the bottom of the trapezoidal opening in the dam face when
the breach is fully developed. The bottom width defines the width of the bottom of the trapezoidal opening in
the dam face when the breach is fully developed.
Flow through the expanding breach is modeled using the weir flow equation (Singh 1996, Wu 2016):

$Q = C_1 b H^{1.5} + C_2 m H^{2.5}$
where:
Q = Discharge over dam breach (m3/s)
C1 = 1.7 (discharge coefficient for the rectangular portion of the trapezoid)
C2 = 1.35 (discharge coefficient for the triangular portions of the trapezoid)
b = Bottom width of the breach
m = Side slope (H:V)
H = Upstream energy head above dam breach
The size of the dam breach at each timestep is determined using the trigger, progression, and definition of
the maximum dam breach size. The invert elevation is changing, along with the head and the length. The
computed discharge, Q, is adjusted for submergence if there is a Tailwater condition configured.
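A minimal sketch of the breach weir calculation, using the coefficients given above but ignoring the submergence adjustment and the growth of the breach over time; the example dimensions are hypothetical:

def breach_weir_flow(b, m, H, C1=1.7, C2=1.35):
    """Flow through a trapezoidal breach opening (metric units).

    b  : current bottom width of the breach (m)
    m  : side slope (H:V)
    H  : upstream energy head above the breach invert (m)
    C1 : coefficient for the rectangular portion of the trapezoid
    C2 : coefficient for the triangular portions of the trapezoid
    """
    H = max(H, 0.0)
    # Rectangular portion plus the triangular side portions.
    return C1 * b * H ** 1.5 + C2 * m * H ** 2.5

# Example: a partially developed breach 10 m wide at the bottom with 1H:1V
# side slopes and 4 m of head above the breach invert.
print(breach_weir_flow(b=10.0, m=1.0, H=4.0))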
Piping Breach
The piping dam break is designed to represent failures caused by piping inside the dam. These failures
typically occur only in earthen dams. The failure begins when water naturally seeping through the dam core
increases in velocity and quantity enough to begin eroding fine sediments out of the soil matrix. If enough
material erodes, a direct piping connection may be established from the reservoir water to the dam face.
Once such a piping connection is formed it is almost impossible to stop the dam from failing. The method
begins the failure at a point in the dam face and expands it as a circular opening. When the opening reaches
the top of the dam, it continues expanding as a trapezoidal shape. Flow through the circular opening is
modeled as orifice flow while in the second stage it is modeled as weir flow.
The piping elevation indicates the point in the dam where the piping failure first begins to form. The piping
coefficient is used to model flow through the piping opening as orifice flow. As such, the coefficient
represents energy losses as water moves through the opening. The piping dam break method is modeled
using the orifice flow equation (Singh 1996):

$Q = C A \sqrt{2 g H}$
where:
Q = Discharge
C = User-specified orifice/piping coefficient
A = Cross-sectional area of the orifice, normal to the direction of flow
g = Gravitational constant
H = Total energy head on outlet (measured from the center line of the orifice)
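A minimal sketch of the piping-flow calculation for a circular opening, with hypothetical dimensions (the breach geometry and its growth over time are governed by the trigger and progression methods described above):

import math

def piping_breach_flow(C, diameter, H, g=9.81):
    """Orifice flow through a circular piping opening: Q = C * A * sqrt(2 g H).

    C        : user-specified orifice/piping coefficient
    diameter : current diameter of the circular piping opening (m)
    H        : total energy head on the opening, measured from its centerline (m)
    """
    A = math.pi * diameter ** 2 / 4.0   # cross-sectional area of the opening
    return C * A * math.sqrt(2.0 * g * max(H, 0.0))

# Example: a 2 m diameter piping opening, 6 m of head, coefficient of 0.6.
print(piping_breach_flow(C=0.6, diameter=2.0, H=6.0))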
Dam Seepage
Most dams have some water seeping through the face of the dam. The amount of seepage depends on the
elevation of water in the dam, the elevation of water in the tailwater, the integrity of the dam itself, and other
factors. In some situations, seepage from the pool through the dam and into the tailwater can be a
significant source of discharge that must be modeled. Less commonly, water in the main channel
downstream may seep through the levee or dam face and enter the pool. Both of these situations can be
represented using the dam seepage structure. Only one seepage structure can be added to any reservoir, so
all sources and sinks of seepage must be represented collectively.
It is assumed that all reservoir seepage ends up in the river downstream of the reservoir. Seepage into the
reservoir is taken from a global source and only will occur when tailwater is lower than the reservoir
elevation. When water seeps out of the reservoir, the seepage is automatically taken from the reservoir
storage and added to the main tailwater discharge location. This is the mode of seepage when the pool
elevation is greater than the tailwater elevation. Seepage into the reservoir happens when the tailwater
elevation is higher than the pool elevation. In this mode the appropriate amount of seepage is added to
reservoir storage, but it is not subtracted from the tailwater. Dam seepage is also commonly used with pump
stations outside levees. One side must be declared to be tailwater.
Currently the only dam seepage method available is Tabular Seepage. This is similar to modeling an
"Unknown Spillway". The user gives two elevation-discharge curves.
Tabular Seepage
The tabular seepage method uses an elevation-discharge curve to represent seepage. Usually the elevation-
discharge data will be developed through a geotechnical investigation separate from the hydrologic study. A
curve may be specified for inflow seepage from the tailwater toward the pool, and a separate curve can be
specified for outflow seepage from the pool to the tailwater. The same curve may be selected for both
directions if appropriate. If a curve is not selected for one of the seepage directions, then no seepage will be
calculated in that direction.
In order to determine which table to use, the pool elevation $elev_{pool}$ is compared to the tailwater elevation $elev_{tw}$. Positive values of the seepage head $h_s$ between the pool and tailwater indicate seepage out of the dam, as long as the pool elevation is above a defined minimum. Negative values of $h_s$ indicate seepage from the tailwater into the pool. Once the direction of seepage is determined, the seepage value is simply a matter of looking up the seepage from the seepage table.

$h_s = elev_{pool} - elev_{tw}$
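A minimal sketch of the lookup logic, assuming the outflow curve is entered against pool elevation and the inflow curve against tailwater elevation; that lookup convention and the curve values are assumptions for illustration:

import numpy as np

def dam_seepage(elev_pool, elev_tw, out_curve, in_curve=None, min_pool=-1.0e9):
    """Tabular dam seepage: pick the outflow or inflow curve from the sign of h_s.

    out_curve : (elevations, discharges) for seepage out of the pool
    in_curve  : (elevations, discharges) for seepage into the pool, or None
    min_pool  : minimum pool elevation below which no outflow seepage occurs
    """
    hs = elev_pool - elev_tw
    if hs > 0 and elev_pool > min_pool:
        elevs, flows = out_curve
        return float(np.interp(elev_pool, elevs, flows))     # seepage out of the dam
    if hs < 0 and in_curve is not None:
        elevs, flows = in_curve
        return -float(np.interp(elev_tw, elevs, flows))      # seepage into the pool
    return 0.0

# Hypothetical elevation-discharge seepage curve (ft, cfs).
out_curve = ([100.0, 110.0, 120.0], [0.0, 2.0, 6.0])
print(dam_seepage(elev_pool=115.0, elev_tw=102.0, out_curve=out_curve))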
Dam Tops
The top of the dam is important when modeling conditions that may cause water to flow over the top of the
dam. Dam tops can be added to reservoirs that use the outflow structure routing method. These represent
the top of the dam, above any spillways, where water goes over the dam top in an uncontrolled manner. In
some cases a dam top can be used to represent an emergency spillway. Up to 10 independent dam tops can
be included in the reservoir. There are two different methods for computing outflow through a dam top: level
or non-level.
(8-6) $Q = C L H^{3/2}$
where:
Q = Discharge over the dam top.
C = Discharge coefficient; accounts for energy losses as water flows over the dam. Typical values will range
from 2.6 to 3.3, depending upon the shape of the dam.
L = Length of the dam top.
H = Upstream energy head above the dam top.
The crest elevation of the dam top must be specified in order to allow for the determination of head.
The length of the dam top should represent the total width through which water passes, excluding any
amount occupied by spillways.
The discharge coefficient accounts for energy losses as water approaches the dam top and flows over the dam. Depending on the exact shape of the dam top, typical values range from 1.45 to 1.84 in SI units (2.63 to 3.33 in US Customary units); a broader range of 1.10 to 1.66 in SI units (2.0 to 3.0 in US Customary units) is also cited. The Civil Engineering Reference Manual gives the range 1.45 to 1.84 in SI units (2.63 to 3.33 in US Customary units).
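The SI and US Customary ranges are consistent once the units of the coefficient are considered. In the weir equation $Q = C L H^{3/2}$, the coefficient carries units of length$^{1/2}$/time, so $C_{SI} = \sqrt{0.3048}\,C_{US} \approx 0.552\,C_{US}$; for example, $0.552 \times 2.63 \approx 1.45$ and $0.552 \times 3.33 \approx 1.84$, while $0.552 \times 2.0 \approx 1.10$ and $0.552 \times 3.0 \approx 1.66$.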
Evaporation
Water losses due to evaporation may be an important part of the water balance for a reservoir, especially in
dry or desert environments. If evaporation is to be captured, the model must use the Outflow Structures routing method (see page 321) with the elevation-area storage method. An error message will appear if you attempt to model evaporation without using these routing and storage methods. This provides a reservoir
surface area with which evaporation can be determined. An evaporation depth is computed for each time
interval and then multiplied by the current surface area.
Currently the only evaporation option is the Monthly evaporation method. It can be used to specify a
separate evaporation rate for each month of the year, entered as a total depth for the month.
The evaporation depth at timestep $t$, $d_t$, is a function of the current month's total evaporation depth $D_m$ and the number of timesteps in the current month, $N_m$:

$d_t = \frac{D_m}{N_m}$
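A minimal sketch of this calculation, assuming the month's depth is spread uniformly over its timesteps before being multiplied by the current surface area; the values are hypothetical:

import calendar

def evaporation_volume(month, year, monthly_depth_mm, surface_area_m2, dt_hours):
    """Evaporation volume for one time step using the Monthly evaporation method."""
    hours_in_month = 24 * calendar.monthrange(year, month)[1]
    steps_in_month = hours_in_month / dt_hours
    depth_per_step_m = (monthly_depth_mm / 1000.0) / steps_in_month
    return depth_per_step_m * surface_area_m2   # m3 evaporated this time step

# Example: 150 mm of evaporation in July, 2 km2 of pool area, 1-hour time step.
print(evaporation_volume(month=7, year=2023, monthly_depth_mm=150.0,
                         surface_area_m2=2.0e6, dt_hours=1.0))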
Outflow Structures
Outlets and other discharge structures can only be modeled individually when using the Outflow Structures
routing method (see page 321). The table below lists available outflow structure types and subtypes, which
include outlets, spillways, gates, pumps, and dam tops, breaks, and seepage. Evaporation is also a specific
type of release that requires the outflow structures routing method.
Table 2. Outflow Structures available in HEC-HMS
Structure    Description
Dam Break    Dam failures. Allows for a defined trigger time and development
Piping       Piping occurs through the dam and eventually cuts away the dam top
Outlets
Outlets typically represent structures near the bottom of the dam that allow water to exit in a controlled
manner. They are often called gravity outlets because they can only move water when the head in the
reservoir is greater than the head in the tailwater. Up to ten independent outlets can be included in the
reservoir.
Culvert Outlet
The culvert outlet can handle many flow types, including pressure flow, so it allows for partially full or submerged flow through a culvert with a variety of cross-sectional shapes. Culvert flow calculations can be complicated, but the approach taken by HEC-HMS is shared with HEC-RAS; it simplifies the analysis by considering the flow to be under either Inlet Control or Outlet Control. As described in Chapter 6 of the HEC-RAS
Technical Reference Manual, "Inlet control flow occurs when the flow capacity of the culvert entrance is less
than the flow capacity of the culvert barrel. The control section of a culvert operating under inlet control is
located just inside the entrance of the culvert. The water surface passes through critical depth at or near this
location, and the flow regime immediately downstream is supercritical. For inlet control, the required
upstream energy is computed by assuming that the culvert inlet acts as a sluice gate or as a weir. Therefore,
the inlet control capacity depends primarily on the geometry of the culvert entrance. Outlet control flow
occurs when the culvert flow capacity is limited by downstream conditions (high tailwater) or by the flow
carrying capacity of the culvert barrel. The HEC RAS culvert routines compute the upstream energy required
to produce a given flow rate through the culvert for inlet control conditions and for outlet control conditions.
In general, the higher upstream energy "controls" and determines the type of flow in the culvert for a given
flow rate and tailwater condition. For outlet control, the required upstream energy is computed by performing
an energy balance from the downstream section to the upstream section. The HEC RAS culvert routines
consider entrance losses, friction losses in the culvert barrel, and exit losses at the outlet in computing the
outlet control headwater of the culvert."
HEC-HMS shares the RAS culvert routines, and thus requires the same input data. RAS, however, assumes a
roadway crest, whereas HEC-HMS does not. Figure XXX depicts the relationships between upstream energy
and flow rate for outlet control and inlet control flows.
Settings Options
Direction
The limitations to the culvert calculations are that no upstream or downstream cross sections are used. The energy grade line is therefore calculated assuming a quiescent, level pool above the inlet and a quiescent stilling basin at the outlet.
Inlet Controlled
Use this method when the culvert outflow is controlled by a high pool elevation in the reservoir. See
equations 6-2 and 6-3 in the RAS Tech Ref Man for the calculations used. HEC-HMS departs from the RAS
Circular X
Semi Circular X
Elliptical X X
Arch X X
High-Profile Arch X
Low-Profile Arch X
Pipe Arch X X
Box X X
Con Span X X
Orifice Outlet
The orifice outlet assumes a large outlet with sufficient submergence for orifice flow conditions to dominate.
It should not be used to represent an outlet that may flow only partially full. The necessary submergence is
typically present for low level reservoir outlets, but it may not be assured for small reservoirs, such as those
on farms. If there is any uncertainty about whether or not the large orifice approach is appropriate, it is best
to use the Culvert approach instead. The downside of the culvert approach is that it may take ten to twenty times longer to calculate.
In order to ensure that the outlet is experiencing pressure flow conditions, the inlet of the structure should be
submerged at all times by a depth at least 0.2 times the height of the orifice outlet. The approximate height
is estimated to be the square root of the area. The water elevation is calculated and compared accordingly.
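A minimal sketch of the submergence check and orifice calculation, assuming the check is applied against the top of the outlet opening (the exact datum used by HEC-HMS is not stated above); all values are hypothetical:

import math

def orifice_outlet_flow(pool_elev, center_elev, area, coeff, g=9.81):
    """Large-orifice outlet with a submergence check of the kind described above.

    Flow is released only when the inlet is submerged by at least 0.2 times the
    approximate outlet height, estimated as the square root of the flow area.
    """
    height = math.sqrt(area)                    # approximate outlet height
    top_of_outlet = center_elev + height / 2.0
    if pool_elev < top_of_outlet + 0.2 * height:
        return 0.0                              # insufficient submergence for orifice flow
    head = pool_elev - center_elev              # head measured to the outlet center
    return coeff * area * math.sqrt(2.0 * g * head)

# Example: 4 m2 outlet centered at elevation 100 m with a pool at 108 m.
print(orifice_outlet_flow(pool_elev=108.0, center_elev=100.0, area=4.0, coeff=0.6))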
Settings Options
The center elevation specifies the center of the cross-sectional flow area. It is used to compute the head on
the outlet, so no flow will be released until the reservoir pool elevation is above this specified elevation. The
cross-sectional flow area of the outlet must be specified. The orifice assumptions are independent of the
shape of the flow area. The dimensionless discharge coefficient must be entered. This parameter describes
the energy loss as water exits the reservoir through the outlet.
Section 6 of the HEC-RAS Technical Reference Manual includes detailed descriptions and equations used in
culvert hydraulics and flow analysis. Since much of the approach is shared with HEC-HMS, these
descriptions are not repeated here. Refer to the HEC-RAS manual for details on the algorithms. Note that the
difference between the HEC-RAS and the HEC-HMS outlet flow calculations is that HEC-HMS does not
consider flow in upstream or downstream cross sections, thus the equations are simplified by assuming
zero velocity.
Pumps
Some smaller reservoirs such as interior detention ponds or pump stations may use pumps to move water
out of the reservoir and into the tailwater when gravity outlets alone are insufficient. Pumps can only be
included in reservoirs using the Outflow Structures routing method. Up to 10 independent pumps can be
included in the reservoir.
Head-Discharge Pump
The head-discharge pump is designed to represent pumps that are applied in low-head, high-flow situations,
such as the centrifugal type. These pumps are designed for high flow rates against a relatively small head.
There are options for setting a reservoir pool elevation range for pumping and minimum times for the on or
off condition. Figure X depicts the representation of this type of pump in HEC-HMS.
A head-discharge curve is used to describe the capacity of the pump as a function of the total head. Total
head is the head difference due to reservoir pool elevation and tailwater elevation, plus equipment loss. The
head-discharge curve must be defined as an elevation-discharge function, although it actually represents
head rather than elevation. If the pump is determined to be active, the pump discharge is determined by
looking up and interpolating the flow value based on the total head value.
The pump is set to turn on at a specified on-trigger elevation and remain on until the pool has dropped below a specified off-trigger elevation, at which point it turns off. In addition, it is possible to constrain the pump such that when it is triggered to turn on, it must stay on for a minimum run time.
The pump operation can also be constrained to stay on or off for a minimum length of time by setting a minimum run time or minimum rest time. If a minimum run time is used, once the pump turns on it must remain on for the specified minimum run time even if the reservoir pool elevation drops below the trigger elevation to turn the pump off. The only exception is if the pool elevation drops below the intake elevation; in that case the pump will shut off even though the minimum run time is not satisfied. HEC-HMS therefore checks whether the headwater or tailwater elevation is above the intake.
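A minimal sketch of the on/off trigger logic with hysteresis, omitting the minimum run and rest times and the equipment-loss term in the total head; the pump curve and elevations are hypothetical:

import numpy as np

def pump_discharge(pool_elev, tw_elev, on_elev, off_elev, intake_elev,
                   head_curve, was_on):
    """Head-discharge pump with simple on/off trigger logic (illustrative only).

    head_curve : (heads, flows) describing pump capacity versus total head
    was_on     : pump state at the previous time step (hysteresis between the
                 on-trigger and off-trigger elevations)
    """
    # Decide the pump state: turn on above the on-trigger, stay on until the
    # pool drops below the off-trigger, and always stop below the intake.
    if pool_elev <= intake_elev:
        is_on = False
    elif pool_elev >= on_elev:
        is_on = True
    elif pool_elev < off_elev:
        is_on = False
    else:
        is_on = was_on
    if not is_on:
        return 0.0, False

    total_head = tw_elev - pool_elev            # static lift from pool to tailwater
    heads, flows = head_curve
    return float(np.interp(max(total_head, 0.0), heads, flows)), True

# Hypothetical pump curve: 0 m head -> 3 m3/s, 6 m head -> 0 m3/s.
curve = ([0.0, 2.0, 4.0, 6.0], [3.0, 2.4, 1.2, 0.0])
print(pump_discharge(pool_elev=101.5, tw_elev=104.0, on_elev=101.0,
                     off_elev=100.2, intake_elev=99.0, head_curve=curve, was_on=False))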
Spillways
Spillways typically represent structures at the top of the dam that allow water to go over the dam top in an
uncontrolled manner. Up to ten independent spillways can be included in the reservoir. There are three
different methods for computing outflow through a spillway: Broad-Crested, Ogee, and User Specified. The
broad-crested and ogee methods may optionally include gates. If no gates are selected, then flow over the
spillway is unrestricted. When gates are included, the flow over the spillway will be controlled by the gates.
Up to ten independent gates may be included on a spillway. The spillway may release to the main channel or
an auxiliary location.
Flow over the spillway crest is computed with the weir equation:

$Q = C L H^{3/2}$

where:
Q = Discharge over the weir or spillway crest.
C = Discharge coefficient, accounts for energy losses as water enters the spillway, flows through the
spillway, and eventually exits the spillway. Typical values will range from 2.6 to 4.0 depending upon the
shape of the spillway crest.
L = Length of the spillway crest.
H = Upstream energy head above the spillway crest.
HEC-HMS includes a secondary calculation to determine whether tailwater conditions alter the computed
discharge, and the program adjusts accordingly. Details on the application of this equation can be found in
the HEC-RAS Technical Reference Manual. HEC-HMS uses the same approach, with some simplifications.
Broad-Crested Spillway
The broad-crested spillway allows for uncontrolled flow over the top of the reservoir according to the weir
flow assumptions.
The discharge coefficient C accounts for energy losses as water enters the spillway, flows through the
spillway, and eventually exits the spillway. Depending on the exact shape of the spillway, typical values range from 1.10 to 1.66 in System International units (2.0 to 3.0 in US Customary units); the HEC-RAS manual cites 2.6 to 3.1 in US Customary units.
For each time step within the simulation, the head is estimated using the user-specified crest elevation of the
spillway and the reservoir pool elevation.
Ogee Spillway
The ogee spillway allows for uncontrolled flow over the top of the reservoir according to the weir flow
assumptions. However, the discharge coefficient in the weir flow equation is automatically adjusted when
the upstream energy head is above or below the design head.
The ogee spillway may be specified with concrete or earthen abutments. These abutments should be the
dominant material at the sides of the spillway above the crest. The selected material is used to adjust energy
loss as water passes through the spillway. The spillway can be conceptually represented using one, two, or
no abutments.
The ogee spillway is assumed to have an approach channel that moves water from the main reservoir to the
spillway. If there is such an approach channel, you must specify the depth of the channel, and the energy loss
that occurs between the main reservoir and the spillway. If there is no approach channel, the depth should be
the difference between the spillway crest and the bottom of the reservoir, and the loss should be zero.
The crest elevation and length of the spillway are needed, as are the apron elevation and width.
• gate coefficient
• gate width
• trunnion height, raised to the trunnion exponent
• head, raised to the head exponent
Specified Spillway
The user-specified spillway can be used to represent spillways with flow characteristics that cannot be
represented by the broad-crested or ogee weir assumptions. The user must create an elevation-discharge
curve (Figure X) that represents the spillway discharge as a function of reservoir pool elevation, and HEC-
HMS determines discharge through a basic table lookup approach. At this time there is no ability to include
submergence effects on the specified spillway discharge. Therefore the user-specified spillway method
should only be used for reservoirs where the downstream tailwater stage cannot affect the discharge over
the spillway.
Sluice Gate
A sluice gate moves up and down in a vertical plane above the spillway in order to control flow. The water
passes under the gate as it moves over the spillway. For this reason it is also called a vertical gate or
underflow gate.
The width of the sluice gate must be specified. It should be specified as the total width of an individual gate.
The gate coefficient describes the energy losses as water passes under the gate. Typical values are between
0.5 and 0.7 depending on the exact geometry and configuration of the gate.
The orifice coefficient describes the energy losses as water passes under the gate and the tailwater of the
gate is sufficiently submerged. A typical value for the coefficient is 0.8.
The HEC-RAS Hydraulic Reference Manual describes sluice gate flow calculations as follows:
An example sluice gate with a broad crest is shown in the figure below.
where: H = Upstream energy head above the spillway crest (ZU - Zsp)
C= Coefficient of discharge, typically 0.5 to 0.7
When the downstream tailwater increases to the point at which the gate is no longer flowing freely
(downstream submergence is causing a greater upstream headwater for a given flow), the program switches
to the following form of the equation:
Where: H= ZU - ZD
Submergence begins to occur when the tailwater depth above the spillway divided by the headwater energy
above the spillway is greater than 0.67. Equation 8-5 is used to transition between free flow and fully
submerged flow. This transition is set up so the program will gradually change to the fully submerged Orifice
equation (Equation x) when the gates reach a submergence of 0.80.
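A minimal sketch of the submergence classification implied by the thresholds above (0.67 begins submergence and 0.80 switches to the fully submerged orifice equation); the elevations are hypothetical:

def gate_flow_regime(z_u, z_d, z_sp):
    """Classify the gated spillway flow regime from the submergence ratio.

    z_u  : upstream energy grade line elevation
    z_d  : downstream water surface elevation
    z_sp : spillway crest elevation through the gate
    """
    if z_d <= z_sp:
        return "free flow"
    submergence = (z_d - z_sp) / (z_u - z_sp)   # tailwater depth / headwater energy
    if submergence <= 0.67:
        return "free flow"
    if submergence < 0.80:
        return "transition between free and fully submerged flow"
    return "fully submerged orifice flow"

# Example: crest at 100, upstream energy at 110, tailwater at 108 (submergence = 0.8).
print(gate_flow_regime(z_u=110.0, z_d=108.0, z_sp=100.0))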
Radial Gate
A radial gate rotates above the spillway with water passing under the gate as it moves over the spillway. This
type of gate is also known as a tainter gate.
The width of the radial gate must be specified. It should be specified as the total width of an individual gate.
The gate coefficient describes the energy losses as water passes under the gate. Typical values are between
0.5 and 0.7 depending on the exact geometry and configuration of the gate.
The orifice coefficient describes the energy losses as water passes under the gate and the tailwater of the
gate is sufficiently submerged. A typical value for the coefficient is 0.8. The pivot point for the radial gate is
known as the trunnion. The height of the trunnion above the spillway must be entered. The trunnion exponent
is part of the specification of the geometry of the radial gate. A typical value is 0.16. The gate opening
exponent is used in the calculation of flow under the gate. A typical value is 0.72. The head exponent is used
in computing the total head on the radial gate. A typical value is 0.62.
The HEC-RAS Hydraulic Reference Manual describes radial gate calculations as follows:
An example radial gate with an ogee spillway crest is shown in Figure X (reproduced from the HEC-RAS Hydraulic Reference Manual).
where:
Q= Flow rate in cfs
C= Discharge coefficient (typically ranges from 0.6 - 0.8)
W= Width of the gated spillway in feet
T= Trunnion height (from spillway crest to trunnion pivot point)
TE= Trunnion height exponent, typically about 0.16 (default 0.0)
B= Height of gate opening in feet
BE= Gate opening exponent, typically about 0.72 (default 1.0)
H= Upstream energy head above the spillway crest ZU - Zsp
HE= Head exponent, typically about 0.62 (default 0.5)
ZU= Elevation of the upstream energy grade line
ZD= Elevation of the downstream water surface
Zsp= Elevation of the spillway crest through the gate
When the downstream tailwater increases to the point at which the gate is no longer flowing freely
(downstream submergence is causing a greater upstream headwater for a given flow), the program switches
to the following form of the equation:
where: H=ZU - ZD
Submergence begins to occur when the tailwater depth divided by the headwater energy depth above the
spillway, is greater than 0.67. Equation 8-2 is used to transition between free flow and fully submerged flow.
This transition is set up so the program will gradually change to the fully submerged Orifice equation when
the gates reach a submergence of 0.80. The fully submerged Orifice equation is shown below:
where:
A= Area of the gate opening.
H= ZU - ZD
C= Discharge coefficient (typically 0.8)
The sluice gate always uses a head exponent of 0.5, and the radial gate appears to use the same value. Future versions are planned to allow the user to specify the gate opening, along with a new outlet type that allows gates (currently there is no orifice or culvert gate).
Table 1.Storage Method and Initial Condition Options for each Routing Method
Elevation-Area Elevation
Specified Release
Elevation-Storage Elevation, Storage
SI Qout
Calibration
Calibrating a hydrologic model is the process of modifying parameters within acceptable ranges to obtain
simulated results that replicate known conditions. Model calibration is necessary to provide some level of
confidence that the simulated results adequately represent the modeled system. To calibrate a model,
observed data, typically flow discharges and stages collected in the field, are compared to the simulated
results. This chapter introduces the concept of calibration, describes the procedure used in manual
calibration and discusses some of the summary statistics used to evaluate the calibrated model. Automated
calibration procedures are discussed in the Optimization (see page 329) chapter.
What is Calibration?
Each model that is included in the program has parameters. The value of each parameter must be specified
to use the model for estimating runoff or routing hydrographs. Earlier chapters identified the parameters and
described how they could be estimated from various watershed and channel properties. For example, the
kinematic-wave direct runoff model (see page 180) has a parameter N that represents overland roughness. This
parameter can be estimated from knowledge of watershed land use.
However, as noted in the Primer on Models (see page 8) chapter, some of the included models have
parameters that cannot be estimated by observation or measurement of channel or watershed
characteristics. The parameter Cp in the Snyder unit hydrograph model is an example of a parameter with no
direct physical meaning. Likewise, the parameter X in the Muskingum routing model (see page 210) cannot be
measured; it is simply a weight that indicates the relative importance of upstream and downstream flow in
computing the storage in a channel reach.
How then can the appropriate values for the parameters be selected? If rainfall and streamflow observations
are available, calibration is the answer. Calibration uses observed hydrometeorological data in a systematic
search for parameters that yield the best fit of the computed results to the observed runoff. This search is
often referred to as optimization.
Rainfall and runoff observations must be from the same storm. The runoff time series should represent
all runoff due to the selected rainfall time series.
The rainfall data must provide adequate spatial coverage of the watershed, as these data will be used
with the methods described in this manual to compute Mean Areal Precipitation (MAP) for the storm.
The volume of the runoff hydrograph should approximately equal the volume of the rainfall hyetograph. If
the runoff volume is slightly less, water is being lost to infiltration, as expected. But if the runoff volume is
significantly less, this may indicate that flow is stored in natural or engineered ponds, or that water is
diverted out of the stream. Similarly, if the runoff volume is slightly greater, baseflow is contributing to the
total flow, as expected. However, if the runoff volume is much greater, this may indicate that flow is
entering the system from other sources, or that the rainfall was not measured accurately.
The duration of the rainfall should exceed the time of concentration of the watershed to ensure that the
entire watershed upstream of the concentration point is contributing to the observed runoff.
The size of the storm selected for calibration should approximately equal the size of the storm the
calibrated model is intended to analyze. For example, if the goal is to predict runoff from a 1%-chance 24-
hour storm of depth 7 inches, data from a storm of duration approximately 24 hours and depth
approximately 7 inches should be used for calibration.
The upstream and downstream hydrograph time series must represent flow for the same period of time.
The duration of the downstream hydrograph should be sufficiently long so that the total volume
represented equals the volume of the upstream hydrograph.
The size of the event selected for calibration should approximately equal the size of the event the calibrated model is intended to analyze. For example, if the study requires prediction of downstream flows for an event with depths of 20 feet in a channel, historical data for an event of similar depth should be used for calibration.
The next step is to select initial estimates of the parameters. As with any search, the better these initial
estimates (the starting point of the search), the quicker the search will yield a solution. Tips for parameter
estimation found in previous sections may be useful here.
Given these initial estimates of the parameters, the models included in the program can be used with the
observed boundary conditions (rainfall or upstream flow and meteorological conditions) to compute the
output, either the watershed runoff hydrograph or a channel outflow hydrograph. At this point, the program
compares the computed hydrograph to the observed hydrograph. For example, it computes the hydrograph
represented with the dashed line in the Figure below and compares it to the observed hydrograph
represented with the solid line. Visual inspection is an important first step - the modeler should try to
reproduce observed peaks, timing, and volume. The goal of this comparison is to judge how well the model
"fits" the real hydrologic system.
The objective of calibration is to minimize the difference between simulated values and observed
(measured) values. Statistical methods are used to quantify how simulated values compare to observed
values. Statistical indices used to evaluate how well the model fits the observed data are discussed in the
Calibration Summary Statistics (see page 325) section. In manual calibration, the modeler iteratively adjusts the parameters, reruns the simulation, and compares the computed and observed hydrographs until an acceptable fit is obtained.
You can follow a step-by-step calibration process in several tutorials available here: Ideas and Workflows for Calibrating HEC-HMS Models137
137 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Ideas+and+Workflows+for+Calibrating+HEC-HMS+Models
139 https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/1998WR900018
Modified Kling Gupta Efficiency (MKGE)

143) $MKGE = 1 - \sqrt{(r - 1)^2 + (\beta - 1)^2 + (\gamma - 1)^2}$

• Multi-objective alternative to mean squared error and Nash-Sutcliffe Efficiency (NSE)
• Can be decomposed into three terms: (1) correlation $r$, (2) bias $\beta$, and (3) variability term $\gamma$

144) $\beta = \frac{\mu_s}{\mu_o}$

• The value of MKGE gives the lower limit of the three components ($r$, $\beta$, $\gamma$)

145) $\gamma = \frac{\sigma_s / \mu_s}{\sigma_o / \mu_o}$

Variables:
• $q(i)$ = ith observation
• $\sigma$ = standard deviation; $\mu$ = mean
• The indices $s$ and $o$ represent simulated and observed runoff values, respectively.
HEC-HMS also reports observed and computed maximum flow, time of peak and total volume. These
measures are also useful in the calibration process.
As a reminder, the following basic statistical measures are useful for this discussion:
• Residual variance = sum of squared differences between the observed and simulated values = $\sum_{i=1}^{n} \left( q_o(i) - q_s(i) \right)^2$
• Measured data variance = sum of squared differences between the individual observed values and the mean of the observed values = $\sum_{i=1}^{n} \left( q_o(i) - \mu_o \right)^2$
• Standard deviation ($\sigma$) is the square root of the variance
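A minimal sketch that evaluates these measures, along with an NSE and a modified KGE of the form summarized above, for a pair of short hydrographs (illustrative only; HEC-HMS reports its own set of statistics):

import numpy as np

def calibration_stats(q_obs, q_sim):
    """Basic goodness-of-fit measures for comparing computed and observed flows."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    residual_var = np.sum((q_obs - q_sim) ** 2)
    observed_var = np.sum((q_obs - q_obs.mean()) ** 2)
    rmse = np.sqrt(residual_var / len(q_obs))
    nse = 1.0 - residual_var / observed_var
    # Modified KGE after Kling et al. (2012): correlation, bias ratio, and
    # variability ratio based on coefficients of variation.
    r = np.corrcoef(q_obs, q_sim)[0, 1]
    beta = q_sim.mean() / q_obs.mean()
    gamma = (q_sim.std() / q_sim.mean()) / (q_obs.std() / q_obs.mean())
    mkge = 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
    return residual_var, observed_var, rmse, nse, mkge

obs = [10.0, 30.0, 80.0, 60.0, 25.0, 12.0]
sim = [12.0, 28.0, 75.0, 65.0, 20.0, 10.0]
print(calibration_stats(obs, sim))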
Calibration References
Diskin, M.H. and Simon, E. (1977). "A procedure for the selection of objective functions for hydrologic
simulation models." Journal of Hydrology, 34, 129-149.
Kling, H., Fuchs, M., and Paulin, M. (2012). "Runoff conditions in the upper Danube basin under an ensemble
of climate change scenarios." Journal of Hydrology. 424–425, 264–277.
Legates, D. R. and McCabe, G. J. (1999). "Evaluating the Use of 'Goodness-of-Fit' Measures in Hydrologic and Hydroclimatic Model Validation." Water Resources Research, 35, 233-241. 10.1029/1998WR900018.
Moriasi, D. N., Arnold, J. G., Van Liew, M. W., Bingner, R. L., Harmel, R. D., and Veith, T. L. (2007). "Model Evaluation Guidelines for Systematic Quantification of Accuracy in Watershed Simulations." Transactions of the ASABE, 50(3), 885-900. 10.13031/2013.23153.
Moriasi, D. N., Gitau, M., Pai, N., and Daggupati, P. (2015). "Hydrologic and Water Quality Models: Performance Measures and Evaluation Criteria." Transactions of the ASABE, 58, 1763-1785. 10.13031/trans.58.10715.
Stephenson, D. (1979). "Direct optimization of Muskingum routing coefficients." Journal of Hydrology, 41,
161-165.
140 https://web.ics.purdue.edu/~mgitau/pdf/Moriasi%20et%20al%202015.pdf
Optimization
Search Methods
As noted earlier, the goal of calibration is to identify reasonable parameters that yield the best fit of
computed to observed hydrograph, as measured by one of the objective functions. This corresponds
mathematically to searching for the parameters that minimize the value of the objective function.
As shown in Figure 45, the search is a trial-and-error search. Trial parameters are selected, the models are
exercised, and the error is computed. If the error is unacceptable, the program changes the trial parameters
and reiterates. Decisions about the changes rely on the univariate gradient search algorithm or the Nelder
and Mead simplex search algorithm.
Univariate-Gradient Algorithm
The univariate-gradient search algorithm makes successive corrections to the parameter estimate. That is, if $x^k$ represents the parameter estimate with objective function $f(x^k)$ at iteration $k$, the search defines a new estimate $x^{k+1}$ at iteration $k+1$ as:

$x^{k+1} = x^k + \Delta x^k$

in which $\Delta x^k$ = the correction to the parameter. The goal of the search is to select $\Delta x^k$ so the estimates move toward the parameter that yields the minimum value of the objective function. One correction does not, in general, reach the minimum value, so this equation is applied recursively.

The gradient method, as used in the program, is based upon Newton's method. Newton's method uses the following strategy to define $\Delta x^k$:

$\Delta x^k = - \frac{f'(x^k)}{f''(x^k)}$

in which $f(x^k)$ = the objective function at iteration $k$; and $f'(x^k)$ and $f''(x^k)$ = the first and second derivatives of the objective function, respectively.
If more than a single parameter is to be found via calibration, this procedure is applied successively to each parameter, holding all others constant. For example, if Snyder's Cp and tp are sought, Cp is adjusted while holding tp at the initial estimate. Then, the algorithm will adjust tp, holding Cp at its new, adjusted value. This
successive adjustment is repeated four times. Then, the algorithm evaluates the last adjustment for all
parameters to identify the parameter for which the adjustment yielded the greatest reduction in the objective
function. That parameter is adjusted, using the procedure defined here. This process continues until
additional adjustments will not decrease the objective function by at least 1%.
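A minimal sketch of a Newton-style univariate search using finite-difference derivatives; the safeguards HEC-HMS applies when the second derivative is not positive are omitted, and the step size and convergence tolerance are assumptions:

def newton_univariate(objective, x0, dx=1e-3, tol=0.01, max_iter=50):
    """Univariate Newton-style search for the parameter value minimizing an
    objective function (illustrative only)."""
    x = x0
    for _ in range(max_iter):
        f0, fp, fm = objective(x), objective(x + dx), objective(x - dx)
        d1 = (fp - fm) / (2.0 * dx)            # first derivative
        d2 = (fp - 2.0 * f0 + fm) / dx ** 2    # second derivative
        if d2 <= 0.0:
            break                              # Newton correction not applicable
        step = -d1 / d2                        # Newton correction to the parameter
        x += step
        if abs(step) < tol * max(abs(x), 1.0):
            break
    return x

# Example: minimize a simple quadratic objective with a minimum at Cp = 0.6.
print(newton_univariate(lambda cp: (cp - 0.6) ** 2, x0=0.3))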
• Comparison. The first step in the evolution is to find the vertex of the simplex that yields the worst
(greatest) value of the objective function and the vertex that yields the best (least) value of the
objective function. In Figure 50, these are labeled W and B, respectively.
• Reflection. The next step is to find the centroid of all vertices, excluding vertex W; this centroid is labeled C in Figure 50. The algorithm then defines a line from W, through the centroid, and reflects a distance WC along the line to define a new vertex R, as illustrated in Figure 50.
• Expansion. If the parameter set represented by vertex R is better than, or as good as, the best vertex,
the algorithm further expands the simplex in the same direction, as illustrated in Figure 51. This
defines an expanded vertex, labeled E in the figure. If the expanded vertex is better than the best, the
worst vertex of the simplex is replaced with the expanded vertex. If the expanded vertex is not better
than the best, the worst vertex is replaced with the reflected vertex.
• Contraction. If the reflected vertex is worse than the best vertex, but better than some other vertex
(excluding the worst), the simplex is contracted by replacing the worst vertex with the reflected
vertex. If the reflected vertex is not better than any other, excluding the worst, the simplex is
contracted. This is illustrated in Figure 52. To do so, the worst vertex is shifted along the line toward
the centroid. If the objective function for this contracted vertex is better, the worst vertex is replaced
with this vertex.
• Reduction. If the contracted vertex is not an improvement, the simplex is reduced by moving all
vertices toward the best vertex. This yields new vertices R1 and R2, as shown in Figure 53.
in which $n$ = number of parameters; $j$ = index of a vertex; $c$ = index of the centroid vertex; and $f(x_j)$ and $f(x_c)$ = objective function values for vertices $j$ and $c$, respectively.
For each parameter that violates a soft constraint, the objective function value is multiplied by the following factor:

$2 \left( \left| x_i - c_i \right| + 1 \right)$

in which $x_i$ = estimate of parameter $i$; $c_i$ = maximum or minimum value for parameter $i$; and $n$ = number of parameters. This "persuades" the search algorithm to select parameters that are nearer the soft-constraint range. For example, if the search for uniform loss rate leads to a value of 300 mm/hr when a 15 mm/hr soft constraint was specified, the objective function value would be multiplied by 2(300 - 15 + 1) = 572. Even if the fit was otherwise quite good, this penalty will cause either of the search algorithms to move away from this value and towards one that is nearer 15 mm/hr.
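A minimal sketch of the soft-constraint penalty, applying the 2(|xi - ci| + 1) multiplier only to parameters outside their soft range (an assumption consistent with the example above):

def penalized_objective(obj_value, params, soft_min, soft_max):
    """Multiply the objective function by the soft-constraint penalty factor."""
    penalty = 1.0
    for x, lo, hi in zip(params, soft_min, soft_max):
        if x < lo:
            penalty *= 2.0 * (abs(x - lo) + 1.0)
        elif x > hi:
            penalty *= 2.0 * (abs(x - hi) + 1.0)
    return obj_value * penalty

# Example from the text: a uniform loss rate of 300 mm/hr against a 15 mm/hr
# soft constraint multiplies the objective by 2(300 - 15 + 1) = 572.
print(penalized_objective(1.0, params=[300.0], soft_min=[0.0], soft_max=[15.0]))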
Table 29. Calibration parameter constraints.
Parameter                Minimum    Maximum
Cp                       0.1        1.0
Baseflow Manning's n     0          1
X                        0          0.5
Goodness-of-Fit Indices
In optimization, the algorithms included in the program search for the model parameters that yield the best value of an index, also known as an objective function. Only one of the 11 objective functions included in the program can be used at a time, depending upon the needs of the analysis. The goal in every case is to find reasonable parameters that yield the minimum value of the objective function. The objective function choices for optimization are shown in the Table below. They are:
• Sum of absolute errors. This objective function compares each ordinate of the computed hydrograph with the observed, weighting each equally. The index of comparison, in this case, is the difference in the ordinates. However, as differences may be positive or negative, a simple sum would allow positive and negative differences to cancel one another; the absolute values of the differences are therefore summed, so this function implicitly measures the fit of the magnitudes of the peaks, volumes, and times of peak of the two hydrographs.
• Sum of squared residuals. This is a commonly-used objective function for model calibration. It too
compares all ordinates, but uses the squared differences as the measure of fit. Thus a difference of
10 m3/sec "scores" 100 times worse than a difference of 1 m3/sec. Squaring the differences also
treats overestimates and underestimates as undesirable. This function too is implicitly a measure of
the comparison of the magnitudes of the peaks, volumes, and times of peak of the two hydrographs.
• Percent error in peak. This measures only the goodness-of-fit of the computed-hydrograph peak to
the observed peak. It quantifies the fit as the absolute value of the difference, expressed as a
percentage, thus treating overestimates and underestimates as equally undesirable. It does not
reflect errors in volume or peak timing. This objective function is a logical choice if the information
needed for designing or planning is limited to peak flow or peak stages. This might be the case for a
floodplain management study that seeks to limit development in areas subject to inundation, with
flow and stage uniquely related.
• Peak-weighted root mean square error. This function is identical to the calibration objective function
included in computer program HEC-1 (USACE, 1998). It compares all ordinates, squaring differences,
and it weights the squared differences. The weight assigned to each ordinate is proportional to the
magnitude of the ordinate. Ordinates greater than the mean of the observed hydrograph are assigned
a weight greater than 1.00, and those smaller, a weight less than 1.00. The peak observed ordinate is
assigned the maximum weight. The sum of the weighted, squared differences is divided by the
number of computed hydrograph ordinates; thus, yielding the mean squared error. Taking the square
root yields the root mean squared error. This function is an implicit measure of comparison of the
magnitudes of the peaks, volumes, and times of peak of the two hydrographs.
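A minimal sketch of these four objective functions; the peak weighting shown uses a weight of (qo + mean)/(2 * mean), which matches the behavior described above but may differ in detail from the HEC-1/HEC-HMS coding:

import numpy as np

def objective_functions(q_obs, q_sim):
    """Illustrative versions of the four objective functions described above."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    n = len(q_obs)
    sum_abs_err = np.sum(np.abs(q_obs - q_sim))
    sum_sq_resid = np.sum((q_obs - q_sim) ** 2)
    pct_err_peak = 100.0 * abs(q_sim.max() - q_obs.max()) / q_obs.max()
    # Peak-weighted RMSE: ordinates above the observed mean get weights > 1,
    # ordinates below the mean get weights < 1, and the peak gets the maximum.
    weights = (q_obs + q_obs.mean()) / (2.0 * q_obs.mean())
    pw_rmse = np.sqrt(np.sum(weights * (q_obs - q_sim) ** 2) / n)
    return sum_abs_err, sum_sq_resid, pct_err_peak, pw_rmse

obs = [5.0, 20.0, 60.0, 45.0, 15.0, 6.0]
sim = [6.0, 18.0, 55.0, 48.0, 12.0, 5.0]
print(objective_functions(obs, sim))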
Table. Objective functions for optimization.
Criterion Equation
Ensemble Modeling
Basic Concepts
In general, an ensemble model is a collection of multiple, diverse models that are created to predict a similar outcome. An ensemble model can be composed of anywhere from two to many base models, which are often referred to as ensemble members. There are two key components involved in ensemble modeling: 1) the selection of the
ensemble members and 2) the aggregation of the ensemble member predictions (individual model results or
traces) into an ensemble model prediction. In the below figure, an ensemble model comprised of four
ensemble members is depicted. In this example, the ensemble member predictions are aggregated into an
ensemble prediction by a simple, majority vote.
One of the main reasons to use ensemble modeling is to reduce uncertainty in the modeled predictions. As a
general rule of thumb, ensemble models tend to outperform single-algorithm models. The ability to see
multiple ensemble member traces each of which represent a viable, possible outcome is a powerful
modeling feature. It allows modelers and decision-makers the ability to analyze and act on a range of
possible outcomes as opposed to only seeing one possible outcome with a potentially large amount of
uncertainty.
When selecting ensemble members, each member should be performant. Each should be designed and
equipped to accurately model the intended outcome/goal of the overall ensemble model. For example, if the
study goal is to model the 1% AEP (annual exceedance probability) rainfall-runoff event at a given location,
individual models that were only designed/calibrated to predict relatively common, low-flow events should
not be included as ensemble members. Furthermore, each ensemble member should be independent. It is
most desirable to include ensemble members that are constructed in fundamentally different ways and/or
that make different assumptions; independent members lead to less correlated prediction errors.
There are multiple ways to aggregate individual ensemble member results into an overall ensemble model
prediction. One of the simplest and most commonly used methods is to average the ensemble member
results treating each result equally. Weighted average techniques can also be applied by assigning weights
to favor the strongest ensemble members and to disfavor the weakest. There are many advanced statistical
techniques that can be used for bias correction with observed data, each having the same end goal of
leveraging the ensemble members that perform best historically.
Ensemble member results can be viewed and aggregated for output time series such as:
• Incremental Precipitation
• Cumulative Precipitation
• Outflow
• Cumulative Outflow
• Reservoir Elevation
• Moisture Deficit
• Air Temperature
• Sediment Load (total amounts and individual amounts for clay, silt, sand, gravel, cobble and boulder)
• Sediment Volume (total amounts and individual amounts for clay, silt, sand, gravel, cobble and
boulder).
The current ensemble model aggregation options in HEC-HMS include the following:
• Mean
• Maximum
• Minimum.
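A minimal sketch of aggregating ensemble member traces with the options listed above, plus a weighted average with hypothetical weights:

import numpy as np

# Each row is one ensemble member's simulated time series (e.g., flow in m3/s).
members = np.array([
    [10.0, 24.0, 55.0, 41.0, 20.0],
    [ 9.0, 30.0, 62.0, 38.0, 18.0],
    [11.0, 22.0, 48.0, 45.0, 22.0],
    [10.0, 27.0, 58.0, 40.0, 19.0],
])

# Aggregation options analogous to those listed above, plus a weighted average
# (the weights here are hypothetical and would normally reflect member skill).
mean_trace = members.mean(axis=0)
max_trace = members.max(axis=0)
min_trace = members.min(axis=0)
weights = np.array([0.4, 0.3, 0.2, 0.1])
weighted_trace = weights @ members

print(mean_trace, max_trace, min_trace, weighted_trace, sep="\n")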
Ensemble Viewer
Additional aggregation options and visualization capabilities are present within the Ensemble
Viewer which is accessible in HEC-HMS via the Tools dropdown on the main menu bar.
The below figure represents an ensemble analysis results plot of SWE (snow water equivalent). This plot highlights how SWE could
potentially vary based on the precipitation dataset that is applied as a boundary condition. Even though the
precipitation datasets model the same event period, they can yield noticeably different results. The individual
ensemble member results for SWE are shown in the colored lines whereas the aggregated ensemble model
predictions are shown in the black, dotted lines.
There are several tutorials and guides available for the Ensemble Analysis that highlight different
applications such as flood forecasting and climate modeling. They can be found here: Ensemble
Analysis Simulations in HEC-HMS141.
CN Tables
The four pages in this section are reproduced from the SCS (now NRCS) report Urban hydrology for small
watersheds. This report is commonly known as TR-55. The tables provide estimates of the curve number
(CN) as a function of hydrologic soil group (HSG), cover type, treatment, hydrologic condition, antecedent
runoff condition (ARC), and impervious area in the catchment.
TR-55 provides the following guidance for use of these tables:
• Soils are classified into four HSG's (A, B, C, and D) according to their minimum infiltration rate, which
is obtained for bare soil after prolonged wetting. Appendix A [of TR-55] defines the four groups and
provides a list of most of the soils in the United States and their group classification. The soils in the
area of interest may be identified from a soil survey report, which can be obtained from local SCS
offices or soil and water conservation district offices.
• There are a number of methods for determining cover type. The most common are field
reconnaissance, aerial photographs, and land use maps.
• Treatment is a cover type modifier (used only in Table 2-2b) to describe the management of
cultivated agricultural lands. It includes mechanical practices, such as contouring and terracing, and
management practices, such as crop rotations and reduced or no tillage.
141 https://www.hec.usace.army.mil/confluence/hmsdocs/hmsguides/ensemble-analysis-simulations-in-hec-hms
• The index of runoff potential before a storm event is the antecedent runoff condition (ARC). The CN
for the average ARC at a site is the median value as taken from sample rainfall and runoff data. The
curve numbers in table 2-2 are for the average ARC, which is used primarily for design applications.
• The percentage of impervious area and the means of conveying runoff from impervious areas to the
drainage systems should be considered in computing CN for urban areas. An impervious area is
considered connected if runoff from it flows directly into the drainage system. It is also considered
connected if runoff from it occurs as concentrated shallow flow that runs over a pervious area and then
into a drainage system. Runoff from unconnected impervious areas is spread over a pervious area as
sheet flow.
SCS TR-55 Table 2-2a – Runoff curve numbers for urban areas1
Cover description | Average percent impervious area2 | A | B | C | D
Urban districts:
Commercial and business | 85 | 89 | 92 | 94 | 95
Industrial | 72 | 81 | 88 | 91 | 93
Residential districts by average lot size:
1/4 acre | 38 | 61 | 75 | 83 | 87
1/3 acre | 30 | 57 | 72 | 81 | 86
1/2 acre | 25 | 54 | 70 | 80 | 85
1 acre | 20 | 51 | 68 | 79 | 84
2 acres | 12 | 46 | 65 | 77 | 82
1 Average runoff condition, and Ia = 0.2S.
2 The average percent impervious area shown was used to develop the composite CN's. Other assumptions are as follows: impervious areas are directly connected to the drainage system, impervious areas have a CN of 98, and pervious areas are considered equivalent to open space in good hydrologic condition. CN's for other combinations of conditions may be computed using figure 2-3 or 2-4.
3 CN's shown are equivalent to those of pasture. Composite CN's may be computed for other combinations of open space cover type.
4 Composite CN's for natural desert landscaping should be computed using figures 2-3 or 2-4 based on the impervious area percentage (CN = 98) and the pervious area CN. The pervious area CN's are assumed equivalent to desert shrub in poor hydrologic condition.
5 Composite CN's to use for the design of temporary measures during grading and construction should be computed using figure 2-3 or 2-4, based on the degree of development (impervious area percentage) and the CN's for the newly graded pervious areas.
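As a worked illustration of the composite CN computation referenced in footnote 2 (TR-55 figure 2-3 for directly connected impervious area), the relationship can be written as

CNc = CNp + (Pimp / 100) × (98 − CNp)

where CNp is the pervious-area CN and Pimp is the percent impervious area. For the 1/4-acre residential row on hydrologic soil group B (38 percent impervious), and assuming the TR-55 open space, good-condition CN of 61 for the pervious area, CNc = 61 + 0.38 × (98 − 61) ≈ 75, which matches the tabulated composite value.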
SCS TR-55 Table 2-2b – Runoff curve numbers for cultivated agricultural lands1
Treatment and hydrologic condition | A | B | C | D
Fallow:
Good | 74 | 83 | 88 | 90
Row crops:
Good | 67 | 78 | 85 | 89
SR + CR, Poor | 71 | 80 | 87 | 90
Good | 64 | 75 | 82 | 85
Good | 65 | 75 | 82 | 86
C + CR, Poor | 69 | 78 | 83 | 87
Good | 64 | 74 | 81 | 85
Good | 62 | 71 | 78 | 81
C & T + CR, Poor | 65 | 73 | 79 | 81
Good | 61 | 70 | 77 | 80
Small grain:
Good | 63 | 75 | 83 | 87
SR + CR, Poor | 64 | 75 | 83 | 86
Good | 60 | 72 | 80 | 84
C, Poor | 63 | 74 | 82 | 85
Good | 61 | 73 | 81 | 84
C + CR, Poor | 62 | 73 | 81 | 84
Good | 60 | 72 | 80 | 83
C & T, Poor | 61 | 72 | 79 | 82
C & T + CR, Poor | 60 | 71 | 78 | 81
Good | 58 | 69 | 77 | 80
Close-seeded or broadcast legumes or rotation meadow:
SR, Poor | 66 | 77 | 85 | 89
Good | 58 | 72 | 81 | 85
C, Poor | 64 | 75 | 83 | 85
Good | 55 | 69 | 78 | 83
C & T, Poor | 63 | 73 | 80 | 83
Good | 51 | 67 | 76 | 80
1 Average runoff condition, and Ia = 0.2S.
2 Crop residue cover applies only if residue is on at least 5% of the surface throughout the year.
3 Hydrologic condition is based on combination of factors that affect infiltration and runoff, including (a) density and canopy of vegetative areas, (b) amount of year-round cover, (c) amount of grass or close-seeded legumes in rotations, (d) percent of residue cover on the land surface (good ≥ 20%), and (e) degree of surface roughness.
Poor: Factors impair infiltration and tend to increase runoff.
Good: Factors encourage average and better than average infiltration and tend to decrease runoff.
SCS TR-55 Table 2-2c – Runoff curve numbers for other agricultural lands1
Cover type | Hydrologic condition | A | B | C | D
Pasture, grassland, or range – continuous forage for grazing² | Poor | 68 | 79 | 86 | 89
Meadow – continuous grass, protected from grazing and generally mowed for hay | – | 30 | 58 | 71 | 78
Brush – brush-weed-grass mixture, with brush the major element³ | Good | 30⁴ | 48 | 65 | 73
Woods–grass combination (orchard or tree farm)⁵ | Good | 32 | 58 | 72 | 79
Woods⁶ | Poor | 45 | 66 | 77 | 83
Woods⁶ | Fair | 36 | 60 | 73 | 79
Farmsteads – buildings, lanes, driveways, and surrounding lots | – | 59 | 74 | 82 | 86
1 Average runoff condition, and Ia = 0.2S.
2 Poor: <50% ground cover or heavily grazed with no mulch.
Fair: 50 to 75% ground cover and not heavily grazed.
Good: >75% ground cover and lightly or only occasionally grazed.
3 Poor: <50% ground cover.
Fair: 50 to 75% ground cover.
Good: >75% ground cover.
4 Actual curve number is less than 30; use CN=30 for runoff computations.
5 CN's shown were computed for areas with 50% woods and 50% grass (pasture) cover. Other combinations of conditions may be computed from the CN's for woods and pasture.
6 Poor: Forest litter, small trees, and brush are destroyed by heavy grazing or regular burning.
Fair: Woods are grazed but not burned, and some forest litter covers the soil.
Good: Woods are protected from grazing, and litter and brush adequately cover the soil.
SCS TR-55 Table 2-2d – Runoff curve numbers for arid and semiarid rangelands1
Cover type | Hydrologic condition | B | C | D
Herbaceous – mixture of grass, weeds, and low-growing brush, with brush the minor element | Good | 62 | 74 | 85
Pinyon-juniper – pinyon, juniper, or both; grass understory | Good | 41 | 61 | 71
Sagebrush with grass understory | Fair | 51 | 63 | 70
Sagebrush with grass understory | Good | 35 | 47 | 55
1 Average runoff condition, and Ia = 0.2S.
2 Poor: <30% ground cover (litter, grass, and brush overstory).
Fair: 30 to 70% ground cover.
Good: >70% ground cover.
3 Curve numbers for group A have been developed only for desert shrub.
Glossary
This glossary is a collection of definitions from throughout the Technical Reference Manual plus definitions
of other pertinent hydrology terms. Many of the definitions herein are from the electronic glossaries available
from U.S. Geological Survey142 and the Bureau of Reclamation143.
Additional terms commonly used within USACE Flood Risk Management studies can be found
here: Key USACE Flood Risk Management Terms144.
A
A14: NOAA Atlas 14. A multi-volume document produced by the NWS Hydrometeorological Design Studies
Center that contains precipitation-frequency estimates across the United States. Not all areas are covered by
a volume of A14, most notably the Northwest.
142 https://www.usgs.gov/special-topics/water-science-school/science/water-science-glossary
143 https://www.usbr.gov/library/glossary/
144 https://www.hec.usace.army.mil/publications/TrainingDocuments/TD-40.pdf
B
Backwater: Water backed up or retarded in its course as compared with its normal or natural condition of flow. In stream gaging, a rise in stage produced by a temporary obstruction such as ice or weeds, or by the flooding of the stream below. The difference between the observed stage and that indicated by the stage-discharge relation is reported as backwater.
Balanced Hyetograph: a storm temporal pattern in which any nested duration has the same frequency; e.g.
the 1-hour, 3-hour, 6-hour, 12-hour, and 24-hour totals all have 1% AEP.
Bank: The margins of a channel. Banks are called right or left as viewed facing in the direction of the flow.
Bank Storage: The water absorbed into the banks of a stream channel, when the stages rise above the water
table in the bank formations, then returns to the channel as effluent seepage when the stages fall below the
water table.
Bankfull Stage: Maximum stage of a stream before it overflows its banks. Bankfull stage is a hydraulic term,
whereas flood stage implies damage. See also flood stage.
Base Discharge: In the US Geological Survey's annual reports on surface-water supply, the discharge above
which peak discharge data are published. The base discharge at each station is selected so that an average
of about three peaks a year will be presented. See also partial-duration flood series.
Baseflow: The sustained or fair weather flow in a channel due to subsurface runoff. In most streams,
baseflow is composed largely of groundwater effluent. Also known as base runoff.
Basic Hydrologic Data: Includes inventories of features of land and water that vary spatially (topographic and
geologic maps are examples), and records of processes that vary with both place and time. Examples
include records of precipitation, streamflow, ground-water, and quality-of-water analyses. Basic hydrologic
information is a broader term that includes surveys of the water resources of particular areas and a study of
their physical and related economic processes, interrelations and mechanisms.
Basic-Stage Flood Series: See partial duration flood series.
Bifurcation: The point where a stream channel splits into two distinct channels.
Boundary Condition: Known or hypothetical conditions at the boundary of a problem that govern its solution.
For example, when solving a routing problem for a given reach, an upstream boundary condition is necessary
to determine conditions at the downstream boundary.
D
DAD: Depth-area-duration, an idealized way of representing the spatial-temporal pattern of a precipitation
event
DDF: Depth-duration-frequency, a generalization of precipitation-frequency analysis results
Dendritic: Channel pattern of streams with tributaries that branch to form a tree-like pattern.
Depression Storage: The volume of water contained in natural depressions in the land surface, such as
puddles.
Detention Basin: Storage, such as a small unregulated reservoir, which delays the conveyance of water
downstream.
Diffusion: Dissipation of the energy associated with a flood wave; results in the attenuation of the flood
wave.
Direct Runoff: The runoff entering stream channels promptly after rainfall or snowmelt. Superposed on base
runoff, it forms the bulk of the hydrograph of a flood. The terms base runoff and direct runoff are time
classifications of runoff. The terms groundwater runoff and surface runoff are classifications according to
source. See also surface runoff
Discharge: The volume of water that passes through a given cross-section per unit time; commonly
measured in cubic feet per second (cfs) or cubic meters per second (m3/s). Also referred to as flow.
In its simplest concept discharge means outflow; therefore, the use of this term is not restricted as to course
or location, and it can be applied to describe the flow of water from a pipe or from a drainage basin. If the
discharge occurs in some course or channel, it is correct to speak of the discharge of a canal or of a river. It
is also correct to speak of the discharge of a canal or stream into a lake, a stream, or an ocean.
Discharge data in US Geological Survey reports on surface water represent the total fluids measured. Thus,
the terms discharge, streamflow, and runoff represent water with sediment and dissolved solids. Of these
terms, discharge is the most comprehensive. The discharge of drainage basins is distinguished as follows:
• Streamflow. The actual flow in streams, whether or not subject to regulation, or underflow.
Each of these terms can be reported in total volumes or time rates. The differentiation between runoff as a
volume and streamflow as a rate is not accepted. See also streamflow and runoff.
Discharge Rating Curve: See stage discharge relation.
Distribution Graph: A unit hydrograph of direct runoff modified to show the proportions of the volume of
runoff that occurs during successive equal units of time.
Diversion: The taking of water from a stream or other body of water into a canal, pipe, or other conduit.
Drainage Area: The drainage area of a stream at a specified location is that area, measured in a horizontal
plane, which is enclosed by a drainage divide.
Drainage Divide: The rim of a drainage basin. See also watershed.
Duration Curve: See flow-duration curve for one type.
E
ET: See evapotranspiration.
Effective Precipitation: That part of the precipitation that produces runoff. Also, a weighted average of
current and antecedent precipitation that is "effective" in correlating with runoff.
ERL: equivalent record length. A measure of information content in a regional analysis based on counting
the number of independent storms in the regionally-pooled observations. Also referred to as "equivalent
independent record length (EIRL)".
Evaporation: The process by which water is changed from the liquid or the solid state into the vapor state. In
hydrology, evaporation is vaporization and sublimation that takes place at a temperature below the boiling
point. In a general sense, evaporation is often used interchangeably with evapotranspiration or ET. See also
total evaporation.
Evaporation Demand: The maximum potential evaporation generally determined using an evaporation pan.
For example, if there is sufficient water in the combination of canopy and surface storage, and in the soil
profile, the actual evaporation will equal the evaporation demand. A soil-water retention curve describes the
relationship between evaporation demand, and actual evaporation when the demand is greater than available
water. See also tension zone.
Evaporation Pan: An open tank used to contain water for measuring the amount of evaporation. The US
National Weather Service class A pan is 4 feet in diameter, 10 inches deep, set up on a timber grillage so that
the top rim is about 16 inches from the ground. The water level in the pan during the course of observation is
maintained between 2 and 3 inches below the rim.
Evapotranspiration: Water withdrawn from a land area by evaporation from water surfaces and moist soils
and plant transpiration.
Event-Based Model: A model that simulates some hydrologic response to a precipitation event. Compare
continuous model.
Exceedance Probability: Hydrologically, the probability that an event selected at random will exceed a
specified magnitude.
Excess Precipitation: The precipitation in excess of infiltration capacity, evaporation, transpiration, and other
losses. Also referred to as effective precipitation.
Excess Rainfall: The volume of rainfall available for direct runoff. It is equal to the total rainfall minus
interception, depression storage, and absorption.
G
Gaging Station: A particular site on a stream, canal, lake, or reservoir where systematic observations of gage
height or discharge are obtained. See also stream-gaging station.
H
Heterogeneous/heterogeneity: having different properties (or the degree to which the properties are
different). May also be called "inhomogeneous/inhomogeneity."
Homogeneous/homogeneity: having the same properties (or the degree to which the properties are similar)
HUC: hydrologic unit code, a unique numeric identifier of watersheds in the United States145
Hydraulic Radius: The flow area divided by the wetted perimeter. The wetted perimeter does not include the
free surface.
Hydrograph: A graph showing stage, flow, velocity, or other property of water with respect to time.
Hydrologic Budget: An accounting of the inflow to, outflow from, and storage in, a hydrologic unit, such as a
drainage basin, aquifer, soil zone, lake, reservoir, or irrigation project.
Hydrologic Cycle: The continuous process of water movement between the oceans, atmosphere, and land.
Hydrology: The study of water; generally focuses on the distribution of water and interaction with the land
surface and underlying soils and rocks.
Hyetograph: Rainfall intensity versus time; often represented by a bar graph.
I
IID: independent and identically distributed
Index Precipitation: An index that can be used to adjust for bias in regional precipitation, often quantified as
the expected annual precipitation.
Infiltration: The movement of water from the land surface into the soil.
Infiltration Capacity: The maximum rate at which the soil, when in a given condition, can absorb falling rain
or melting snow.
Infiltration Index: An average rate of infiltration, in inches per hour, equal to the average rate of rainfall such
that the volume of rainfall at greater rates equals the total direct runoff.
Inflection Point: Generally refers to the point on a hydrograph separating the falling limb from the recession
curve; any point on the hydrograph where the curve changes concavity.
Initial Conditions: The conditions prevailing prior to an event. See also antecedent conditions.
145 https://nas.er.usgs.gov/hucs.aspx
K
Kriging: an interpolation method that relies on Gaussian processes to describe the relationship between
variables across dimensions. Typical application is in 2-dimensional spatial statistics. Kriging is a
complicated topic and this definition does not do it justice.
L
Lag: Variously defined as time from beginning (or center of mass) of rainfall to peak (or center of mass) of
runoff.
Lag Time: The time from the center of mass of excess rainfall to the hydrograph peak. Also referred to as
basin lag.
Loss: The difference between the volume of rainfall and the volume of runoff. Losses include water absorbed
by infiltration, water stored in surface depressions, and water intercepted by vegetation.
L-moment: a descriptor of the shape of a sample or population of data using linear combinations of the
values in the dataset
LMRD: L-moment ratio diagram. A plot of L-skewness vs. L-kurtosis that can be used for characterizing
sample data and probability distributions.
M
Mass Curve: A graph of the cumulative values of a hydrologic quantity (such as precipitation or runoff),
generally as ordinate, plotted against time or date as abscissa. See also double-mass curve and residual-
mass curve.
Maximum Probable Flood: See probable maximum flood.
Meander: The winding of a stream channel.
Model: A physical or mathematical representation of a process that can be used to predict some aspect of
the process.
Moisture: Water diffused in the atmosphere or the ground.
N
NARR: North American Regional Reanalysis; an NCEP reanalysis product for North America
NCEP: National Centers for Environmental Prediction
Non-stationarity: a sample that has properties that are not constant across a dimension; e.g. a time series
with a trend
NWS: National Weather Service
P
Parameter: A variable, in a general model, whose value is adjusted to make the model specific to a given
situation. A numerical measure of the properties of the real-world system.
Parameter Estimation: The selection of a parameter value based on the results of analysis and/or
engineering judgement. Analysis techniques include calibration, regional analysis, estimating equations, and
physically based methods. See also calibration.
Peak: The highest elevation reached by a flood wave. Also referred to as the crest.
Peak Flow: The point of the hydrograph that has the highest flow.
Peakedness: Describes the rate of rise and fall of a hydrograph.
Percolation: The movement, under hydrostatic pressure, of water through the interstices of a rock or soil.
PDS: partial duration series; also called "peaks over threshold." A sample containing all independent
observations of some variable greater than a chosen value.
PF: precipitation-frequency
PFDS: Precipitation Frequency Data Server. NOAA/National Weather Service/Hydrometeorological Design
Studies Center source for information related to precipitation frequency analysis. https://
hdsc.nws.noaa.gov/hdsc/pfds/
Pluvial flooding: inundation caused directly by precipitation rather than by water flowing from a river (in contrast to fluvial flooding). Can be caused by overland flow or infiltration excess.
PMF: probable maximum flood
PMP: probable maximum precipitation
Point-to-area reduction: accounting for the difference between the maximum intensity of rainfall at a point,
and the average intensity over a larger area. Synonyms: depth-area-reduction, area reduction factor
POR: period of record
POT: peaks over threshold (see PDS)
Precipitation: As used in hydrology, precipitation is the discharge of water, in liquid or solid state, out of the
atmosphere, generally upon a land or water surface. It is the common process by which atmospheric water
becomes surface or subsurface water. The term precipitation is also commonly used to designate the
quantity of water that is precipitated. Precipitation includes rainfall, snow, hail, and sleet, and is therefore a
more general term than rainfall.
Q
Quasi-Continuous: a hydrologic modeling technique that mimics continuous modeling in an "event mode" by
randomly selecting the event date and dependent initial conditions, to capture the full range of variability in
hydrologic conditions for design storm modeling. The initial conditions are drawn from a POR continuous
hydrologic simulation.
S
Saturation Zone: The portion of the soil profile where available water storage is completely filled. The
boundary between the vadose zone and the saturation zone is called the water table. Note that under certain
periods of infiltration, the uppermost layers of the soil profile can be saturated. See vadose zone.
SCS Curve Number: An empirically derived relationship between location, soil-type, land use, antecedent
moisture conditions and runoff. A SCS curve number is used in many event-based models to establish the
initial soil moisture condition, and the infiltration characteristics.
Site: a location where observations of the hydrometeorological variable of interest are taken
Space-for-time substitution: using collections of similar observations of extremes across a geographic
extent to increase the effective number of observations of those extremes
Spatial regression: a regression analysis where the predictor(s) and predictand are linked by being co-
located in space
Snow: A form of precipitation composed of ice crystals.
T
Tension Zone: In the context of the program, the portion of the soil profile that will lose water only to
evapotranspiration. This designation allows modeling water held in the interstices of the soil. See also soil
profile.
Time of Concentration: The travel time from the hydraulically furthermost point in a watershed to the outlet.
Also defined as the time from the end of rainfall excess to the inflection point on the recession curve.
Time of Rise: The time from the start of rainfall excess to the peak of the hydrograph.
Time to Peak: The time from the center of mass of the rainfall excess to the peak of the hydrograph. See
also lag time.
Tobler's First Law of Geography: “Everything is related to everything else, but near things are more related
than distant things.”
Total Evaporation: The sum of water lost from a given land area during any specific time by transpiration
from vegetation and building of plant tissue; by evaporation from water surfaces, moist soil, and snow; and
by interception. It has been variously termed evaporation, evaporation from land areas, evapotranspiration,
total loss, water losses, and fly off.
Transpiration: The quantity of water absorbed and transpired and used directly in the building of plant tissue,
in a specified time. It does not include soil evaporation. The process by which water vapor escapes from the
living plant, principally the leaves, and enters the atmosphere.
U
Underflow: The downstream flow of water through the permeable deposits that underlie a stream and that
are more or less limited by rocks of low permeability.
Unit Hydrograph: A direct runoff hydrograph produced by one unit of excess precipitation over a specified
duration. For example, a one-hour unit hydrograph is the direct runoff from one unit of excess precipitation
occurring uniformly over one hour.
V
Vadose Zone: The portion of the soil profile above the saturation zone.
Validation: The calibrated model, without any further parameter modifications, is used to compute outputs
which are compared against observed data for independent events that were not considered during model
calibration.
W
Water Year: A 12-month period during which hydrologic quantities are measured. In the United States, a
water year is defined as October 1 through September 30 and is designated by the calendar year in which it ends.
Uncertainty
Hydrologic models are developed to capture the dynamics of the hydrologic cycle and its various processes.
Hydrologic models aim to depict various hydrologic phenomena by solving mathematical equations, where
these equations are often simplified by making broad assumptions, particularly at larger watershed scales.
However, due to the inherent complexities of the hydrologic system, these equations cannot completely
replicate the hydrologic cycle; they are constrained by limited knowledge, imprecise measurements (of both
boundary conditions and observed data), and the challenges of heterogeneous environments. Consequently,
uncertainties abound in model predictions. Uncertainty analysis plays a crucial role in quantifying these
unknowns within the system, showing the possible range of model outcomes.
Basic Concepts
Uncertainty Analysis
Uncertainty Analysis is the process of determining the total error in the simulated watershed response, for
example, the flow at the outlet. The simulated flow at the outlet actually depends on many individual
components each with its own error. There is error in the meteorologic data and observed flow data because
it is generally impossible to perfectly measure precipitation and discharge. There is error in the models of the
hydrologic processes because it is generally impossible to include every possible process at the scale it
occurs, for example, animal burrows or plant transpiration. There is error in the model parameter values
because the equations are solved at a scale ranging from meters up to whole subbasins and area-average
values must be used. The error in the whole watershed response includes all of these individual errors and
the complex way in which they interact.
The watershed model is very complex and it may not be possible to attribute the total error to individual
components. An error in the precipitation data may be compensated with a corresponding error in the
infiltration parameter values. An error in the mathematical formulation of the infiltration or transpiration
process may be compensated with a corresponding error in the parameter values. An assumption about
process scale may lead to effective area-averaged parameter values that do not match values that would be
measured in the field. It is usually not possible to determine the exact error in each component or how the
errors are interacting and accumulating throughout the watershed model. The error in an individual model
parameter may be described with a probability distribution. In one case the selected process model is well-
suited to the watershed, the input to the model is accurate, and the parameters can be estimated from field
observations. In this first case there is very little uncertainty in the parameter value. In another case the
model is missing important subtleties of the physical process, the input is poor, and parameter estimation is
difficult. In this second case there is a high degree of uncertainty in the parameter value. In both cases, a
probability distribution can be parameterized to reflect the uncertainty in the parameter, either small or large.
Different model parameters require differently formulated probability distributions, and the distributions may change from one watershed to another.
Monte Carlo
The Monte Carlo Method is one approach to estimating the uncertainty in the simulated watershed response
given the uncertainty in each of the model parameters. Monte Carlo sampling is a statistical technique used
in hydrology (and many other fields) to model and analyze the uncertainty and variability of complex
systems. The Monte Carlo Method within HEC-HMS works using an automated sampling procedure. Each
sample is created by sampling the model parameters according to their individual probability distribution.
Each sample is simulated to obtain a watershed response corresponding to the sampled parameter values.
All of the responses from all of the samples can be analyzed statistically to evaluate the uncertainty in the
simulated watershed response.
In HEC-HMS, sample size is a required input. Sample size refers to the number of random samples generated from the probability distributions to perform the Monte Carlo simulations. The larger the sample size, the more accurately the aggregate of the samples will represent the parameterized distribution. As the sample size increases, the reliability of the Monte Carlo simulation improves, resulting in more stable and precise statistics, such as the mean; these sample statistics progressively converge to the true population parameters. However, this comes at the cost of increased computational resources. Small sample sizes, on the other hand, may not fully capture the characteristics of the distribution, which can lead to less accurate or misleading simulation results.
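To make the sampling procedure concrete, the following is a minimal illustrative sketch of Monte Carlo sampling, not HEC-HMS source code; the triangular loss-parameter distribution and the toy excess-precipitation "model" are hypothetical stand-ins for an actual simulation.

```python
import numpy as np

rng = np.random.default_rng(seed=1234)    # fixed seed so results are reproducible

n_samples = 2000                          # Monte Carlo sample size
precip = 4.0                              # hypothetical storm depth (inches)

# Sample an uncertain loss parameter (e.g., a constant loss) from its
# assumed probability distribution, once per Monte Carlo sample.
loss = rng.triangular(left=0.5, mode=1.0, right=2.0, size=n_samples)

# Toy watershed "model": excess precipitation is depth minus loss (never negative).
excess = np.clip(precip - loss, 0.0, None)

# Analyze all sampled responses to characterize uncertainty in the output.
print("mean excess:", excess.mean())
print("5th-95th percentile range:", np.percentile(excess, [5, 95]))
```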
Simple Distribution
The Simple Distribution selection provides 11 distributions, shown in the table below. A user-specified option is also available. The uncertainty within a parameter or rainfall depth can be defined using one of the 11 distributions provided. Once the distribution has been selected and its parameters defined, the selection of a value from the distribution begins with HEC-HMS generating a pseudo-random number from a uniform distribution between 0 and 1. This pseudo-random number represents a probability. The next step involves using this probability along with the inverse cumulative distribution function (inverse CDF) of the target distribution to compute the corresponding parameter value.
146 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Applying+the+Uncertainty+Analysis+Compute+Option+in+HEC-
HMS
147 https://www.hec.usace.army.mil/confluence/display/HMSGUIDES/Summarizing+Student+Results+from+the+Basic+HEC-
HMS+Final+Project
Function | Parameters
Beta |
Exponential |
Gamma | shape = α, scale = β
Gumbel | location = ξ, scale = α
Kappa (Simple Distribution only) |
Log-normal |
Normal |
Triangular |
Uniform | lower = a, upper = b
Weibull | shape = k, scale = λ
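As a concrete illustration of the inverse-CDF sampling step described above, the following sketch (not HEC-HMS code) draws a value from several of the listed distributions using SciPy; the distribution parameters shown are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Step 1: generate a pseudo-random probability from a uniform distribution on (0, 1).
u = rng.uniform(0.0, 1.0)

# Step 2: pass that probability through the inverse CDF (percent-point function)
# of the target distribution to obtain the sampled value.
gamma_sample   = stats.gamma.ppf(u, a=2.0, scale=1.5)        # shape α = 2.0, scale β = 1.5
gumbel_sample  = stats.gumbel_r.ppf(u, loc=10.0, scale=3.0)  # location ξ = 10, scale α = 3
weibull_sample = stats.weibull_min.ppf(u, c=1.8, scale=4.0)  # shape k = 1.8, scale λ = 4

print(u, gamma_sample, gumbel_sample, weibull_sample)
```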
Monthly Distribution
The Monthly Distribution selection provides 9 distributions, shown in the table below. A user-specified option is also available. This method currently only works in conjunction with outside software (i.e., HEC-WAT with the Hydrologic Sampler plugin).
Once the distribution has been selected, parameters must be specified for each month (January to December). When used within HEC-WAT with the Hydrologic Sampler plugin, the date selected by the Hydrologic Sampler is passed to HEC-HMS, which then uses the distribution parameters for that month to create the distribution. The selection of a model parameter value follows the sampling process described for the Simple Distribution. HEC-HMS generates the seed value based on the time the Uncertainty Compute Type was created. However, this initial seed value can be modified if required.
Function | Parameters
Beta |
Exponential |
Gamma | shape = α, scale = β
Gumbel | location = ξ, scale = α
Log-normal |
Normal |
Triangular |
Uniform | lower = a, upper = b
Weibull | shape = k, scale = λ
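The month-dependent sampling can be illustrated with a minimal sketch (not HEC-HMS or HEC-WAT code); the monthly parameter values, the event date, and the choice of a normal distribution are all hypothetical.

```python
import datetime
import numpy as np
from scipy import stats

# Hypothetical (mean, standard deviation) pairs, one per month (January = 1).
monthly_params = {month: (50.0 + 2.0 * month, 5.0) for month in range(1, 13)}

rng = np.random.default_rng(seed=7)

# Event date supplied by an outside sampler (e.g., HEC-WAT's Hydrologic Sampler).
event_date = datetime.date(1999, 4, 15)

# Look up the parameters for the event month, then sample by inverse CDF.
mean, std = monthly_params[event_date.month]
u = rng.uniform(0.0, 1.0)
value = stats.norm.ppf(u, loc=mean, scale=std)
print(value)
```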
Regression With Additive Error
An Epsilon Error Term is added to the preliminary parameter value calculated from the regression parameter and the linear or semi-logarithmic relationship. The epsilon term represents the error in the fitted relationship between the regression parameter and this parameter. You may choose from 10 analytical probability distributions to represent the error term (shown in the table below). Based on the selected distribution, parameter coefficients must be entered to define the distribution. You may also select a constant value (for example, 0). This means that the relationship between the two variables is deterministic, with the uncertainty in both parameters completely controlled by the sampling of the regression element.
Function | Parameters
Beta |
Exponential |
Gamma | shape = α, scale = β
Gumbel | location = ξ, scale = α
Log-normal |
Normal |
Triangular |
Uniform | lower = a, upper = b
Weibull | shape = k, scale = λ
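The following is a minimal sketch of the regression-with-additive-error idea described above, not HEC-HMS code; the linear form, the regression coefficients, and the normal error distribution are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=11)

# Hypothetical linear relationship between a sampled regression parameter
# (e.g., a sampled rainfall depth) and the dependent model parameter.
a, b = 0.10, 0.35                       # hypothetical regression coefficients

regression_value = 6.2                  # value sampled for the regression element
preliminary = a + b * regression_value  # preliminary parameter value from the regression

# Epsilon error term drawn from an assumed distribution (here, normal with mean 0).
epsilon = rng.normal(loc=0.0, scale=0.25)

sampled_parameter = preliminary + epsilon
print(preliminary, epsilon, sampled_parameter)
```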
Specified Values
The Specified Values method allows users to sample from provided values instead of using a parameter
distribution. While this method is not an uncertainty analysis in the traditional sense—where values are
sampled from a parameter distribution—it facilitates variations in parameter values based on user
specifications. This enables users to specify multiple parameter sets intended to be treated as a unit,
defining relationships between parameters based on specified values at a particular index. Sequential selection can only be used when every parameter has the same number of values (e.g., 10 values for parameter 1 and 10 values for parameter 2). Alternatively, users can opt for random sampling with the same index to maintain consistency across the parameters.
As outlined, for each iteration, the values can be selected sequentially, randomly with the same index, or
randomly with independent indices. If the sequential method is chosen, the model will start the first iteration
from Index 1, the second iteration will use parameters from Index 2, and so on. Once the iteration reaches
the final value, it will loop back to the first index and repeat the cycle.
When the random with the same index method is selected, the same randomly drawn index value is used for all parameters. When using this method, each parameter must have the same number of values.
The last selection method draws a random index independently for each parameter, so each parameter value is selected without regard to the indices used for the other parameters.
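The three index-selection modes can be illustrated with a minimal sketch (not HEC-HMS code); the parameter value lists are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Hypothetical specified values: two parameters, each with the same number of values,
# intended to be treated as paired sets at a common index.
curve_number   = [70, 75, 80, 85, 90]
lag_time_hours = [3.0, 2.5, 2.0, 1.5, 1.0]
n_values = len(curve_number)

n_iterations = 8
for iteration in range(n_iterations):
    # Sequential: walk through the indices in order, looping back to the start.
    seq_idx = iteration % n_values

    # Random with the same index: one random index shared by all parameters,
    # which preserves the relationship between paired values.
    shared_idx = rng.integers(n_values)

    # Random with independent indices: a separate random index per parameter.
    cn_idx, lag_idx = rng.integers(n_values), rng.integers(n_values)

    print(iteration,
          (curve_number[seq_idx], lag_time_hours[seq_idx]),
          (curve_number[shared_idx], lag_time_hours[shared_idx]),
          (curve_number[cn_idx], lag_time_hours[lag_idx]))
```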