
DEVELOPMENTAL TESTBED CENTER

Hurricane Weather Research and Forecasting (HWRF) Model:
2013 Scientific Documentation

August 2013 – HWRF v3.5a

Authors (in alphabetical order by last name):

Vijay Tallapragada
NOAA/NWS/NCEP Environmental Modeling Center, College Park, MD
Ligia Bernardet
NOAA Earth System Research Laboratory, CIRES / University of Colorado, and Developmental Testbed
Center, Boulder, CO
Sundararaman Gopalakrishnan
NOAA/AOML, Hurricane Research Division, Miami, FL
Young Kwon
NOAA/NWS/NCEP Environmental Modeling Center and IMSG Inc., College Park, MD

Qingfu Liu
NOAA/NWS/NCEP Environmental Modeling Center, College Park, MD
Timothy Marchok
NOAA/OAR/Geophysical Fluid Dynamics Laboratory, Princeton, NJ
Dmitry Sheinin
NOAA/NWS/NCEP Environmental Modeling Center and IMSG Inc., College Park, MD
Mingjing Tong
NOAA/NWS/NCEP Environmental Modeling Center and UCAR, College Park, MD
Samuel Trahan
NOAA/NWS/NCEP Environmental Modeling Center and IMSG Inc., College Park, MD
Robert Tuleya
Center for Coastal Physical Oceanography, Old Dominion University, Norfolk, VA
Richard Yablonsky
Graduate School of Oceanography, University of Rhode Island, Narragansett, RI
Xuejin Zhang
NOAA/AOML Hurricane Research Division and RSMAS/CIMAS, University of Miami, Miami, FL
Acknowledgments
The authors wish to acknowledge the Developmental Testbed Center (DTC) for facilitating
the coordination of writing this document among the following institutions:
NOAA/NWS/NCEP Environmental Modeling Center; NOAA/ESRL Global Systems
Division; NOAA/AOML Hurricane Research Division; NOAA/OAR Geophysical Fluid
Dynamics Laboratory; IM Systems Group Inc.; Graduate School of Oceanography,
University of Rhode Island; RSMAS/CIMAS, University of Miami; and CIRES
University of Colorado, Boulder, CO. The authors also wish to thank John Osborn of
NOAA/ESRL/GSD for providing editorial support for this document and addressing a
number of formatting issues. Thanks to Karen Griggs of NCAR for offering her desktop
publishing expertise in the preparation of this document.

Table of Contents
Acknowledgments .................................................................................................................... 2

An Introduction to the Hurricane Weather Research and Forecast (HWRF)


System .................................................................................................................................... 5

1.0 HWRF Initialization ........................................................................................................ 15


1.1 Introduction ................................................................................................................................ 15
1.2 HWRF cycling system.............................................................................................................. 15
1.3 Bogus vortex used to correct weak storms.................................................................... 19
1.4 Correction of vortex in previous 6-h HWRF or HDAS forecast .............................. 20
1.5 Data assimilation through GSI in HWRF ......................................................................... 32

2.0 Princeton Ocean Model for Tropical Cyclones (POM-TC) ................................. 40


2.1 Introduction ................................................................................................................................ 40
2.2 Purpose ......................................................................................................................................... 41
2.3 Grid size, spacing, configuration, arrangement, coordinate system, and
numerical scheme .................................................................................................................... 41
2.4 Initialization................................................................................................................................ 42
2.5 Physics and dynamics ............................................................................................................. 44
2.6 Coupling ........................................................................................................................................ 45
2.7 Output fields for diagnostics ................................................................................................ 46

3.0 Physics Packages in HWRF .......................................................................................... 47


3.1 HWRF physics ............................................................................................................................ 47
3.2 Microphysics parameterization .......................................................................................... 48
3.3 Cumulus parameterization ................................................................................................... 50
3.4 Surface layer parameterization .......................................................................................... 52
3.5 Land-surface model ................................................................................................................. 57
3.6 Planetary boundary layer parameterization................................................................. 59
3.7 Atmospheric radiation parameterization ....................................................................... 62
3.8 Physics interactions ................................................................................................................. 64

4.0 Design of Moving Nest ................................................................................................... 65


4.1 Grid Structure............................................................................................................................. 65
4.2 Terrain Treatment.................................................................................................................... 66
4.3 Moving Nest Algorithm .......................................................................................................... 68
4.4 Fine Grid Initialization............................................................................................................ 68
4.5 Lateral Boundary Conditions ............................................................................................... 70
4.6 Feedback ...................................................................................................................................... 71
5.0 Use of the GFDL Vortex Tracker ................................................................................. 72
5.1 Introduction ................................................................................................................................ 72
5.2 Design of the Tracking System ............................................................................................ 74
5.3 Parameters Used for Tracking............................................................................................. 79
5.4 Intensity and Wind Radii Parameters .............................................................................. 81
5.5 Thermodynamic Phase Parameters .................................................................................. 81
5.6 Detecting Genesis and Tracking New Storms ............................................................... 82
5.7 Tracker Output .......................................................................................................................... 84

6.0 The idealized HWRF framework............................................................................... 90

7.0 References ......................................................................................................................... 93

An Introduction to the Hurricane Weather Research and
Forecast (HWRF) System
The Environmental Modeling Center (EMC) at the National Centers for Environmental
Prediction (NCEP) provides real-time tropical cyclone forecasts to the National
Hurricane Center (NHC) for the Atlantic and Eastern North Pacific basins. The
Hurricane Weather Research and Forecast (WRF) system, or HWRF system, became
operational at NCEP starting with the 2007 hurricane season. It provides high-resolution
model forecasts for Atlantic and Eastern Pacific tropical systems. Development of the
HWRF began in 2002 at EMC in collaboration with the Geophysical Fluid Dynamics
Laboratory (GFDL) of the National Oceanic and Atmospheric Administration (NOAA)
and the University of Rhode Island (URI). To meet operational implementation
requirements, it was necessary that the skill of the track forecasts from the HWRF and
GFDL hurricane models be comparable. Since the GFDL model evolved as primary
guidance for track prediction used by NHC, the Central Pacific Hurricane Center
(CPHC), and the Joint Typhoon Warning Center (JTWC) after becoming operational in
1994, the strategy for HWRF development was to take advantage of the advancements
made to improve track prediction through a focused collaboration between EMC, GFDL,
and URI and transition those modeling advancements to the HWRF. This strategy
ensured comparable track skill to the GFDL forecasts for both the Eastern North Pacific
and Atlantic (including Caribbean and Gulf of Mexico) basins. Additionally, features of
the GFDL hurricane model that led to demonstrated skill for intensity forecasts, such as
ocean coupling, upgraded air-sea physics, and improvements to microphysics, were also
captured in the newly developed HWRF system.
Upgrades to the HWRF system are performed on an annual cycle that is dependent on the
hurricane season and on upgrades to the Global Data Assimilation System (GDAS) and
the Global Forecast System (GFS) that both provide initial and boundary conditions for
HWRF. Every year, prior to the start of the Eastern North Pacific and Atlantic hurricane
seasons (15 May and 1 June, respectively), HWRF upgrades are provided to NHC by
EMC so that NHC forecasters have improved hurricane guidance at the start of each new
hurricane season. These upgrades are chosen based on extensive testing and evaluation
(T&E) of model forecasts for at least two recent past hurricane seasons. There are
basically two phases of development. The first is developmental testing that occurs prior
to and during the hurricane season (roughly 1 April to 30 October) where potential
upgrades to the system are tested individually in a systematic and coordinated manner.
The pre-implementation testing starts in November and is designed to test the most
promising developments assessed in the development phase to define the HWRF
configuration for the upcoming hurricane season. The pre-implementation testing must be
completed and the final HWRF configuration locked down by 15 March for each annual
upgrade. Once frozen, the system is handed off to NCEP Central
Operations (NCO) for implementation by about 1 June. The cycle is then repeated for the
next set of proposed upgrades to the HWRF system. During the hurricane season (1 June
to 30 November) changes are not made to the operational HWRF in order to provide

forecasters with consistent and documented numerical guidance performance
characteristics.
Since its initial implementation in 2007, HWRF has been upgraded every year to meet
specific scientific goals addressed through the aforementioned pre-implementation T&E.
Changes to the vortex initialization and convective parameterization were the focal areas
for the 2008 HWRF implementation. Infrastructure upgrades and transitioning to the
new IBM machine were dominant for the 2009 HWRF implementation. For 2010
upgrades, the HWRF team at EMC worked on further improving the vortex initialization,
including a gravity wave drag parameterization, and modified the surface physics based on
observations. Limiting the rapid growth of initial intensity errors was one of the focal areas
for the 2011 HWRF implementation, along with a major upgrade of the model dynamical core
from WRF v2.0 to the community-based WRF v3.2, bridging the gap between the
operational and community versions of the WRF model.
in year 2011 were to make the operational HWRF model available to the research
community through the Developmental Testbed Center (DTC), and to draw the codes
from the community repository maintained and supported by DTC, ensuring that the
operational and research HWRF codes remain synchronized.
To significantly improve hurricane forecast skill, the hurricane modeling team at
NCEP/EMC, with support from NOAA’s Hurricane Forecast Improvement Project
(HFIP) and in collaboration with the Hurricane Research Division (HRD) of the NOAA
Atlantic Oceanographic and Meteorological Laboratory (AOML) and several partners
within NOAA as well as academia, implemented major changes to the 2012 version of
operational HWRF. The biggest improvement was the triple-nest capability that included
a cloud-resolving innermost grid operating at 3 km horizontal resolution. Other major
HWRF upgrades implemented for the 2012 hurricane season included:
• a centroid-based nest movement algorithm;
• explicit representation of moist processes in the innermost grid;
• inclusion of shallow convection in the Simplified Arakawa Schubert (SAS)
convective parameterization;
• observation-based modifications to the convective, microphysics, Planetary
Boundary Layer (PBL) and surface parameterizations, making them suitable
for higher resolution;
• re-design of the vortex initialization for 3-km resolution with improved
interpolation algorithms and better representation of the composite storm;
• improved Princeton Ocean Model for Tropical Cyclones (POM-TC)
initialization in the Atlantic domain and new 1-D ocean coupling for the
Eastern North Pacific basin;
• upgrade of the Gridpoint Statistical Interpolator (GSI) data assimilation
system to v3.4a;
• use of upgraded GFS, initialized using a hybrid ensemble-variational
procedure, for initial and boundary conditions;
• improved HWRF Unified Post Processor (UPP) to generate simulated
microwave satellite imagery products; and
• very high-resolution (every 5 s) storm tracker output to support NHC
operations.
The HWRF code was optimized to ensure that the 2012 model ran in the time slot allotted
for operational products at NCEP and with few additional computer resources.
Retrospective test results from the combination of these upgrades for two consecutive
hurricane seasons (2010-2011) showed significantly improved track, intensity, and
structure forecasts in both the Atlantic and Eastern North Pacific basins.
The 2013 version of the operational HWRF further improved the track, intensity, and
structure prediction of tropical cyclones by taking additional advantage of the
high-resolution capability built into the 2012 HWRF. Major upgrades for the 2013 HWRF
include:
• changes in model configuration, such as use of a larger innermost 3-km domain
and smaller physics-calling time step;
• upgrades of model infrastructure bringing all components to their latest
community versions, and including an advanced parent-nest interpolation method;
• adoption of a more sophisticated vortex following algorithm in order to fully
utilize the advantage of high resolution;
• upgrades to the PBL parameterization to fit observed structures of both the
hurricane area and the outer environmental region;
• modifications of the vortex initialization method with adjusted filter size, storm
size correction, and weak storm treatment;
• adoption of the Hurricane Data Assimilation System (HDAS), a GSI-based one-way
hybrid ensemble-variational data assimilation scheme that uses the GDAS 6-h
forecast, rather than the GFS analysis, as the first guess;
• ability to assimilate inner core observations from the NOAA P3 aircraft Tail
Doppler Radar (TDR) data, when available;
• removal of atmospheric-ocean flux truncation in POM-TC (previously set to 75%
of calculated values);
• upgrades of the post processing and products, such as refinement of simulated
satellite images;
• HWRF scripts and procedural upgrades in order to accommodate the hybrid data
assimilation, optimal use of computational resources and transition to new
operational computational platform; and
• ability to run uncoupled (atmosphere only) in the Central Pacific, West Pacific,
and Indian Ocean basins.

One of the highlights of the 2013 HWRF configuration retrospective T&E, performed on
a vast sample of three hurricane seasons (2010-2012), was the remarkable intensity
forecast skill. Results indicated that HWRF outperformed the statistical models in the 2 to
3-day forecast period. Historically, statistical models have been more skillful than
dynamical models for hurricane intensity prediction. These HWRF results demonstrate,
for the first time, the potential of an operational dynamical model as a viable hurricane
intensity prediction tool. Track forecast skills from the 2013 HWRF have also been
significantly improved compared to the 2012 HWRF, and are now comparable to the
best-performing GFS model.
This documentation provides a description of HWRF v3.5a, which is functionally
equivalent to the model implemented for the 2013 hurricane season. The list of upgrades
to the HWRF for the hurricane seasons from 2008 through 2013 is available on EMC’s
HWRF website (http://www.emc.ncep.noaa.gov/index.php?branch=HWRF). These
details will also be made available on the WRF for Hurricanes website hosted by DTC
(http://www.dtcenter.org/HurrWRF/users).
The HWRF system is composed of the WRF model software infrastructure, the Non-
Hydrostatic Mesoscale Model (NMM) dynamic core, the POM-TC, and the NCEP
coupler. HWRF employs a suite of advanced physics developed for tropical cyclone
applications. These include GFDL surface physics to account for air-sea interaction over
warm water and under high wind conditions, GFDL land surface model and radiation,
Ferrier Microphysics, NCEP GFS boundary layer, and GFS SAS deep and shallow
convection. Figure I.1 illustrates all components of HWRF supported by the DTC, which
also include the WRF Pre-Processor System (WPS), prep_hybrid (used to process
spectral coefficients of GDAS and GFS in their native vertical coordinates), a
sophisticated vortex initialization package designed for HWRF, the regional hybrid
Ensemble Kalman Filter (EnKF) - three-dimensional variational data assimilation system
(3D-VAR) GSI, UPP, and the GFDL vortex tracker.
It should be noted that, although the HWRF uses the same dynamic core as the NMM in
the Arakawa E-staggered grid (NMM-E) developed at NCEP, HWRF was customized for
hurricane/tropical forecast applications, and is very different from other operational
models that employ NMM-E, such as the High-Resolution Windows (HRW) and the
Short-Range Ensemble Forecast (SREF) System. HWRF also differs substantially from
the North American Mesoscale (NAM) model, which now employs the NMM dynamic
core in the Arakawa-B grid (NMM-B). The HWRF is an atmosphere-ocean model
configured with a parent grid and two telescopic, high-resolution, movable 2-way nested
grids that follow the storm, using a unique physics suite and diffusion treatment. The
HWRF also contains a sophisticated initialization of both the ocean- and the storm-scale
circulation. Additionally, unlike other NCEP forecast systems which run continuously
throughout the year, the HWRF and GFDL hurricane models are launched for operational
use only when NHC determines that a disturbed area of weather has the potential to
evolve into a depression anywhere over NHC’s area of responsibility. After an initial
HWRF run is triggered, new runs are launched in cycled mode at 6-h intervals, until
either the storm dissipates after making landfall, becomes extra-tropical, or degenerates
into a remnant low, typically identified when convection becomes disorganized around
the center of circulation. Currently, the HWRF runs in NCEP operations four times daily

producing 126-h forecasts of track, intensity, structure, and rainfall to meet NHC
operational forecast and warning process objectives.

Figure I.1. A simplified overview of the HWRF system for the case in which TDR data is
not available. Components include the atmospheric initialization (WPS and
prep_hybrid), the vortex improvement, the GSI data assimilation, the HWRF atmospheric
model, the atmosphere-ocean coupler, the ocean initialization, the POM-TC, the post
processor, and the vortex tracker. When TDR data is available, the data assimilation in
the parent domain is not used as an input to the vortex improvement. Additionally, a
second run of GSI is performed on the high-resolution grid after the vortex improvement
(more details in Section 1).
The following paragraphs present an overview of the sections contained in this
documentation. A concluding paragraph provides proposed future enhancements to the
HWRF system for advancing track, intensity, and structure prediction, along with
modeling advancements to address issues of storm surge, inland flooding, and coastal
inundation for landfalling storms.

HWRF Atmospheric Initialization

The HWRF vortex initialization consists of several major steps: definition of the HWRF
domain based on the observed storm center position; interpolation of the HDAS fields
onto the HWRF parent domain; removal of the global model vortex and insertion of a
modified mesoscale vortex obtained from the previous cycle’s HWRF 6-hr forecast (if
available), from HDAS, or from a synthetic vortex (cold start). The modification of the
mesoscale hurricane vortex in the first guess field is a critical aspect of the initialization
problem. Modifications include corrections to the storm size and to the three-dimensional
structure based on observed parameters, including Radius of Maximum Wind (RMW),
radius of 34-kt winds (R34) and/or Radius of Outermost Closed Isobar (ROCI),
maximum sustained 10-m winds (intensity), and minimum mean sea level pressure
(MSLP). Each of these corrections requires careful rebalancing between the model
winds, temperature, pressure, and moisture fields. This procedure is described in Section
1.
An advancement of the HWRF system over the GFDL model bogus vortex initialization
is the capability of the HWRF to run in cycle to improve the three-dimensional structure
of the hurricane vortex. This capability provides a significant opportunity to add more
realistic structure to the forecast storm and is a critical step towards advancing hurricane
intensity/structure prediction.
The operational HWRF initialization procedure mentioned above, and described in
Section 1, utilizes the community GSI with a regional hybrid EnKF-3DVAR data
assimilation procedure that includes, for the first time, assimilation of TDR data from the
NOAA P3 aircraft, when available. The NCEP operational GFS 80-member ensemble
forecast provides the ensemble background error covariances for HDAS. Apart from the
NOAA P3 TDR and conventional observations, clear-sky radiance datasets from several
geostationary and polar orbiting satellites can also be assimilated in the hurricane
environment using GSI. At present, only conventional observations located at least 1500
km away from the storm center are included in the environment data assimilation
procedure. Section 1 provides more details on the application of GSI in the HWRF
modeling system.
Ocean Coupling

In 2001, the GFDL was coupled to POM-TC, a three-dimensional version of the POM
modified for hurricane applications over the Atlantic basin. GFDL was the first coupled
air-sea hurricane model to be implemented for hurricane prediction into NCEP’s
operational modeling suite. Prior to implementation, many experiments were conducted
over multiple hurricane seasons that clearly demonstrated the positive impact of the
ocean coupling on both the GFDL track and intensity forecasts. Given the demonstrated
improvements in the Sea Surface Temperature (SST) analyses and forecasts, this
capability was also developed for the HWRF 2007 implementation.
Since early experiments had shown the impact on intensity of storms traversing over a
cold-water wake, particular attention was given to the generation of the hurricane-
induced cold wake in the initialization of the POM-TC.
Some of the most recent improvements to the ocean initialization include feature-based
modifications of the temperature and salinity to produce more realistic ocean structures
than climatology can provide. These feature-based modifications include better
initialization of the Gulf Stream, the Loop Current, and both warm and cold core eddies
in the Gulf of Mexico (GOM). The GOM features have shown importance for more
accurate predictions in the GFDL model of intensification and weakening in Hurricanes
Katrina, Rita, Gustav, and Ike. Much research is currently underway in the
atmospheric/oceanic hurricane community to prioritize and determine the model
complexity needed to simulate realistic air-sea interactions. This complexity may
include: 1) coupling to and/or initializing with a more comprehensive three-dimensional
ocean model with data assimilation capabilities, such as the Message Passing Interface
POM for Tropical Cyclones (MPIPOM-TC) or the Hybrid Coordinate Community Ocean
Model (HYCOM), based on NCEP’s Real-Time Ocean Forecast System (RTOFS); 2)
coupling to an adaptable multi-grid wave model (WAVEWATCH III – WW3); and 3)
simulating wave-current interactions that may prove important to address coastal
inundation problems for landfalling hurricanes. Section 2 describes the configuration of
POM-TC used in HWRF and its initialization.
Earlier versions of the operational HWRF were coupled only in the North Atlantic basin.
Starting with the 2012 hurricane season, the operational HWRF is also coupled to the
one-dimensional POM-TC in the Eastern North Pacific basin. In the future, when the
Global HYCOM-based RTOFS, implemented at NCEP in 2011
(http://polar.ncep.noaa.gov/global/), is configured for HWRF, this capability will be
expanded to include other tropical cyclone basins.
HWRF Physics

Some of the physics in the HWRF evolved from a significant amount of development
work carried out over the past 15 years in advancing model prediction of hurricane track
with global models, such as the NCEP GFS, the Navy Operational Global Atmospheric
Prediction System (NOGAPS), the United Kingdom Met Office (UKMO) model, and
subsequently with the higher resolution GFDL hurricane model. These physics include
representations of the surface layer, planetary boundary layer, microphysics, deep
convection, radiative processes, and land surface. Commensurate with increasing
interest in the ocean impact on hurricanes in the late 1990’s and the operational
implementation of the coupled GFDL model in 2001, collaboration increased between
the atmospheric/oceanic research and operational communities that culminated in the
Navy’s field experiment, the Coupled Boundary Layer Air-Sea Transfer (CBLAST),
carried out in the Eastern Atlantic in 2004. During CBLAST, important observations
were taken that helped confirm that drag coefficients used in hurricane models were
incorrect under high wind regimes. Since then, surface fluxes of both momentum and
enthalpy under hurricanes remain an active area of hurricane scientific/modeling interest
and are being examined in simple air-sea coupled systems and three-dimensional air-sea
coupled systems with increasing complexity, including coupling of air-sea to wave
models.

A detailed treatment of the HWRF physics is presented in Section 3. However, it must
be re-emphasized that these physics, along with other HWRF upgrades, are subject to
modification or change on an annual basis to coincide with continuous advancement to
the components of this system.
Grid Configuration, Moving Nest and Vortex Tracker

The current HWRF configuration used in operations (starting with the 2012 hurricane
season) contains three domains: a parent domain with 27-km horizontal grid spacing and
two two-way interactive telescopic moving nests with 9- and 3-km spacing respectively,
to capture multi-scale interactions. The parent domain covers roughly 80° x 80° on a
rotated latitude/longitude E-staggered grid. The large parent domain accommodates rapidly
accelerating storms moving to the north, typically seen over the mid-Atlantic, within a
given 5-day forecast. The intermediate nest domain at 9-km resolution spans
approximately 11° x 10°, and the innermost nest domain at 3-km resolution covers an area
of about 7.2° x 6.5°. Both the intermediate and innermost grids are centered over the
initial storm location, and are configured to follow the projected path of the storm.
The HWRF movable nested grids and the internal mechanism that assures the nested
grids follow the storm are described in Section 4. A major upgrade for 2013 HWRF is
the redesign of the nest movement technique, now based on nine different atmospheric
parameters that are used to accurately determine the storm center at every nest movement
time step. The overall development of the movable nested grids required substantial
testing to determine optimal grid configurations, lateral boundary conditions, and domain
sizes to accommodate the required 5-day operational hurricane forecasts with
consideration for multiple storm scenarios occurring in either the Atlantic or Eastern
North Pacific basins. When more than one storm becomes active, a separate HWRF run
is launched with its unique storm-following nested grids.
After the forecast is run, a post-processing step includes running the GFDL vortex tracker
on the model output to extract attributes of the forecast storm. The GFDL vortex tracker
is described in Section 5.
Future HWRF Direction

Starting with the 2011 hurricane season, all components of HWRF have been
synchronized with their community code repositories to facilitate transition of
developments from Research to Operations (R2O). This effort, led by the DTC, has
enabled closer collaboration among HWRF developers and allowed accelerated R2O
transfer from government laboratories and academic institutions to the operational
HWRF.
The major HWRF upgrades for the 2013 season, previously listed, provide a solid
foundation for improved tropical cyclone intensity prediction. Future advancements to
the HWRF system include implementing advanced physics packages, such as land-
surface model (LSM), radiation, PBL, and microphysics, increasing the number of
HWRF model vertical levels, and raising the model top.

Future advancements to atmospheric initialization include assimilation of cloudy and all-
sky radiances from various satellites, and additional observations from aircraft and/or
Unmanned Aerial Vehicles (UAVs). Those include flight level data, dropsondes, and
surface winds obtained with the Stepped-Frequency Microwave Radiometer (SFMR). It
should be noted that, to support future data assimilation efforts for the hurricane core,
NOAA acquired the G-IV aircraft in the mid 1990’s to supplement the data obtained by
NOAA’s P-3s. The high altitude of the G-IV allows for observations that help define the
three-dimensional core structure from the outflow layer to the near surface. For storms
approaching landfall, the coastal 88-D high-resolution radar data is also available.
In order to make use of these newly expanded observations, several advanced data
assimilation techniques are being explored within the operational and research hurricane
modeling communities, including 4D-VAR and hybrid EnKF-4D-VAR approaches. The
improvement of hurricane initialization has become a top priority in both the research and
operational communities.
Enhancements to the HWRF modeling infrastructure include a much larger outer domain
with multiple movable grids, and an eventual transition to NOAA’s Environmental
Modeling System (NEMS), which can provide a global-to-local scale modeling
framework.
The ocean component (POM-TC) will be replaced by MPIPOM-TC or HYCOM in the
near future to be consistent with EMC’s general ocean model development plan for all
EMC coupled applications. The HYCOM has its own data assimilation system that
includes assimilation of altimetry data and data from other remote-based and
conventional in situ ocean data platforms. This system will also assimilate Airborne
eXpendable BathyThermograph (AXBT) data obtained by NOAA’s P-3s for selected
storm scenarios over the GOM. Also, to include the dynamic feedback of surface waves
on air-sea processes and the ocean, HWRF will be coupled to an advanced version of the
NCEP wave model, the Wave Watch III (WW3). Further advancement of the WW3 to a
multi-grid wave model (MWW3) will incorporate 2-way interactive grids at different
resolutions. Eventually this system will be fully coupled to a dynamic storm surge
model for more accurate prediction of storm surge and forecasts of waves on top of storm
surge for advanced prediction of landfalling storms’ impacts on coasts. Moreover, to
address inland flooding and inundation associated with landfalling storms, HWRF will
also be coupled to a comprehensive Land Surface Model (Noah LSM) to provide better
precipitation forecasts for landfalling storms and to provide improved input for hydrology
and inland inundation models.
Other advancements to the HWRF modeling system include advanced products tailored
to serve Weather Forecast Offices (WFOs) along the coastal regions, enhanced model
diagnostics capabilities, and high-resolution ensembles. Figure I.2 shows the fully
coupled proposed operational hurricane system, with 2-way interaction between the
atmosphere-land-ocean-wave models, providing feedback to high-resolution bay and
estuary hydrodynamic models that predict storm surge inundation.

(Figure I.2 schematic, "Hurricane-Wave-Ocean-Surge-Inundation Coupled Models": the NCEP/EMC
HWRF system, comprising the NMM hurricane atmosphere, Noah LSM, atmosphere/oceanic boundary
layer, HYCOM 3D ocean model, and spectral wave model, is linked to high-resolution coastal,
bay, and estuarine hydrodynamic models for surge and inundation at NOS, exchanging quantities
such as runoff, fluxes, winds, air temperature, SST, currents, wave spectra, elevations,
salinities, and temperatures.)

Figure I.2. Proposed future operational coupled hurricane forecast system. The left/right
parts of the diagram refer to the responsibilities of the National Weather Service and
National Ocean Service (NOS), respectively.

1.0 HWRF Initialization
1.1 Introduction
The 2013 operational initialization of hurricanes in the HWRF model involves several
steps to prepare the analysis at various scales. The environmental fields are derived from
the 6-h forecast from GDAS, enhanced through the HDAS. The vortex-scale fields are
generated by inserting a vortex, corrected using TCVitals data, onto the large scale fields.
The vortex may originate from HDAS, from the previous HWRF forecast, or from a
bogus calculation, depending on the storm intensity and on the availability of a previous
HWRF forecast. Additionally, if inner core observations are available and inner core data
assimilation is turned on, storm-scale data assimilation is performed. Finally, the analyses
are interpolated onto the HWRF outer domain and two inner domains to initialize the
forecast.
The data assimilation systems for the GFS and for HWRF (GDAS and HDAS,
respectively) follow similar procedures, but are run on different grids (global for GDAS
and regional for HWRF). Both systems employ the community GSI, which is supported
by the DTC.
The original design for the HWRF initialization (Liu et al. 2006a) was to continually
cycle the HWRF large scale fields, applying the vortex relocation technique (Liu et al.
2000, 2006b) at every model initialization time. However, the results were problematic.
Large scale flows can drift and the errors increased as cycles passed. To address this
issue, the environmental fields from the HDAS analysis are now used at every
initialization time.
This section discusses the details of the atmospheric initialization, while the ocean
initialization is described in Section 2.

1.2 HWRF cycling system


The location of the HWRF outer and inner domains is based on the observed hurricane’s
current and projected center position. Therefore, if a storm is moving, the outer domain
will not be in the same location for subsequent cycles.
Once the domains have been defined, the HDAS analysis and a vortex replacement
strategy are used to create the initial fields. If a previous 6-h HWRF forecast is available,
and the observed intensity of the storm is greater than or equal to 16 ms-1, the vortex is
extracted from that forecast and corrected to be included in the current initialization. If
the previous HWRF forecast is not available, or the observed storm has a maximum wind
speed of less than 16 ms-1, the HDAS vortex is corrected and added to the current
initialization.
The vortex correction process involves the following steps, partially represented in Figure
1.1.

1) Interpolate the 6-h GDAS forecast fields onto the HWRF model parent grids.

2) Perform a one-way hybrid ensemble-3DVAR GSI analysis, using large scale
observation data and the GFS 80-member ensemble background error correlation, to
create HDAS analysis fields for the HWRF parent domain (see Section 1.5).

3) Remove the vortex from the HDAS analysis. The remaining large scale flow
is termed the “environmental field”.

4) Determine which vortex will be added to the environmental fields. Check the
availability of the HWRF 6-h forecast from the previous run (initialized 6 h
before the current run) and the observed storm intensity.

a. If the forecast is not available:

i. if the observed storm maximum wind speed is greater than or equal to
30 ms-1, use a bogus vortex; or

ii. if the observed maximum wind speed is less than 30 ms-1, use a
corrected HDAS vortex.

b. If the forecast is available:

i. if the observed maximum wind speed is equal to or more than 16 ms-1,
extract the vortex from the forecast fields and correct it based on the
TCVitals; or

ii. if the observed maximum wind speed is less than 16 ms-1, use a
corrected HDAS vortex.

5) Add the vortex obtained in step 4) to the environmental fields obtained in step
3).

6) Interpolate the data obtained from step 5) onto the outer and ghost domains. If
performing inner core data assimilation (optional in HWRF v3.5a, as detailed in
Section 1.5), assimilate the inner core data on the ghost domain. This domain is
created for inner core data assimilation only, has the same resolution as the
innermost nest (0.02°), and is about three times larger than the inner nest.
Finally, merge the data from the ghost domain onto the outer and inner nest
domains.

7) Run the HWRF forecast model.
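
As an illustration of the decision in step 4, the following minimal Python sketch encodes the
thresholds described above; the function and variable names (choose_vortex_source,
prev_forecast_available, vmax_obs_ms) are hypothetical and not part of the operational HWRF
scripts.

    # Minimal sketch of the vortex-source decision in step 4 (not operational code).
    # vmax_obs_ms: observed maximum wind speed (m/s) from the TCVitals.
    # prev_forecast_available: True if the previous cycle's 6-h HWRF forecast exists.
    def choose_vortex_source(prev_forecast_available, vmax_obs_ms):
        """Return which vortex is inserted into the environmental fields."""
        if not prev_forecast_available:
            # Cold start: strong storms get a bogus (synthetic) vortex,
            # weaker storms reuse a corrected HDAS vortex.
            return "bogus vortex" if vmax_obs_ms >= 30.0 else "corrected HDAS vortex"
        # Warm start: the previous 6-h HWRF forecast vortex is reused only
        # when the observed storm is at least 16 m/s.
        if vmax_obs_ms >= 16.0:
            return "corrected previous 6-h HWRF forecast vortex"
        return "corrected HDAS vortex"

    # Example: a 25 m/s storm with no prior forecast uses the corrected HDAS vortex.
    print(choose_vortex_source(False, 25.0))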

Figure 1.1. Simplified flow diagram for HWRF vortex initialization describing
a) the split of the HWRF forecast between vortex and environment, b) the split
of the background fields between vortex and analysis, and c) the insertion of
the corrected vortex in the environmental field.
The vortex correction, as described in Section 1.4, adjusts the vortex location, size, and
structure based on the TCVitals:
• storm location (data used: storm center position);
• storm size (data used: radius of maximum surface wind speed, 34-kt wind radii,
and radius of the outermost closed isobar); and
• storm intensity (data used: maximum surface wind speed and, secondarily, the
minimum sea level pressure).

As noted above, a bogus vortex (described in Section 1.3) is only used in the initialization
of strong storms (intensity greater than 30 ms-1). This is in contrast with previous HWRF
implementations. For instance, in the 2012 operational HWRF initialization, a bogus
vortex was used for all cold start runs. Generally speaking, a bogus vortex does not
produce the best intensity forecast. Also, cycling very weak storms (less than 16 ms-1)
without inner-core data assimilation often leads to large errors in intensity forecasts. To
reduce the intensity forecast errors for cold starts and weak storms, the corrected HDAS
vortex is used in the 2013 operational HWRF. These changes improve the intensity
forecast for the first several cycles, as well as for weak storms (less than 16 ms-1).

1.3 Bogus vortex used to correct weak storms


The bogus vortex discussed here is primarily used to cold-start strong storms (observed
intensity greater than or equal to 30 ms-1) and to increase the storm intensity when the
storm in the HWRF 6-h forecast is weaker than observed (see Section 1.4.2). It is also
employed when the observed RMW is over 16 times greater than the one present in the HWRF
6-h forecast, or when the storm intensity is greater than 64 kt and the intensity
correction factor β (defined in Section 1.4) is greater than 0.7. This is in contrast
with previous HWRF implementations, in which a bogus vortex was used in all cold
starts. This change significantly improves the intensity forecasts in the first 1-3 cycles of
a storm.
The bogus vortex is created from a 2D axi-symmetric synthetic vortex generated from a
past model forecast. The 2D vortex only needs to be recreated when the model physics
has undergone changes that strongly affect the storm structure. We currently have two
composite storms, one created in 2007 for strong deep storms, another one created in
2012 for shallow and medium depth storms.
For the creation of the 2D vortex, a forecast storm (over the ocean) with small size and
near axi-symmetric structure is selected. The 3D storm is separated from its environment
fields, and the 2D axi-symmetric part of the storm is calculated. The 2D vortex includes
the hurricane perturbations of horizontal wind component, temperature, specific
humidity, and sea-level pressure. This 2D axi-symmetric storm is used to create the
bogus storm.
To create the bogus storm, the wind profile of the 2D vortex is smoothed until its RMW
or maximum wind speed matches the observed values. Next, the storm size and intensity
are corrected following a procedure similar to the cycled system.
The vortex in medium-depth and deep storms receives identical treatment, while the
vortex in shallow storms undergoes two final corrections: the vortex top is set to 400 hPa
and the warm core structures are removed.
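
As a rough sketch of the smoothing step described above, the code below repeatedly applies a
1-2-1 running mean to a one-dimensional composite wind profile until either its radius of
maximum wind reaches the observed RMW or its maximum wind drops to the observed intensity.
The kernel, stopping rule, and iteration cap are illustrative assumptions, not the
operational algorithm.

    import numpy as np

    def smooth_composite_profile(v, r_km, rmw_obs_km, vmax_obs, max_iter=500):
        """Smooth a composite tangential wind profile v(r) until its RMW
        reaches the observed RMW or its peak wind matches the observed
        intensity (illustrative sketch only)."""
        v = v.astype(float).copy()
        for _ in range(max_iter):
            rmw_model = r_km[np.argmax(v)]
            if rmw_model >= rmw_obs_km or v.max() <= vmax_obs:
                break
            # 1-2-1 running mean: broadens and weakens the wind maximum.
            v[1:-1] = 0.25 * v[:-2] + 0.5 * v[1:-1] + 0.25 * v[2:]
        return v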

1.4 Correction of vortex in previous 6-h HWRF or HDAS forecast
1.4.1 Storm size correction
Before starting to describe the storm size correction, some frequently used terms will be
defined. Composite vortex refers to the 2D axi-symmetric storm, which is created once
and used for all forecasts. The bogus vortex is created from the composite vortex by
smoothing and performing size (and/or intensity) corrections. The background field, or
guess field, is the output of the vortex initialization procedure, to which inner core
observations can be added through data assimilation. The environment field is defined as
the HDAS analysis field after removing the vortex component.
For hurricane data assimilation, we need a good background field. Storms in the
background field (this background field can be the GFS analysis or, as in the operational
HWRF, the previous 6-h forecast of GDAS) may be too large or too small, so the storm
size needs to be corrected based on observations. We use two parameters, namely the
radius of maximum winds and the radius of the outermost closed isobar to correct the
storm size.
The storm size correction can be achieved by stretching/compressing the model grid.
Let’s consider a storm of the wrong size in cylindrical coordinates. Assume the grid size
is linearly stretched along the radial direction

\[
\alpha_i = \frac{\Delta r_i^*}{\Delta r_i} = a + b\,r_i , \qquad (1.4.1.1)
\]

where a and b are constants. r and r * are the distances from the storm center before and
after the model grid is stretched. Index i represents the ith grid point.
Let rm and Rm denote the radius of the maximum wind and radius of the outermost
closed isobar (the minimum sea-level pressure is always scaled to the observed value
before calculating this radius) for the storm in the background field, respectively. Let rm*
and Rm* be the observed radius of maximum wind and radius of the outermost closed
isobar (which can be redefined if α in Equation (1.4.1.1) is set to be a constant). If the
high resolution model is able to resolve the hurricane eyewall structure, rm* / rm will be
close to 1, therefore, we can set b = 0 in Equation (1.4.1.1) and α = rm* / rm is a constant.
However, if the model doesn’t handle the eyewall structure well ( rm* / rm will be smaller
than Rm* / Rm ) within the background fields, we need to use Equation (1.4.1.1) to
stretch/compress the model grid.

(Schematic: the radii rm*, rm, Rm*, and Rm along the radial axis from the storm center at r = 0.)

Integrating Equation (1.4.1.1), we have


\[
r^* = f(r) = \int_0^r \alpha(r)\,dr = \int_0^r (a + b\,r)\,dr = a\,r + \tfrac{1}{2}\,b\,r^2 . \qquad (1.4.1.2)
\]

We compress/stretch the model grids such that

\[
\text{at } r = r_m : \quad r^* = f(r_m) = r_m^* , \qquad (1.4.1.3)
\]

\[
\text{at } r = R_m : \quad r^* = f(R_m) = R_m^* . \qquad (1.4.1.4)
\]

Substituting (1.4.1.3) and (1.4.1.4) into (1.4.1.2), we have

\[
a\,r_m + \tfrac{1}{2}\,b\,r_m^2 = r_m^* , \qquad (1.4.1.5)
\]

\[
a\,R_m + \tfrac{1}{2}\,b\,R_m^2 = R_m^* . \qquad (1.4.1.6)
\]
Solving for a and b, we have

\[
a = \frac{r_m^* R_m^2 - r_m^2 R_m^*}{R_m r_m (R_m - r_m)} , \qquad
b = 2\,\frac{R_m^* r_m - R_m r_m^*}{R_m r_m (R_m - r_m)} . \qquad (1.4.1.7)
\]

Therefore,

\[
r^* = f(r) = \frac{r_m^* R_m^2 - r_m^2 R_m^*}{R_m r_m (R_m - r_m)}\, r
+ \frac{R_m^* r_m - R_m r_m^*}{R_m r_m (R_m - r_m)}\, r^2 . \qquad (1.4.1.8)
\]

One special case is α being constant, so that

\[
\alpha = \alpha_m = \frac{r_m^*}{r_m} = \frac{R_m^*}{R_m} , \qquad (1.4.1.9)
\]

where b = 0 in equation (1.4.1.1), and the storm size correction is based on one
parameter only (this procedure was used in the initial implementation of the operational
HWRF model in 2007).
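
For illustration, the short sketch below evaluates the stretching coefficients a and b of
Equation (1.4.1.7) and the mapping r* = f(r) of Equation (1.4.1.2); the radii used in the
example call are arbitrary and only demonstrate that f(rm) = rm* and f(Rm) = Rm*.

    def stretch_coefficients(rm, Rm, rm_star, Rm_star):
        """Coefficients a, b of the radial stretching alpha(r) = a + b*r,
        Equation (1.4.1.7), from model radii (rm, Rm) and target radii
        (rm_star, Rm_star)."""
        denom = Rm * rm * (Rm - rm)
        a = (rm_star * Rm**2 - rm**2 * Rm_star) / denom
        b = 2.0 * (Rm_star * rm - Rm * rm_star) / denom
        return a, b

    def stretched_radius(r, a, b):
        """Mapping r* = f(r) = a*r + 0.5*b*r**2, Equation (1.4.1.2)."""
        return a * r + 0.5 * b * r * r

    # Example with arbitrary radii (km): shrink the RMW from 30 to 25 km and the
    # radius of the outermost closed isobar from 400 to 380 km.
    a, b = stretch_coefficients(rm=30.0, Rm=400.0, rm_star=25.0, Rm_star=380.0)
    print(stretched_radius(30.0, a, b), stretched_radius(400.0, a, b))  # ~25.0, ~380.0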
To calculate the radius of the outermost closed isobar, it is necessary to scale the minimum
surface pressure to the observed value, as discussed below. We define two functions, f1 and
f2, such that
for the 6-h HWRF or HDAS vortex (vortex #1),

\[
f_1 = \frac{\Delta p_1}{\Delta p_{1c}}\, \Delta p_{obs} , \qquad (1.4.1.10)
\]

and for the composite storm (vortex #2),

\[
f_2 = \frac{\Delta p_2}{\Delta p_{2c}}\, \Delta p_{obs} , \qquad (1.4.1.11)
\]

where ∆p1 and ∆p2 are the 2D surface perturbation pressures for vortices #1 and #2,
respectively, ∆p1c and ∆p2c are the minimum values of ∆p1 and ∆p2, and ∆pobs is the
observed minimum perturbation pressure.
The radius of the outermost closed isobar for vortices #1 and #2 can be defined as the radius
of the 1-hPa contour of f1 and f2, respectively.
We can show that after the storm size correction for vortices #1 and #2, the radius of the
outermost closed isobar is unchanged for any combination of vortices #1 and #2. For
example (c is a constant),

\[
\Delta p_1 + c\,\Delta p_2 = \frac{\Delta p_1}{\Delta p_{1c}}\,\Delta p_{1c} + c\,\frac{\Delta p_2}{\Delta p_{2c}}\,\Delta p_{2c} .
\]

At the radius of the 1-hPa contour, we have f1 = 1 and f2 = 1, or

\[
\frac{\Delta p_1}{\Delta p_{1c}} = \frac{\Delta p_2}{\Delta p_{2c}} = \frac{1}{\Delta p_{obs}} ,
\]

so

\[
\Delta p_1 + c\,\Delta p_2 = \frac{\Delta p_1}{\Delta p_{1c}}\,\Delta p_{1c} + c\,\frac{\Delta p_2}{\Delta p_{2c}}\,\Delta p_{2c}
= \frac{1}{\Delta p_{obs}}\,(\Delta p_{1c} + c\,\Delta p_{2c}) = 1 ,
\]

where we have used
\[
\Delta p_{1c} + c\,\Delta p_{2c} = \Delta p_{obs} . \qquad (1.4.1.12)
\]

Similarly, to calculate the radius of 34-knot winds, we need to scale the maximum wind
speed for vortices #1 and #2. We define two functions, g1 and g2, such that for the 6-h
HWRF or HDAS vortex (vortex #1),

\[
g_1 = \frac{v_1}{v_{1m}}\,(v_{obs} - v_m) ; \qquad (1.4.1.13)
\]

and for the composite storm (vortex #2),

\[
g_2 = \frac{v_2}{v_{2m}}\,(v_{obs} - v_m) ; \qquad (1.4.1.14)
\]
where v1m and v2m are the maximum wind speeds for vortices #1 and #2, respectively, and
(vobs − vm) is the observed maximum wind speed minus the environment wind. The
environment wind is defined as

\[
v_m = \max(0,\; U_{1m} - v_{1m}) , \qquad (1.4.1.15)
\]

where U1m is the maximum wind speed in the 6-h forecast.


The radii of 34-knot winds for vortices #1 and #2 are calculated by setting g1 and g2,
respectively, to 34 knots.

After the storm size correction, the combination of vortices #1 and #2 can be written as

\[
v_1 + \beta v_2 = \frac{v_1}{v_{1m}}\, v_{1m} + \beta\, \frac{v_2}{v_{2m}}\, v_{2m} .
\]

At the 34-knot radius, we have (g1 = 34, g2 = 34)

\[
v_1 + \beta v_2 = \frac{v_1}{v_{1m}}\, v_{1m} + \beta\, \frac{v_2}{v_{2m}}\, v_{2m}
= \frac{34}{v_{obs} - v_m}\,(v_{1m} + \beta v_{2m}) = 34 ,
\]

where we have used

\[
(v_{1m} + \beta v_{2m}) + v_m = v_{obs} . \qquad (1.4.1.16)
\]

In the 2010 operational HWRF initialization, only one parameter (radius of the maximum
wind) was used in the storm size correction. The radius of the outermost closed isobar
was calculated, but never used. Since the 2011 upgrade, a second parameter (radius of
the outermost closed isobar or radius of the average 34 knot wind for hurricanes) was
added by Kevin Yeh (HRD). Specifically, in the 2010 HWRF initialization, Equation
(1.4.1.12) was used for storm size correction, and b was set to zero in Equation (1.4.1.1).
In the new operational models, a and b are calculated using Equation (1.4.1.7).
Storm size correction can be problematic. The reason is that the eyewall size produced in
the model can be larger than the observed one, and the model does not support observed
small-size eyewalls. For example, the radius of maximum winds for 2005’s Hurricane
Wilma was 9 km at 140 knots for many cycles. The model-produced radius of maximum
wind was larger than 20 km. If we compress the radius of maximum winds to 9 km, the
eyewall will collapse and significant spin-down will occur. Therefore, the minimum value of
the storm eyewall radius is currently set to 19 km. The eyewall size in the model is related
to model resolution, model dynamics, and model physics.
In the storm size correction procedure, we do not match the observed radius of maximum
winds exactly. Instead, we redefine rm* as the average of the model value and the
observation. We also limit the correction to at most 15% of the model value. In the 2013
version, the limit is set as follows (the settings are the same as those in the 2012 version):
10% if rm* is smaller than 20 km; 10-15% if rm* is between 20 and 40 km; and 15% if rm*
is larger than 40 km. For the radius of the outermost closed isobar (or the average 34-kt
wind radius if the storm intensity is larger than 64 knots), the correction limit is set to
15% of the model value.
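
The blending of the model and observed RMW with the percentage limits quoted above can be
sketched as follows; the linear ramp between 20 and 40 km and the clamping form are
interpretations of the stated settings, not the exact operational code.

    def target_rmw(rm_model_km, rm_obs_km):
        """Target RMW: average of model and observed values, with the change
        capped at 10% (below 20 km), 10-15% (20-40 km, assumed linear ramp),
        or 15% (above 40 km) of the model value. Illustrative sketch only."""
        rm_star = 0.5 * (rm_model_km + rm_obs_km)
        if rm_star < 20.0:
            frac = 0.10
        elif rm_star <= 40.0:
            frac = 0.10 + 0.05 * (rm_star - 20.0) / 20.0
        else:
            frac = 0.15
        max_change = frac * rm_model_km
        lower, upper = rm_model_km - max_change, rm_model_km + max_change
        return min(max(rm_star, lower), upper)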
Even with the current settings, major spin-down may occur if the eyewall remains small for
many cycles (due to the consecutive reduction of the storm eyewall size in the
initialization). To fix this problem, size reduction is stopped if the model storm size
(measured by the average radius of the filter domain) is smaller than the radius of the
outermost closed isobar.

1.4.1.1 Surface pressure adjustment after the storm size correction

In our approximation, we only correct the surface pressure of the axi-symmetric part of
the storm. The governing equation for the axi-symmetric components along the radial
direction is
\[
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial r} + w\,\frac{\partial u}{\partial z}
- v\left(\frac{v}{r} + f_0\right) + \frac{1}{\rho}\,\frac{\partial p}{\partial r} = F_r , \qquad (1.4.1.1.1)
\]

where u, v, and w are the radial, tangential, and vertical velocity components, respectively.
Fr is friction, with Fr ≈ −Cd (u/HB) v, where HB is the top of the boundary layer. Fr can be
estimated as Fr ≈ −10⁻⁶ v away from the storm center and Fr ≈ −10⁻⁵ v near the storm
center. Dropping the small terms, Equation (1.4.1.1.1) is close to gradient wind balance.
Since we separate the hurricane component from its environment, the contribution from
the environment flow to the average tangential wind speed can be neglected. From now
on, the tangential velocity discussed refers to the vortex component.
We define the gradient wind stream function ψ as

\[
\frac{\partial \psi}{\partial r} = \frac{v^2}{r f_0} + v \qquad (1.4.1.1.2)
\]

and

\[
\psi = \int_0^r \left(\frac{v^2}{r f_0} + v\right) dr . \qquad (1.4.1.1.3)
\]

Due to the coordinate change, Equation (1.4.1.1.2) can be rewritten as the following,

\[
\frac{\partial \psi}{\partial r} = \frac{\partial \psi}{\partial r^*}\,\frac{\partial r^*}{\partial r} = \alpha\,\frac{\partial \psi}{\partial r^*} ,
\]

\[
\frac{v^2}{r f_0} + v = \frac{v^2\, r^*}{r^*\, r f_0} + v = \frac{v^2 f(r)}{r^*\, r f_0} + v \qquad \big(r = r(r^*)\big) .
\]

Therefore, the gradient wind stream function becomes (due to the coordinate transformation)

\[
\psi = \int_0^{r^*} \frac{1}{\alpha(r^*)} \left[\frac{v^2 f(r^*)}{r^*\, r(r^*)\, f_0} + v(r^*)\right] dr^* . \qquad (1.4.1.1.4)
\]

We can also define a new gradient wind stream function for the new vortex as

\[
\frac{\partial \psi^*}{\partial r^*} = \frac{v^2}{r^* f_0} + v , \qquad (1.4.1.1.5)
\]

where v is a function of r*. Therefore,

\[
\psi^* = \int_0^{r^*} \left(\frac{v^2}{r^* f_0} + v\right) dr^* . \qquad (1.4.1.1.6)
\]

Assuming the hurricane sea-level pressure component is proportional to the gradient wind
stream function at model level 1 (roughly 40 m in height), i.e.,

\[
\Delta p(r^*) = c(r^*)\,\psi(r^*) \qquad (1.4.1.1.7)
\]

and

\[
\Delta p^*(r^*) = c(r^*)\,\psi^*(r^*) , \qquad (1.4.1.1.8)
\]

where c(r*) is a function of r* and represents the impact of friction on the gradient wind
balance. If friction is neglected, c(r*) = 1.0 and we have exact gradient wind balance.
From equations (1.4.1.1.7) and (1.4.1.1.8), we have

\[
\Delta p^* = \Delta p\,\frac{\psi^*}{\psi} , \qquad (1.4.1.1.9)
\]

where ∆p = ps − pe and ∆p* = ps* − pe are the hurricane sea-level pressure perturbations
before and after the adjustment, and pe is the environment sea-level pressure.
Note that the pressure adjustment is small due to the grid stretching. For example, if α in
Equation (1.4.1.1) is a constant, we can show that Equation (1.4.1.1.4) becomes

\[
\psi = \int_0^{r^*} \left(\frac{v^2}{r^* f_0} + v\right) \frac{1}{\alpha}\, dr^* . \qquad (1.4.1.1.10)
\]

This value is very close to that of Equation (1.4.1.1.6) since the first term dominates.
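
A numerical sketch of the rescaling in Equation (1.4.1.1.9) is shown below: the gradient wind
stream functions are accumulated by trapezoidal integration of the tangential wind profile
before and after the size correction, and their ratio rescales the axi-symmetric surface
pressure perturbation. The profile handling and the Coriolis value f0 are placeholders, not
the operational implementation.

    import numpy as np

    def gradient_wind_streamfunction(r_m, v_ms, f0=5.0e-5):
        """psi(r) = integral of (v**2/(r*f0) + v) dr, accumulated with the
        trapezoidal rule; r_m (metres, r_m[0] > 0) and v_ms on the same grid."""
        integrand = v_ms**2 / (r_m * f0) + v_ms
        steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r_m)
        return np.concatenate(([0.0], np.cumsum(steps)))

    def adjust_pressure_perturbation(dp, v_before, v_after, r_m, f0=5.0e-5):
        """Delta p* = Delta p * psi*/psi (Equation 1.4.1.1.9), pointwise in radius."""
        psi = gradient_wind_streamfunction(r_m, v_before, f0)
        psi_star = gradient_wind_streamfunction(r_m, v_after, f0)
        ratio = np.divide(psi_star, psi, out=np.ones_like(psi), where=psi != 0.0)
        return dp * ratio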

1.4.1.2 Temperature adjustment


Once the surface pressure is corrected, we need to correct the temperature field.
Let’s consider the vertical equation of motion. Neglecting the Coriolis, water load, and
viscous terms, we have,

\[
\frac{dw}{dt} = -\frac{1}{\rho}\,\frac{\partial p}{\partial z} - g . \qquad (1.4.1.2.1)
\]

The first term on the right-hand side is the pressure gradient force, and g is gravity. dw/dt
is the total derivative (the Lagrangian air parcel acceleration), which, in the large-scale
environment, is small compared to either of the last two terms. Therefore, we have

\[
-\frac{1}{\rho}\,\frac{\partial p}{\partial z} - g = 0 ,
\]

or

\[
\frac{\partial p}{\partial z} = -\frac{p}{R\,T_v}\,g . \qquad (1.4.1.2.2)
\]

Applying equation (1.4.1.2.2) to the environmental field and integrating from surface to
model top, we get:
\[
\ln \frac{p_s}{p_T} = \frac{g}{R} \int_0^H \frac{dz}{\overline{T}_v} , \qquad (1.4.1.2.3)
\]

where H and pT are the height and pressure at the model top, respectively, and T̄v is the
virtual temperature of the environment.
The hydrostatic equation for the total field (environment field + vortex) is

\[
\ln \frac{p_s + \Delta p}{p_T} = \frac{g}{R} \int_0^H \frac{dz}{\overline{T}_v + \Delta T_v} , \qquad (1.4.1.2.4)
\]

where ∆p and ∆Tv are the sea-level pressure and virtual temperature perturbations for the
hurricane vortex. Since ∆p ≪ ps and ∆Tv ≪ T̄v, we can linearize Equation (1.4.1.2.4):

\[
\ln\!\left[\frac{p_s}{p_T}\left(1 + \frac{\Delta p}{p_s}\right)\right]
= \frac{g}{R} \int_0^H \frac{dz}{\overline{T}_v + \Delta T_v}
\approx \frac{g}{R} \int_0^H \frac{dz}{\overline{T}_v}\left(1 - \frac{\Delta T_v}{\overline{T}_v}\right) . \qquad (1.4.1.2.5)
\]

Subtract Equation (1.4.1.2.3) from Equation (1.4.1.2.5) and we have

\[
\ln\!\left(1 + \frac{\Delta p}{p_s}\right) \approx -\frac{g}{R} \int_0^H \frac{\Delta T_v}{\overline{T}_v^{\,2}}\, dz ,
\]

or

\[
\frac{\Delta p}{p_s} \approx -\frac{g}{R} \int_0^H \frac{\Delta T_v}{\overline{T}_v^{\,2}}\, dz . \qquad (1.4.1.2.6)
\]

Multiplying Equation (1.4.1.2.6) by Γ(r*) = ψ*/ψ (Γ is a function of x and y only), we have

\[
\frac{\Gamma\,\Delta p}{p_s} \approx -\frac{g}{R} \int_0^H \frac{\Gamma\,\Delta T_v}{\overline{T}_v^{\,2}}\, dz . \qquad (1.4.1.2.7)
\]

We choose a simple solution to Equation (1.4.1.2.7), i.e., the virtual temperature correction
is proportional to the magnitude of the virtual temperature perturbation. So the new virtual
temperature is

\[
T_v^* = \overline{T}_v + \Gamma\,\Delta T_v = T_v + (\Gamma - 1)\,\Delta T_v . \qquad (1.4.1.2.8)
\]

In terms of the temperature field, we have

\[
T^* = \overline{T} + \Gamma\,\Delta T = T + (\Gamma - 1)\,\Delta T , \qquad (1.4.1.2.9)
\]

where T is the 3D temperature before the surface pressure correction and ∆T is the
perturbation temperature for vortex #1.
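
A compact sketch of Equation (1.4.1.2.9), broadcasting the two-dimensional factor Γ = ψ*/ψ
over the three-dimensional temperature perturbation, might look as follows (the array shapes
are assumptions):

    import numpy as np

    def adjust_temperature(T, dT, gamma_2d):
        """T* = T + (Gamma - 1) * dT, Equation (1.4.1.2.9).
        T, dT: 3D fields shaped (nz, ny, nx); gamma_2d: psi*/psi shaped (ny, nx)."""
        return T + (gamma_2d[np.newaxis, :, :] - 1.0) * dT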

1.4.1.3 Water vapor adjustment


Assume the relative humidity is unchanged before and after the temperature correction,
i.e.,

\[
RH = \frac{e}{e_s(T)} \approx \frac{e^*}{e_s(T^*)} , \qquad (1.4.1.3.1)
\]

where e and es(T) are the vapor pressure and the saturation vapor pressure in the model
guess fields, respectively, and e* and es* = es(T*) are the vapor pressure and the saturation
vapor pressure after the temperature adjustment.
Using the definition of the mixing ratio,

\[
q = 0.622\,\frac{e}{p - e} , \qquad (1.4.1.3.2)
\]

at the same pressure level and from Equation (1.4.1.3.1),

\[
\frac{e^*}{e} \approx \frac{e_s^*}{e_s} . \qquad (1.4.1.3.3)
\]

Therefore, the new mixing ratio becomes

\[
q^* \approx \frac{e^*}{e}\,q \approx \frac{e_s^*}{e_s}\,q \approx q + \left(\frac{e_s^*}{e_s} - 1\right) q . \qquad (1.4.1.3.4)
\]

From the saturation vapor pressure formula

\[
e_s(T) = 6.112\,\exp\!\left[17.67\,\frac{T - 273.16}{T - 29.66}\right] , \qquad (1.4.1.3.5)
\]

we can write

\[
\frac{e_s^*}{e_s} = \exp\!\left[\frac{17.67 \times 243.5\,(T^* - T)}{(T^* - 29.66)(T - 29.66)}\right] . \qquad (1.4.1.3.6)
\]

Substituting Equation (1.4.1.3.6) into (1.4.1.3.4), we have the new mixing ratio after the
temperature field is adjusted.
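
The moisture adjustment of Equations (1.4.1.3.4) and (1.4.1.3.6) can be written compactly as
below (temperatures in kelvin; a minimal sketch, not the operational routine):

    import numpy as np

    def adjust_mixing_ratio(q, T_old, T_new):
        """q* = q * es(T*)/es(T), holding relative humidity fixed
        (Equations 1.4.1.3.4 and 1.4.1.3.6); temperatures in kelvin."""
        ratio = np.exp(17.67 * 243.5 * (T_new - T_old)
                       / ((T_new - 29.66) * (T_old - 29.66)))
        return q * ratio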

1.4.2 Storm intensity correction


Generally speaking, the storm in the background field has a different maximum wind
speed compared to the observations. We need to correct the storm intensity based on the
observations, which is discussed in detail in the following sections.

1.4.2.1 Computation of intensity correction factor ß

Let’s consider the general formulation in the traditional x, y, and z coordinates, where $u_1^*$ and $v_1^*$ are the background horizontal velocity components, and $u_2$ and $v_2$ are the vortex horizontal velocity components to be added to the background fields. We define

$F_1 = \sqrt{(u_1^* + u_2)^2 + (v_1^* + v_2)^2}$    (1.4.2.1.1)

and

$F_2 = \sqrt{(u_1^* + \beta u_2)^2 + (v_1^* + \beta v_2)^2}$.    (1.4.2.1.2)

Function F1 is the wind speed if we simply add a vortex to the environment (or
background fields). Function F2 is the new wind speed after the intensity correction.
We consider two cases here.

Case I: F1 is larger than the observational maximum wind speed. We set $u_1^*$ and $v_1^*$ to be the environment wind components, i.e., $u_1^* = U$ and $v_1^* = V$ (the vortex is removed and the field is relatively smooth); and $u_2 = u_1$ and $v_2 = v_1$ are the vortex horizontal wind components from the previous cycle’s 6-h forecast (we call it vortex #1, which contains both the axi-symmetric and asymmetric parts of the vortex).
Case II: F1 is smaller than the observational maximum wind speed. We add the vortex back into the environment fields after the grid stretching, i.e., $u_1^* = U + u_1$ and $v_1^* = V + v_1$. We choose $u_2$ and $v_2$ to be an axi-symmetric composite vortex (vortex #2), which has the same radius of maximum wind as that of the first vortex.
In both cases, we can assume that the maximum wind speeds for F1 and F2 occur at the same model grid point. To find β, we first locate the model grid point where F1 is at its maximum. Let’s denote the wind components at this model grid point as $u_1^m$, $v_1^m$, $u_2^m$, and $v_2^m$ (for convenience, we drop the superscript m), so that

$(u_1^* + \beta u_2)^2 + (v_1^* + \beta v_2)^2 = v_{obs}^2$,    (1.4.2.1.3)

where vobs is the 10m observed wind converted to the first model level.

Solving for β, we have

$\beta = \frac{-u_1^* u_2 - v_1^* v_2 + \sqrt{v_{obs}^2\,(u_2^2 + v_2^2) - (u_1^* v_2 - v_1^* u_2)^2}}{u_2^2 + v_2^2}$.    (1.4.2.1.4)

The procedure to correct wind speed is as follows.


First, we calculate the maximum wind speed from Equation (1.4.2.1.1) by adding the
vortex into the environment fields. If the maximum of F1 is larger than the observed wind
speed, we classify it as Case I and calculate the value of β. If the maximum of F1 is
smaller than the observed wind speed, we classify it as Case II. The reason we classify it
as Case II is that we don’t want to amplify the asymmetric part of the storm. Amplifying
it may negatively affect the track forecasts. In Case II, we first add the original vortex to
the environment fields after the storm size correction, then add a small portion of an axi-
symmetric composite storm. The composite storm portion is calculated from Equation
(1.4.2.1.4). Finally, the new vortex 3D wind field becomes

$u(x, y, z) = u_1^*(x, y, z) + \beta\,u_2(x, y, z)$

$v(x, y, z) = v_1^*(x, y, z) + \beta\,v_2(x, y, z)$.
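A minimal Python sketch of the β calculation and the subsequent wind correction is given below, using scalar wind components at the grid point of maximum F1. The numerical values are hypothetical and the functions are illustrative, not the HWRF implementation.

```python
import numpy as np

def intensity_factor_beta(u1, v1, u2, v2, v_obs):
    """Solve Eq. (1.4.2.1.3) for beta, i.e., Eq. (1.4.2.1.4), at the point of maximum F1."""
    denom = u2**2 + v2**2
    disc = v_obs**2 * denom - (u1 * v2 - v1 * u2)**2
    return (-u1 * u2 - v1 * v2 + np.sqrt(disc)) / denom

def corrected_wind(u1, v1, u2, v2, beta):
    """New vortex wind components: u = u1* + beta*u2, v = v1* + beta*v2."""
    return u1 + beta * u2, v1 + beta * v2

# Hypothetical Case I: environment wind (u1, v1), vortex #1 wind (u2, v2), and an
# observed 45 m/s maximum wind (already converted to the first model level).
beta = intensity_factor_beta(u1=5.0, v1=2.0, u2=40.0, v2=10.0, v_obs=45.0)
u_new, v_new = corrected_wind(5.0, 2.0, 40.0, 10.0, beta)
print(beta, np.hypot(u_new, v_new))   # the corrected speed matches v_obs
```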

1.4.2.2 Surface pressure, temperature, and moisture adjustments after the intensity correction
If the background fields are produced by high-resolution models (such as HWRF), the intensity corrections are small and a correction of the storm structure is not necessary. Since the guess fields should be close to the observations, we have:

In Case I, β is close to 1;
In Case II, β is close to 0.
After the wind speed correction, we need to adjust the sea-level pressure, 3D temperature,
and the water vapor fields. These adjustments are described below.
In Case I, β is close to 1. Following the discussion in Section 1.4.1.1, we define the
gradient wind stream function ψ as

$\frac{\partial \psi}{\partial r} = \frac{v_2^2}{r f_0} + v_2$    (1.4.2.2.1)

and
$\psi_2 = \int_0^r \left(\frac{v_2^2}{r f_0} + v_2\right)dr$.    (1.4.2.2.2)

The new gradient wind stream function is

$\psi_{new} = \int_0^r \left[\frac{(\beta v_2)^2}{r f_0} + \beta v_2\right]dr$.    (1.4.2.2.3)

The new sea-level pressure perturbation is


$\Delta p^{new} = \frac{\psi_{new}}{\psi_2}\,\Delta p$,    (1.4.2.2.4)

where $\Delta p = p_s - p_e$ and $\Delta p^{new} = p_s^{new} - p_e$ are the hurricane sea-level pressure perturbations before and after the adjustment, and $p_e$ is the environment sea-level pressure.
In Case II, β is close to 0. Let’s define
$\psi_1 = \int_0^r \left(\frac{v_1^2}{r f_0} + v_1\right)dr$,    (1.4.2.2.5)

and the new gradient wind stream function is

$\psi_{new} = \int_0^r \left[\frac{(v_1 + \beta v_2)^2}{r f_0} + (v_1 + \beta v_2)\right]dr$.    (1.4.2.2.6)

And the new sea-level pressure perturbation is calculated as,

$\Delta p^{new} = \frac{\psi_{new}}{\psi_1}\,\Delta p$.    (1.4.2.2.7)

The sea-level pressures resulting from Equations (1.4.2.2.4) and (1.4.2.2.7) are supposed to be close to the observed surface pressure. However, if the model has an incorrect surface pressure-wind relationship, they may differ considerably from the observation. In the 2013 HWRF, the pressure-wind relationship is further improved, and the adjusted perturbation can be limited to within 10% of the observed value $\Delta p_{obs}$ without producing large spin-up/spin-down problems.
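The stream-function scaling can be illustrated with the short Python sketch below, which builds ψ from a hypothetical tangential wind profile, forms Γ(r) = ψ_new/ψ_2, and rescales the sea-level pressure perturbation as in Equation (1.4.2.2.4). The wind profile, Coriolis parameter, and β value are all assumed for illustration and are not HWRF settings.

```python
import numpy as np

def gradient_wind_streamfunction(r, v, f0):
    """psi(r): cumulative integral of v**2/(r*f0) + v, cf. Eqs. (1.4.2.2.2)-(1.4.2.2.3)."""
    integrand = v**2 / (r * f0) + v
    return np.cumsum(integrand * np.gradient(r))

# Hypothetical Rankine-like vortex and intensity factor beta from Section 1.4.2.1
f0 = 5.0e-5                                      # Coriolis parameter at the storm center (s-1)
r = np.linspace(1.0e3, 300.0e3, 300)             # radius (m), starting off-center to avoid r = 0
v2 = 40.0 * np.minimum(r / 50.0e3, 50.0e3 / r)   # vortex tangential wind (m/s)
beta = 0.9

psi2 = gradient_wind_streamfunction(r, v2, f0)
psi_new = gradient_wind_streamfunction(r, beta * v2, f0)
gamma = psi_new / psi2                           # Gamma(r), also used in Eq. (1.4.2.2.8)

dp = -40.0                                       # background SLP perturbation (hPa)
print("scaled central perturbation:", gamma[0] * dp, "hPa")   # Eq. (1.4.2.2.4)
```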

The correction of the temperature field is as follows. In Case I, we define

$\Gamma = \frac{\psi_{new}}{\psi_2}$.    (1.4.2.2.8)

Then we use the following equation to correct the temperature fields.


$T^* = T_e + \Gamma\,\Delta T_1 = T + (\Gamma - 1)\,\Delta T_1$    (1.4.2.2.9)

In Case II, we define

$\Gamma = \frac{\psi_{new}}{\psi_1}$    (1.4.2.2.10)

and
$T^* = T_e + \Delta T_1 + (\Gamma - 1)\,\Delta T_2 = T + (\Gamma - 1)\,\Delta T_2$,    (1.4.2.2.11)

where T is the 3D background temperature field (environment + vortex #1), and $\Delta T_2$ is the temperature perturbation of the axi-symmetric composite vortex.
The corrections of water vapor in both cases are the same as those discussed in Section
1.4.1.3.
We would like to mention that the storm intensity correction is, in fact, a data analysis.
The observation data used here is the surface maximum wind speed (single point data),
and the background error correlations are flow dependent and based on the storm
structure. The storm structure used for the background error correlation is vortex #1 in
Case I, and vortex #2 in Case II (except for water vapor which still uses the vortex #1
structure). Vortex #2 is an axi-symmetric vortex. If the storm structure in vortex #1 could
be trusted, one could choose vortex #2 as the axi-symmetric part of vortex #1. In HWRF,
the structure of vortex #1 is not completely trusted when the background storm is weak,
and therefore an axi-symmetric composite vortex from old model forecasts is employed
as vortex #2.

1.5 Data assimilation through GSI in HWRF


In the 2013 operational HWRF, major changes have been made to the data assimilation
procedure. First, the data assimilation scheme used for HWRF has been upgraded from a
3DVAR scheme to a hybrid ensemble-variational data assimilation scheme. In the 2012
HWRF 3DVAR data assimilation scheme, the background error covariance was obtained through the National Meteorological Center (NMC, now NCEP) method and was isotropic and static. By introducing the hybrid method, flow-dependent background error covariances estimated from short-term ensemble forecasts are now incorporated into the variational
framework of the data assimilation scheme. The hybrid method has been shown to be
better than the stand-alone ensemble-based method (e.g., the Ensemble Kalman filter, EnKF), especially when the ensemble size is small or large model error is present (Wang et al. 2007b).
In GSI, the ensemble covariance is incorporated in the variational scheme through the
extended control variable method (Lorenc 2003; Buehner 2005). The following
description of the algorithm follows Wang 2010.
The analysis increment, denoted as $\mathbf{x}'$, is a sum of two terms:

$\mathbf{x}' = \mathbf{x}'_1 + \sum_{k=1}^{K}\left(\mathbf{a}_k \circ \mathbf{x}^e_k\right)$,    (1.5.1)

where $\mathbf{x}'_1$ is the increment associated with the GSI static background covariance and the second term is the increment associated with the flow-dependent ensemble covariance. In the second term, $\mathbf{x}^e_k$ is the kth ensemble perturbation normalized by $\sqrt{K-1}$, where K is the ensemble size. The vector $\mathbf{a} = (\mathbf{a}_1^T, \ldots, \mathbf{a}_K^T)^T$ contains the extended control variables $\mathbf{a}_k$ for each ensemble member. The second term represents a local linear combination of ensemble perturbations, and $\mathbf{a}_k$ is the weight applied to the kth ensemble perturbation.

The cost function minimized to obtain $\mathbf{x}'$ is

$J(\mathbf{x}'_1, \mathbf{a}) = \beta_1 \frac{1}{2}\,\mathbf{x}'^{\,T}_1\,\mathbf{B}^{-1}\,\mathbf{x}'_1 + \beta_e \frac{1}{2}\sum_{k=1}^{K}\mathbf{a}_k^{T}\,\mathbf{A}^{-1}\,\mathbf{a}_k + \frac{1}{2}\,(\mathbf{y} - \mathbf{H}\mathbf{x}')^{T}\,\mathbf{R}^{-1}\,(\mathbf{y} - \mathbf{H}\mathbf{x}') + J_c$,    (1.5.2)

where

$\mathbf{B}$ is the static background error covariance matrix,

$\beta_1$ is the weight applied to the static background error covariance,

A defines the spatial correlation of the extended control variables $\mathbf{a}_k$,

$\beta_e$ is the weight applied to the ensemble covariance,

y is the innovation vector,

R is the observational and representativeness error covariance matrix,

H is the observation operator, and

$J_c$ is the constraint term.


Wang (2007a) proved that the solution found using (1.5.1) and (1.5.2) is equivalent to that of a scheme with the ensemble covariance explicitly included as part of the background covariance. It is also shown in this paper that matrix A determines the covariance localization on the ensemble covariance.
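The composition of the hybrid increment in Equation (1.5.1) can be sketched in a few lines of NumPy. This toy example only illustrates the algebra of the extended control variables; all shapes and values are hypothetical and no minimization is performed.

```python
import numpy as np

def hybrid_increment(x1_inc, ens_perts, alpha):
    """Eq. (1.5.1): x' = x1' + sum_k (a_k o x_k^e), with 'o' the elementwise (Schur) product.

    x1_inc    : increment from the static background term, shape (n,)
    ens_perts : normalized ensemble perturbations x_k^e, shape (K, n)
    alpha     : extended control variables a_k, shape (K, n)
    """
    return x1_inc + np.sum(alpha * ens_perts, axis=0)

# Toy setup: n = 4 grid points, K = 3 ensemble members (values are arbitrary)
rng = np.random.default_rng(0)
K, n = 3, 4
x1_inc = rng.normal(scale=0.5, size=n)
ens_perts = rng.normal(size=(K, n)) / np.sqrt(K - 1)   # perturbations normalized by sqrt(K-1)
alpha = np.full((K, n), 1.0 / K)                       # smooth, equal weights for illustration
print(hybrid_increment(x1_inc, ens_perts, alpha))
```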
The same conjugate gradient minimization algorithm used for the 3DVAR scheme is
used to find the optimal solution for the analysis problem (1.5.1) and (1.5.2), except that the
control variable and the background covariance are extended as

, (1.5.3)

and

. (1.5.4)

More information about the hybrid algorithm can be found in Wang (2010). The iteration
algorithm can be found in the GSI User’s Guide Chapter 6, Section 6.1. Two outer loops
with 50 iterations each are used for HWRF (miter=2, niter(1)=50, niter(2)=50). The
outer loop consists of more complete (nonlinear) observation operators and quality
control. Usually, simpler observation operators are used in the inner loop. Variational
quality control, which is part of the inner loop, is not used for HWRF (noiqc=.false.).
The analysis variables are: streamfunction; unbalanced part of velocity potential;
unbalanced part of temperature; unbalanced part of surface pressure; pseudo-relative
humidity (qoption = 1) or normalized relative humidity (qoption = 2); ozone mixing ratio;
cloud condensation mixing ratio; and satellite bias correction coefficients. Ozone and
cloud variables are not analyzed in HWRF. The definition of the normalized relative
humidity allows for a multivariate coupling of the moisture, temperature, and pressure
increments, as well as flow dependence (Kleist et al. 2009). Therefore this option is used
for HWRF.
The 2013 HWRF hybrid data assimilation system is a one-way hybrid system. It uses the
global hybrid EnKF/Var system run at T254L64 to provide ensemble perturbations
(Figure 1.2). It is called a one-way hybrid, because it does not include a HWRF ensemble
forecast and the associated EnKF analysis components. The HWRF hybrid analysis does
not feed back to the updated ensemble perturbations through EnKF, as is the case with
the global hybrid system. The first guess of the HWRF hybrid analysis at each analysis
time is the global GDAS 6-h forecast at T574L64 after vortex relocation is done in the
global model, the same first guess used in global hybrid analysis.

[Figure 1.2 graphic: the top panel, "HWRF one-way hybrid system," shows the sequence HWRF forecast → vortex initialization → GSI hybrid Ens/Var analysis (on the ghost domain, or on d01/d02/d03) → HWRF forecast; the bottom panel, "global hybrid EnKF/Var system," shows the K ensemble member forecasts, the EnKF member update, the recentering of the member analyses, and the high-resolution forecast with its GSI hybrid Ens/Var analysis.]
Figure 1.2. Flow diagram of HWRF and GFS hybrid data assimilation systems.
Processes described by the black and magenta lines always run, while those described by
the red and blue lines run when inner core data assimilation is performed or is not
performed, respectively.
In the 2013 HWRF, the method to combine data assimilation and vortex initialization has
been modified. Most of the time, observations within the vortex area are quite sparse.
Without good data coverage, applying the GSI analysis after the vortex initialization can
negatively affect the vortex structure and wind-mass balance. To address this issue in the
2012 HWRF, conventional data within 1200 km of the storm center were not assimilated.
However, inner core observations, such as NOAA P3 tail Doppler radar (TDR) radial
winds and aircraft (US Air Force WC-130J and NOAA P3) reconnaissance data (flight
level observations, dropsonde, and surface wind speed data), can be used to retrieve key
structures of the TC vortices. The capabilities of assimilating TDR and aircraft
reconnaissance data have been added into GSI and are available in the HWRF v3.5a.
Based on results obtained from retrospective testing for the 2010-2012 hurricane seasons,
it was decided to assimilate TDR data in the operational 2013 HWRF. On the other hand,
other aircraft reconnaissance data, including flight level observations, dropsondes, and
surface wind speeds obtained with the SFMR, will not be assimilated in operations due to
negative impact noted for intensity forecasts.

However, the TDR and reconnaissance data are not always available. When inner core
observations are not available, the GSI hybrid analysis is performed only on the HWRF
outer domain (75°×75°, 0.18° horizontal resolution). This outer domain analysis then
constitutes the TC environment. In the vortex initialization procedure, the original vortex
from the outer domain analysis is replaced by the relocated, size- and intensity-corrected,
HWRF or HDAS vortex, or the bogus vortex (see Section 1.2). The fields on the vortex
initialization domain (30°×30°, 0.02° horizontal resolution) are then interpolated into the
outer domain and two inner nests to initialize the forecast. This procedure is represented
by the blue arrows in Figure 1.2.
If inner core data assimilation is turned on and the inner core observations are available,
the hybrid analysis is performed on the intermediate ghost domain (20°×20°, 0.02°
horizontal resolution), and the GSI analysis on the ghost domain is done after vortex
initialization. This is represented by the red arrows in Figure 1.2. Note that, when inner
core data are assimilated, the hybrid analysis on the outer domain is not used as input to
the vortex initialization. Instead, the vortex initialization uses the GDAS forecast as
input. This is because the observations are incorporated by the GSI hybrid analysis performed on the ghost domain after the vortex initialization. After data assimilation, the
ghost domain analyses are interpolated onto the HWRF outer domain and two inner
domains to initialize the forecast. For the HWRF outer domain, a blending zone is added
around the ghost domain boundary area, so that the model fields gradually change from
the values of the ghost domain analysis to the values of the HWRF outer domain analysis.
To collect inner core observations, the aircraft has to penetrate the targeted TC a few
times following a certain pattern. The pattern depends on storm intensity. It takes a few
hours to finish one reconnaissance mission. To take into account the distribution of the
inner core observations in the data assimilation time window, a technique called First
Guess at Appropriate Time (FGAT) is used. In traditional 3DVAR data assimilation
schemes, observations are assumed to be valid at the analysis time. With FGAT,
observations are compared with the first guess at the observation time to obtain the
innovation. The first guess at the observation time is obtained within GSI by interpolating between the two closest time levels of the background fields. The GDAS forecasts with relocation applied can only be computed at three-hourly intervals. Therefore, the preprocessing steps and the vortex initialization are run at the HWRF initialization time and also at plus and minus three hours.
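Conceptually, FGAT amounts to a linear interpolation of the background to each observation time before the innovation is formed, as in the Python sketch below (a schematic only; the background times, field values, and function names are hypothetical).

```python
import numpy as np

def fgat_first_guess(bg_times, bg_fields, obs_time):
    """Interpolate the two bracketing background fields to the observation time."""
    idx = np.searchsorted(bg_times, obs_time)
    i0, i1 = max(idx - 1, 0), min(idx, len(bg_times) - 1)
    if i0 == i1:                      # observation outside the window: use the nearest field
        return bg_fields[i0]
    w = (obs_time - bg_times[i0]) / (bg_times[i1] - bg_times[i0])
    return (1.0 - w) * bg_fields[i0] + w * bg_fields[i1]

# Backgrounds at -3 h, 0 h, and +3 h relative to the analysis time (toy 2-point field)
bg_times = np.array([-3.0, 0.0, 3.0])
bg_fields = np.array([[290.0, 291.0],
                      [291.0, 292.0],
                      [292.0, 293.0]])
print(fgat_first_guess(bg_times, bg_fields, obs_time=1.5))   # halfway between 0 h and +3 h
```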
The GDAS forecast with relocation applied is preferred, because there are occasions in
which two storms are close to each other and present in the HWRF outer domain at the
same time. It would be undesirable for the initial position error of the non-targeted storm to impact the forecast of the targeted storm.
For the HWRF hybrid analysis, 6-hour forecasts of the GFS 80-member ensemble are
used to provide the ensemble covariance. For the outer domain analysis, $1/\beta_1$ (beta1_inv in the GSI namelist) is set to 0.25, which means 75% of the weight is placed on the ensemble covariance. For the ghost domain analysis, $1/\beta_1$ is set to 0.20, which means 80% of the weight is from the ensemble covariance.

As mentioned earlier, matrix A in (1.5.2) effectively conducts the covariance localization
on the ensemble covariance. In GSI, a recursive filter is used to approximate the static background error covariance $\mathbf{B}$, as well as A. The correlation length scale of the
recursive filter used to approximate A prescribes the covariance localization length scale
for ensemble covariance. The parameters in the GSI namelist that define the horizontal
and vertical localization length scales refer to the recursive filter e-folding length scale. In
most EnKF applications, the correlation function given by Eq. (4.10) of Gaspari and
Cohn (1999) is used for covariance localization. The Gaspari-Cohn type of localization
length scale refers to the distance at which the covariance is forced to be zero, which is
roughly equivalent to the e-folding length scale divided by 0.388.
For the HWRF outer domain analysis, the horizontal localization length scale is set to
600 km (s_ens_h=600) from the 1st through the 26th model level (approximately 300
hPa). Above the 26th model level, the horizontal length scale gradually increases,
reaching 900 km at the model top. For the ghost domain analysis, the horizontal
localization length scale does not change vertically and is set to 150 km. This smaller
value is used because the higher horizontal resolution can resolve smaller-scale
structures. The vertical localization length scale for the outer domain is 0.5 in units of $\ln p$ (s_ens_v=-0.5), where p is the pressure in units of cb. Note that if the vertical localization length scale is measured in units of $\ln p$, s_ens_v is expressed as a negative value and the length scale is the absolute value of s_ens_v. For the ghost domain
analysis, the vertical localization length scale is 10 in vertical grid units for storms
weaker than category 1 and is 20 grid units for storms equal to or stronger than category
1. When using GSI with HWRF, ‘wrf_nmm_regional’ and ‘uv_hyb_ens’ in the GSI
namelist should be set to ‘true’. The use of ‘uv_hyb_ens=.true.’ means that ensemble
perturbations contain the zonal and meridional components of the wind instead of stream
function and velocity potential.
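For reference, the settings quoted in this section map onto the GSI namelist roughly as in the fragment below. This is only an illustrative sketch: the namelist group names follow common GSI conventions and were not taken from the operational HWRF scripts, so they should be checked against the actual configuration files.

```
 &SETUP
   miter=2, niter(1)=50, niter(2)=50,   ! two outer loops, 50 inner iterations each
   qoption=2,                           ! normalized relative humidity analysis variable
   noiqc=.false.,
   wrf_nmm_regional=.true.,
 /
 &HYBRID_ENSEMBLE
   uv_hyb_ens=.true.,                   ! ensemble perturbations carry u and v
   beta1_inv=0.25,                      ! outer domain: 75% weight on the ensemble covariance
   s_ens_h=600,                         ! horizontal localization e-folding scale (km)
   s_ens_v=-0.5,                        ! negative value: vertical scale of 0.5 in ln(p)
 /
```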
Conventional observations assimilated in the HWRF outer and ghost domains include:
• radiosondes;
• dropsondes;
• aircraft reports (AIREP/PIREP, RECCO, MDCRS-ACARS, TAMDAR, AMDAR);
• surface ship and buoy observations;
• surface observations over land;
• pibal winds;
• wind profilers;
• radar-derived Velocity Azimuth Display (VAD) wind;
• WindSat scatterometer winds; and

• integrated precipitable water derived from the Global Positioning System.

Dropsonde wind data and surface pressure data are not assimilated near the storm center
because they negatively impact the forecast. Dropsonde wind data are excluded within a
radius of 111 km or three times the RMW, whichever is larger. Surface pressure data are
excluded within a radius of 200 km or the ROCI, whichever is larger.
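The near-center data screening described above amounts to a simple distance test, sketched here in Python (distances in km; the function names and the example storm parameters are hypothetical).

```python
def assimilate_dropsonde_wind(dist_km, rmw_km):
    """Dropsonde winds are excluded within max(111 km, 3 * RMW) of the storm center."""
    return dist_km > max(111.0, 3.0 * rmw_km)

def assimilate_surface_pressure(dist_km, roci_km):
    """Surface pressure obs are excluded within max(200 km, ROCI) of the storm center."""
    return dist_km > max(200.0, roci_km)

# Hypothetical storm with RMW = 50 km and ROCI = 300 km
print(assimilate_dropsonde_wind(dist_km=140.0, rmw_km=50.0))       # False (inside 3*RMW = 150 km)
print(assimilate_surface_pressure(dist_km=250.0, roci_km=300.0))   # False (inside the ROCI)
```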
Mean sea level pressure data from the TCVitals are only assimilated in the HWRF outer
domain. MSLP data may not have much impact on the targeted storm, because the vortex
in the outer domain analysis is later replaced by the modified vortex through the vortex
initialization procedure. However, the TCVitals MSLP may help describe non-targeted
storms in the outer domain. Mean sea level pressure is not assimilated in the ghost
domain because it degrades the wind forecast, which is of primary importance for the
NHC. This is a result of an inconsistency between the forecast and observed wind-
pressure relationship.
Satellite radiance observations and satellite wind estimates are also not assimilated. The
main issues with radiance data assimilation are bias correction and the fact that the
HWRF model top is relatively low (50 hPa), requiring careful selection of the channels.
Negative impact on both track and intensity forecasts was found when satellite wind
estimates were assimilated. Quality control is one of the issues with satellite wind data.
More research and testing are needed to realize a positive impact using radiance and wind
estimates in HWRF.
The NOAA P3 TDR, using the Fore-Aft Scanning Technique (FAST), probes the three-dimensional wind field in the inner cores of hurricanes (Gamache et al. 1995). The antenna is programmed to scan as much as 25° fore or aft of the plane perpendicular to the fuselage. Major quality control of the radial velocity data, including (1) removing the projection of the aircraft motion on the observed Doppler velocity, (2) removing the reflection of the main lobe and side lobes off the sea surface, (3) removing noise, and (4) unfolding, is conducted aboard the P3 aircraft before the data are sent to the ground.
The TDR data in Binary Universal Form for the Representation of meteorological data
(BUFR) format contain quality-controlled radial velocities averaged over 8 gates along
the radial direction. Further data thinning to the model resolution and quality control of
the TDR radial velocities are performed before the data are assimilated. Figure 1.3 is an
example of the TDR radial velocity data assimilated for Hurricane Earl at 12 Z on August
29, 2010. The observation error, including the representative error, of the radial velocity
data is set to be 5 ms-1. When the difference between the observation and the background
field is more than 10 ms-1, the observation error gradually increases to 10 ms-1.
Observations differing from the background more than 20 ms-1 are rejected. Positive
impact of assimilating TDR data was found in a HWRF experiment performed during the
2012 hurricane season.
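The error assignment and gross check for the TDR radial velocities can be summarized schematically as below; the exact shape of the error growth between 10 and 20 m s⁻¹ departures is not specified in the text, so a linear ramp is assumed here purely for illustration.

```python
def tdr_radial_wind_error(innovation):
    """Schematic observation error (m/s) for a TDR radial velocity given its innovation.

    Base error (including representativeness) is 5 m/s; beyond a 10 m/s departure the
    error grows toward 10 m/s (a linear ramp is assumed here), and departures larger
    than 20 m/s are rejected (None).
    """
    d = abs(innovation)
    if d > 20.0:
        return None                # rejected by the gross check
    if d <= 10.0:
        return 5.0
    return 5.0 + 5.0 * (d - 10.0) / 10.0

for d in (3.0, 12.0, 25.0):
    print(d, tdr_radial_wind_error(d))
```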

Figure 1.3. NOAA TDR radial velocities between 800 hPa and 700 hPa assimilated at 12
Z on August 29, 2010.

2.0 Princeton Ocean Model for Tropical Cyclones (POM-TC)
2.1 Introduction
The three-dimensional, primitive equation, numerical ocean model that has become
widely known as the POM was originally developed by Alan F. Blumberg and George L.
Mellor in the late 1970s. One of the more popularly cited references for the early version
of POM is Blumberg and Mellor (1987), in which the model was principally used for a
variety of coastal ocean circulation applications. Through the 1990s and 2000s, the
number of POM users increased enormously, reaching over 3500 registered users as of
October 2009. During this time, many changes were made to the POM code by a variety
of users, and some of these changes were included in the “official” versions of the code
housed at Princeton University (http://aos.princeton.edu/WWWPUBLIC/htdocs.pom/).
Mellor (2004), currently available on the aforementioned Princeton University website, is
the latest version of the POM User’s Guide and is an excellent reference for
understanding the details of the more recent versions of the official POM code.
Unfortunately, some earlier versions of the POM code are no longer supported or well-
documented at Princeton, so users of these earlier POM versions must take care to
understand the differences between their version of the code and the version described in
Mellor (2004). Also, some minor changes have been made to the official POM code
since Mellor (2004), and other versions of the code with various new capabilities have
been developed and continue to be developed based on the official code.
In 1994, a version of POM available at the time was transferred to URI for the purpose of
coupling to the GFDL hurricane model. At this point, POM code changes were made
specifically to address the problem of the ocean’s response to hurricane wind forcing in
order to create a more realistic Sea Surface Temperature (SST) field for input to the
hurricane model, and ultimately to improve 3-5 day hurricane intensity forecasts in the
model. Initial testing showed hurricane intensity forecast improvements when ocean
coupling was included (Bender and Ginis 2000). Since operational implementation of the
coupled GFDL/POM model at NCEP in 2001, additional changes to POM were made at
URI and subsequently implemented in the operational GFDL model, including improved
ocean initialization (Falkovich et al. 2005, Bender et al. 2007, Yablonsky and Ginis
2008). This POM version was then coupled to the atmospheric component of the HWRF
model in the North Atlantic Ocean (but not in the North Pacific Ocean) before
operational implementation of HWRF at NCEP/EMC in 2007. Then for the 2012
operational implementation of HWRF, a simplified one-dimensional (vertical columnar)
version of POM was coupled to the atmospheric component of HWRF in the eastern
North Pacific Ocean, as in the operational GFDL model. The remainder of this document
primarily describes the POM component of the 2013 operational HWRF model used to
forecast tropical cyclones in the North Atlantic and North Pacific Oceans, including the
so-called “United,” “East Atlantic,” and “East Pacific” regions (see “Grid Size, Spacing,
Configuration, Arrangement, Coordinate System, and Numerical Scheme” below); this
version of POM will henceforth be referred to as POM-TC. Alternative POM-TC
configurations that deviate from the 2013 operational HWRF model version are clearly
indicated in the text, including occasional discussion of URI’s brand new version of
POM-TC, henceforth called MPIPOM-TC, which includes: 1) MPI (to run on multiple
processors); 2) higher resolution; 3) larger, relocatable ocean domain; 4) improved
physics; 5) 18 years of community-based improvements and bug fixes; and 6) flexible
initialization options (Yablonsky et al. 2013). MPIPOM-TC, which is currently not part
of the supported HWRF community code, is under consideration for HWRF operational
implementation in 2014.

2.2 Purpose
The primary purpose of coupling the POM-TC (or any fully three-dimensional ocean
model) to the HWRF (or to any hurricane model) is to create an accurate SST field for
input into the hurricane model. The SST field is subsequently used by the HWRF to
calculate the surface heat and moisture fluxes from the ocean to the atmosphere. An
uncoupled hurricane model with a static SST field is restricted by its inability to account
for SST changes during model integration, which can contribute to high intensity bias
(e.g. Bender and Ginis 2000). Similarly, a hurricane model coupled to an ocean model
that does not account for fully three-dimensional ocean dynamics may only account for
some of the hurricane-induced SST changes during model integration (e.g. Yablonsky
and Ginis 2009, 2013).

2.3 Grid size, spacing, configuration, arrangement, coordinate system, and numerical scheme
The horizontal POM-TC grid uses curvilinear orthogonal coordinates. There are
currently two POM-TC grids in the North Atlantic Ocean and one POM-TC grid in the
eastern North Pacific Ocean (although MPIPOM-TC combines and expands the two
North Atlantic POM-TC grids in a single transatlantic grid). HWRF uses the current and
72-hour projected hurricane track to choose which of the two North Atlantic POM-TC
grids to use for coupling. The projected track is based on a simple extrapolation in time
of the currently-observed storm translation speed. The first North Atlantic grid covers
the United region, which is bounded by 10°N latitude to the south, 47.5°N latitude to the
north, 98.5°W longitude to the west, and 50°W longitude to the east. In the operational
POM-TC United region, there are 225 latitudinal grid points and 254 longitudinal grid
points, yielding ~18-km grid spacing in both the latitudinal and longitudinal directions.
The second North Atlantic grid covers the East Atlantic region, which is bounded by
10°N latitude to the south, 47.5°N latitude to the north, 60°W longitude to the west, and
30°W longitude to the east. In the operational POM-TC East Atlantic region, there are
225 latitudinal grid points and 157 longitudinal grid points, yielding ~18-km grid spacing
in both the latitudinal and longitudinal directions. The North Pacific grid covers the East
Pacific region, which is bounded by 0° latitude to the south, 40°N latitude to the north,
and variable west and east boundaries that are determined by the initial position of the
center of the outermost HWRF atmospheric grid for a given forecast. Regardless of the
west and east boundaries of the East Pacific grid, the grid is always 40° longitude in
width and has 241 latitudinal and longitudinal grid points, yielding ~18-km grid spacing
in both the latitudinal and longitudinal directions.
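As a rough consistency check of the quoted ~18-km spacing, the Python snippet below converts the United-region grid counts into approximate spacing in kilometers, using ~111 km per degree and the cosine of the domain mid-latitude; this is purely illustrative arithmetic, not part of the model code.

```python
import math

def grid_spacing_km(lat_s, lat_n, n_lat, lon_w, lon_e, n_lon):
    """Approximate latitudinal and longitudinal grid spacing in km."""
    dlat = (lat_n - lat_s) / (n_lat - 1)
    dlon = (lon_e - lon_w) / (n_lon - 1)
    mid_lat = 0.5 * (lat_s + lat_n)
    return dlat * 111.0, dlon * 111.0 * math.cos(math.radians(mid_lat))

# United region: 10N-47.5N with 225 points, 98.5W-50W with 254 points
print(grid_spacing_km(10.0, 47.5, 225, -98.5, -50.0, 254))   # roughly (18.6, 18.7) km
```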
The vertical coordinate is the terrain-following sigma coordinate system (Phillips 1957,
Mellor 2004, Figure 1 and Appendix D). In the North Atlantic Ocean, there are 23
vertical levels, where the level placement is scaled based on the bathymetry of the ocean
at a given location; the largest vertical spacing occurs where the ocean depth is 5500 m.
Here, the 23 half-sigma vertical levels (“ZZ” in Mellor 2004) are located at 5, 15, 25, 35,
45, 55, 65, 77.5, 92.5, 110, 135, 175, 250, 375, 550, 775, 1100, 1550, 2100, 2800, 3700,
4850, and 5500 m depth. These depths also represent the vertically-interpolated z-levels
of the three-dimensional variables in the POM-TC output files, including temperature
(T), salinity (S), east-west current velocity (U), and north-south current velocity (V) (see
“Output Fields for Diagnostics” that follows). In the North Pacific Ocean, there are 16
vertical levels, where the level placement is scaled based on the bathymetry of the ocean
at a given location, but the ocean depth is truncated to 600 m, where the largest vertical
spacing occurs. Here, the 16 half-sigma vertical levels (“ZZ” in Mellor 2004) are located
at 5, 15, 25, 35, 45, 55, 65, 77.5, 92.5, 110, 135, 175, 250, 365, 515, and 600 m depth.
Again, these depths also represent the vertically interpolated z-levels of the three-
dimensional variables in the POM-TC output files (see “Output Fields for Diagnostics”
that follows).
During model integration, horizontal spatial differencing (in the North Atlantic Ocean) of
the POM-TC variables occurs on the so-called staggered Arakawa-C grid. With this grid
arrangement, some model variables are calculated at a horizontally shifted location from
other model variables. See Mellor (2004, Section 4) for a detailed description and
pictorial representations of POM-TC’s Arakawa-C grid. In the POM-TC output files,
however, all model output variables have been horizontally-interpolated back to the same
grid; that is, the so-called Arakawa-A grid (see “Output Fields for Diagnostics” that
follows).
POM-TC has a free surface and a split time step. The external mode is two-dimensional
and uses a short time step (13.5 s during coupled POM-TC integration, 22.5 s during pre-
coupled POM-TC spinup) based on the well-known Courant-Friedrichs-Lewy (CFL)
condition and the external wave speed. The internal mode is three-dimensional (in the
North Atlantic Ocean) and uses a longer time step (9 min during coupled POM-TC
integration, 15 min during pre-coupled POM-TC spinup) based on the CFL condition and
the internal wave speed. Horizontal time differencing (in the North Atlantic Ocean) is
explicit, whereas the vertical time differencing is implicit. The latter eliminates time
constraints for the vertical coordinate and permits the use of fine vertical resolution in the
surface and bottom boundary layers. See Mellor (2004, Section 4) for a detailed
description and pictorial representations of POM-TC’s numerical scheme.

2.4 Initialization
Prior to coupled model integration of the HWRF/POM, POM-TC is initialized with a
realistic, three-dimensional T and S field, and subsequently integrated to generate
realistic ocean currents and to incorporate the pre-existing hurricane-generated cold
wake. The starting point for the ocean initialization in the North Atlantic Ocean is the
Generalized Digital Environmental Model (GDEM) monthly ocean T and S climatology
(Teague et al. 1990), which has 1⁄2° horizontal grid spacing and 33 vertical z-levels
located at 0, 10, 20, 30, 50, 75, 100, 125, 150, 200, 250, 300, 400, 500, 600, 700, 800,
900, 1000, 1100, 1200, 1300, 1400, 1500, 1750, 2000, 2500, 3000, 3500, 4000, 4500,
5000, and 5500 m depth. In the United region, the GDEM climatology is then modified
diagnostically by interpolating it in time to the POM-TC initialization date (using two
months of GDEM data), horizontally-interpolating it onto the POM-TC United grid,
assimilating a land/sea mask and bathymetry data, and employing a feature-based
modeling procedure that incorporates historical and near-real time observations of the
ocean (Falkovich et al. 2005, Yablonsky and Ginis 2008). This feature-based modeling
procedure has also been configured to utilize alternative T and S climatologies with 1⁄4°
grid spacing, including a newer GDEM climatology and a Levitus climatology (Boyer
and Levitus 1997), but tests with these climatologies in the North Atlantic Ocean in the
GFDL model do not show increased skill over the original GDEM climatology used
operationally (Yablonsky et al. 2006). In the East Atlantic region (unlike the United
region), the only diagnostic modifications currently made to the GDEM climatology are
horizontal interpolation onto the POM-TC East Atlantic grid and assimilation of a
land/sea mask and bathymetry data. No feature-based modeling procedure is used in the
East Atlantic region. In the East Pacific region, the Levitus monthly ocean T and S
climatology, which has 1⁄4° horizontal grid spacing and the same 33 vertical z-levels as
GDEM (Boyer and Levitus 1997), is used to initialize the ocean.
The basic premise of the feature-based modeling procedure is that major oceanic fronts
and eddies in the western North Atlantic Ocean, namely the Gulf Stream (GS), the Loop
Current (LC), and Loop Current warm and cold core rings (WCRs and CCRs), are poorly
represented by the GDEM climatology’s T and S fields. By defining the spatial structure
of these fronts and eddies using historical observations gathered from various field
experiments (Falkovich et al. 2005, Section 3), cross-frontal “sharpening” of the GDEM
T and S fields can be performed to obtain more realistic fields. These sharpened fields
yield stronger geostrophically-adjusted ocean currents along the front than would be
obtained directly from GDEM, causing the former to be more consistent with
observations than the latter. In addition, algorithms were incorporated into the feature-
based modeling procedure to initialize the GS and LC with prescribed paths, and to insert
WCRs and CCRs in the Gulf of Mexico based on guidance from near real-time
observations, such as satellite altimetry (Yablonsky and Ginis 2008, Section 2).
After the aforementioned diagnostic modifications to the GDEM (or Levitus) climatology
(including the feature-based modifications in the United region), at the beginning of what
is referred to as ocean spinup “phase 1” (also commonly known as “phase 3” for
historical reasons), the upper ocean temperature field is modified by assimilating the real-
time daily SST data (with 1° grid spacing) that is used in the operational NCEP GFS
global analysis (hereafter NCEP SST; Reynolds and Smith 1994). Further details of the
SST assimilation procedure used in the United region can be found in Yablonsky and
Ginis (2008, Section 2); an older version of this procedure is still used in the East
Atlantic and East Pacific regions (although there are potential future plans to make the

East Atlantic and East Pacific SST assimilation procedure consistent with the United
procedure). Finally, the three-dimensional T and S fields are interpolated from the
GDEM (or Levitus) z-levels onto the POM-TC vertical sigma levels, and the density
(RHO) is calculated using the modified United Nations Educational, Scientific, and
Cultural Organization (UNESCO) equation of state (Mellor 1991), ending the diagnostic
portion of the ocean initialization.
Ocean spinup phase 1 involves 48-h of POM-TC integration in the North Atlantic Ocean,
primarily for dynamic adjustment of the T and S (and ultimately, RHO) fields and
generation of geostrophically adjusted currents. In the East Pacific, where the one-
dimensional simplification is used, ocean spinup phase 1 involves only 4-h of POM-TC
integration, because there is no current generation in the absence of wind forcing. During
phase 1, SST is held constant. Once phase 1 is complete, the phase 1 output is used to
initialize ocean spinup “phase 2” (also commonly known as “phase 4” for historical
reasons). During phase 2, the cold wake at the ocean surface and the currents produced
by the hurricane prior to the beginning of the coupled model forecast are generated by a
72-h integration of POM-TC with the observed hurricane surface wind distribution
provided by NOAA’s NHC along the storm track. Once phase 2 is complete, the phase 2
output is used to initialize the POM-TC component of the coupled HWRF.

2.5 Physics and dynamics


As previously stated, the primary purpose of coupling the POM-TC to the HWRF is to
create an accurate SST field for input into the HWRF. An accurate SST field requires
ocean physics that can generate accurate SST change in response to wind (and to a lesser
extent, thermal) forcing at the air-sea interface. The leading order mechanism driving
SST change induced by wind forcing is vertical mixing and entrainment in the upper
ocean. Vertical mixing occurs because wind stress generates ocean surface layer
currents, and the resulting vertical current shear leads to turbulence, which then mixes the
upper ocean and entrains colder water from the thermocline up into the well-mixed ocean
surface layer, ultimately cooling the SST. In POM-TC, turbulence is parameterized using
an embedded second-moment turbulence closure submodel, which provides the vertical
mixing coefficients. This submodel is widely known as the Mellor-Yamada Level 2.5
turbulence closure model (Mellor and Yamada 1982, Mellor 2004, Sections 1 and 14).
If vertical mixing (and the resulting entrainment) was the only ocean response to
hurricane wind forcing that impacted SST, then a one-dimensional (vertical columnar)
ocean model would be sufficient. Indeed, a simplified, one-dimensional version of POM-
TC is now implemented operationally in the East Pacific region in HWRF. However,
idealized experiments comparing the three-dimensional and one-dimensional versions of
POM-TC show that the one-dimensional POM-TC underestimates SST cooling for slow-
moving hurricanes (Yablonsky and Ginis 2009). This is consistent with previous studies
(e.g. Price 1981). The primary reason a one-dimensional ocean model fails to capture the
magnitude of SST cooling for slow-moving storms is the neglect of upwelling, which is a
fully three-dimensional process. The cyclonic wind stress induced by a hurricane creates
divergent surface currents in the upper ocean, thereby causing upwelling of cooler water
from the thermocline towards the sea surface. For slow-moving storms, this upwelling
increases the efficiency with which vertical mixing can entrain cooler water from the
thermocline into the well-mixed ocean surface layer, ultimately cooling the SST. Finally,
horizontal advection, which is also neglected by one-dimensional ocean models, may
impact the SST distribution, especially in oceanic fronts and eddies where strong
background currents exist (Yablonsky and Ginis 2013). Horizontal diffusion in POM-TC,
which generally has relatively little impact on the SST over the time scale of the
hurricane, uses Smagorinsky diffusivity (Smagorinsky 1963).

2.6 Coupling
At NCEP, a coupler was developed to act as an independent interface between the HWRF
atmospheric component and the POM-TC. While the technology of the atmosphere-
ocean coupling in HWRF differs from the GFDL model, the purpose is the same. During
forecast integration of HWRF, the east-west and north-south momentum fluxes at the air-
sea interface (“wusurf” and “wvsurf” in Mellor 2004) are passed from the atmosphere to
the ocean, along with temperature flux (“wtsurf”) and the short wave radiation incident
on the ocean surface (“swrad”). Prior to a change made in the operational HWRF in
between the 2012 and 2013 Atlantic hurricane seasons, all four of these fluxes (wusurf,
wvsurf, wtsurf, and swrad) were first truncated by 25% before being passed from the
atmosphere to the ocean in order to mitigate excessive SST cooling in POM-TC. This
25% flux truncation has now been eliminated, based on recent testing that showed
improved SST cooling prediction and improved HWRF intensity prediction in the
absence of the 25% flux truncation. During forecast integration of the POM-TC, the SST
is passed from the ocean to the atmosphere.
The time integration of the coupled system is carried out with three executables working
in Multiple Program Multiple Data (MPMD) mode for the HWRF atmospheric
component, POM-TC, and the coupler. The coupler serves as a hub for MPI
communications between HWRF atmosphere and POM-TC and performs the
interpolation of the surface fluxes from the fixed and moving HWRF atmospheric grids
to the POM-TC grid and of the SST from the POM-TC grid to the two outermost HWRF
atmospheric grids. A generalized bilinear interpolation for non-rectangular quadrilateral
grid cells is used. Only sea-point values of the surface fields are employed for the
interpolation. For missing values due to model domain inconsistencies, a limited
extrapolation within the relevant connected component of the model sea surface is used.
The computations that establish the mutual configuration of the grids (interpolation
initialization) are performed prior to the forecast, using an algorithm with the number of
operations reduced to the order of N3, where N is the number of points in a grid row. The
coupler also provides run-time analysis and diagnostics of the surface data.
Finally, the coupler includes the non-operational capability for three-way coupling, where
the third model component is the WAVEWATCH III wave model. With the three-way
option activated, HWRF atmosphere supplies WAVEWATCH III with surface wind data
and the hurricane’s current geographical position, which is taken to be a circle
circumscribed around HWRF’s moving domain. WAVEWATCH III is not currently
supported by the DTC.

2.7 Output fields for diagnostics
At a given time interval, which can be as short as one hour, but is typically either 6 hours
or 24 hours (as prescribed in the PARAMETERS.inp file), some of the two-dimensional
and three-dimensional variables are saved in individual FORTRAN binary output files
for diagnostic purposes (although MPIPOM-TC eliminates these FORTRAN binary
output files in favor of more comprehensive netCDF output files). The format of the
names of these FORTRAN binary output files is “X.YYMMDDHH,” where “X” is the
variable name and “YYMMDDHH” is the two-digit year, month, day, and hour. The
first output time is always the model initialization time (for the particular model phase
being simulated), and can therefore be used to diagnose the current model phase’s initial
condition. The default three-dimensional output variables are T in °C, U in ms-1, and V
in ms-1, although other variables such as S in psu, RHO in kg m-3, and twice the turbulent
kinetic energy (Q2) in m2s-2 may also be useful to output. The default two-dimensional
(i.e. horizontal only) output variables are sea surface height (EL) in m, the east-west and
north-south components of the wind stress at the sea surface (TXY) in kg m-1s-2, written
sequentially as TX and TY, and the east-west and north-south components of the
vertically-averaged current velocity (UVA) in ms-1, written sequentially as UA and VA.
Another output file, “GRADS.YYMMDDHH,” includes, sequentially: T, S, RHO, U, V,
UA, VA, and EL. This file is intended for users of GRADS.
Changing the output variables requires manipulation of SUBROUTINE OUTPUT, and
care should be taken to ensure that any variable not calculated on the Arakawa-A grid
during model integration is horizontally interpolated to the Arakawa-A grid in
SUBROUTINE OUTPUT before being written to an output file. Similarly, all three-
dimensional variables should be vertically interpolated from sigma levels to z-levels (by
calling SUBROUTINE INTERP). Also, some output variables include an offset, or bias,
to reduce output file size. Of the output variables listed herein, only T, S, and RHO
require bias adjustments, as follows: (1) the “T.YYMMDDHH” files (and variable T in
the GRADS file) are written with a -10°C bias, so 10°C should be added to the values
within these files during post-processing; (2) the “S.YYMMDDHH” files (and variable S
in the GRADS file) are written with a -35 psu bias, so 35 psu should be added to the
values within these files during post-processing; and (3) the “RHO.YYMMDDHH” files
(and variable RHO in the GRADS file) are written with a -1025 kg m⁻³ bias and a 10⁻³
non-dimensionalization, so a multiplicative factor of 1000 followed by an addition of
1025 kg m-3 should be applied to the values within these files during post-processing.
Finally, the POM-TC land/sea mask is applied such that all land points for all output
variables are written with a value of -99.9990, so MATLAB users, for example, may
wish to replace the land points with a value of “NaN” for plotting purposes.
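The bias and land-mask conventions above translate into a simple post-processing step. The Python sketch below assumes the binary records have already been read into NumPy arrays (the record layout itself is not addressed here, and the function name is illustrative).

```python
import numpy as np

LAND_VALUE = -99.999   # value written at land points in the POM-TC output files

def undo_output_bias(raw, offset=0.0, scale=1.0):
    """Undo the offset/scaling applied to a POM-TC output variable and mask land points."""
    adjusted = raw * scale + offset
    adjusted[np.isclose(raw, LAND_VALUE, atol=1e-3)] = np.nan
    return adjusted

# Tiny synthetic demo (two ocean points and one land point):
demo = np.array([1.5, LAND_VALUE, 2.0])
print(undo_output_bias(demo, offset=10.0))   # [11.5, nan, 12.0]

# Assuming T_raw, S_raw, and RHO_raw were read from the T.*, S.*, and RHO.* files:
# T   = undo_output_bias(T_raw,   offset=10.0)                   # deg C
# S   = undo_output_bias(S_raw,   offset=35.0)                   # psu
# RHO = undo_output_bias(RHO_raw, offset=1025.0, scale=1000.0)   # kg m-3
```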

3.0 Physics Packages in HWRF
The HWRF system was designed to utilize the strengths of the WRF software system, the
well tested NMM dynamic core, and the physics packages of the GFDL and GFS forecast
systems. Since the HWRF system became operational in 2007, the physics packages of
the HWRF model have been upgraded on a yearly basis, and this document describes the
HWRF physics suites implemented for the 2013 hurricane season.
Examples of recent improvements include surface layer and PBL parameterization
changes designed to bring the HWRF physics packages more in line with observations of
surface roughness, enthalpy and momentum surface fluxes, and PBL height. The physics
packages of HWRF will be briefly described and contrasted with other NOAA models
such as GFS, GFDL, and NAM. GFS model and physics descriptions can be found at http://www.emc.ncep.noaa.gov/GFS/doc.php, while more information on the additional physics available in the WRF model is given in Skamarock et al. (2008) and at
http://www.mmm.ucar.edu/wrf/users/tutorial/201207/Basic/WRF_Physics_Dudhia.pdf.
See Bender et al. (2007) for more information on the GFDL hurricane model. Note that
the POM coupling component of HWRF is described in Section 2.

3.1 HWRF physics


This section outlines the physical parameterizations used in the operational HWRF
model, which fall into the following categories: (1) microphysics, (2) cumulus
parameterization, (3) surface layer, (4) PBL, (5) LSM, and (6) radiation. It closely follows
the basic WRF physics tutorial of Jimy Dudhia mentioned above. Horizontal diffusion,
which may also be considered part of the physics, is not described in this section. The
WRF system has been expanded to include all HWRF physics and, for each category, the
operational HWRF employs a specific choice within the WRF spectrum of physics
options. As mentioned above, the HWRF physics initially followed the physics suite
used by the benchmark operational GFDL hurricane model, but in the last few years
several modifications have been introduced.
In the WRF framework, the physics section is insulated from the rest of the dynamics
solver by the use of physics drivers. These drivers are located between the following
solver-dependent steps: pre-physics preparations and post-physics modifications of the
tendencies. The physics preparation involves filling arrays with physics-required
variables, such as temperature, pressure, heights, layer thicknesses, and other state
variables in MKS units at half-level and full levels. The velocities are de-staggered so
that the physics code is independent of the dynamical solver's velocity staggering. Since
HWRF uses the E-grid on the rotated lat-lon projection of the WRF-NMM dynamic core,
this velocity de-staggering involves interpolating the momentum variables from the
velocity to the mass grid points. Physics packages compute tendencies for the un-
staggered velocity components, potential temperature, and moisture fields. The solver-
dependent post-physics step re-staggers the tendencies as necessary, couples tendencies
with coordinate metrics, and converts to variables or units appropriate to the dynamics
solver. As in other regional models, the physics tendencies are generally calculated less
frequently than dynamic tendencies for computational expediency. The interval of
physics calls is controlled by namelist parameters.

3.2 Microphysics parameterization


Microphysics parameterizations explicitly handle the behaviors of hydrometeor species
by solving prognostic equations for their mixing ratio and/or number concentration, so
they are sometimes called explicit cloud schemes (or gridscale cloud schemes) in
contrast to cumulus schemes, which parameterize sub-grid scale convection. The
adjustment of water vapor exceeding saturation values is also included inside the
microphysics. The treatment of water species such as rain, cloud, ice, and graupel was
first utilized in the development of cloud models, which simulated individual clouds and
their interactions. Gradually, as it became more computationally feasible to run at high
grid resolutions, microphysics schemes were incorporated into regional atmospheric
models. At high enough resolution (~1 km or less), convective parameterization of cloud
processes may not be needed because convection can be resolved explicitly by a
microphysics scheme. In the simpler microphysics schemes (single moment schemes),
such as the one used in HWRF, only the mixing ratios of the water species are carried as
predicted variables, while the number concentration of the variables is assumed to follow
standard distributions. If number concentrations are also predicted, the schemes are
coined “double moment”. A further sophistication in microphysics schemes is introduced
if the water species are predicted as a function of size. This added level of complexity is
coined a “bin” scheme. The present HWRF model, like the NAM and GFDL models,
uses the Ferrier scheme, which is simplified so that the cloud microphysical variables are
considered in the physical column, but only the combined sum of the microphysical
variables, the total cloud condensate, is advected horizontally and vertically. A possible
upgrade of HWRF microphysics would be to extend the Ferrier scheme to handle
advection of cloud species. Note that in the last year modifications were made to the
HWRF model such that it can now be run in research mode with a variety of
microphysics packages, including the Thompson and WRF single-moment 6-class
(WSM6) parameterizations.
The Ferrier scheme

The present HWRF Ferrier microphysics scheme is based on the Eta Grid-scale Cloud
and Precipitation scheme developed in 2001 and known as the EGCP01 scheme (Ferrier
2005). The WRF model has three versions of the Ferrier microphysics; one for general
applications (used in the old WRF-based NAM model), one for higher resolution (used
operationally in the 4-km grid spacing High-Resolution Windows application), and one
tailored for the tropics (used in HWRF). The latter duplicates some features used in the
GFDL model implementation. For example, the number concentration of droplets is set to 60 cm⁻³ and 100 cm⁻³ in the HWRF and old WRF-based NAM versions, respectively. In addition, the relative humidity for the onset of condensation above the planetary boundary layer in the parent grid of the tropical Ferrier scheme is set to 97.5%, while the standard value of 100% relative humidity is used throughout the domain in the old WRF-based NAM version. These
changes for the tropics were implemented primarily to obtain a more realistic intensity
distribution in HWRF and GFDL forecasts. The scheme predicts changes in water vapor
and condensate in the forms of cloud water, rain, cloud ice, and precipitation ice
(snow/graupel/sleet). The individual hydrometeor fields are combined into total
condensate, and it is the water vapor and total condensate that are advected in the model.
This is done for computational expediency. Local storage arrays retain first-guess
information of the contributions of cloud water, rain, cloud ice, and precipitation ice of
variable density in the form of snow, graupel, or sleet (Figure 3.1).

Figure 3.1. Water species used internally in the Ferrier microphysics and their
relationship to the total condensate. The left column represents the quantities available
inside the microphysics scheme (mixing ratios of vapor, ice, snow, rain, and cloud
water). The right column represents the quantities available in the rest of the model: only
the water vapor and the total condensate get advected. After advection is carried out, the
total condensate is redistributed among the species based on fractions of ice and rain
water.
The density of precipitation ice is estimated from a local array that stores information on
the total growth of ice by vapor deposition and accretion of liquid water. Sedimentation is
treated by partitioning the time-averaged flux of precipitation into a grid box between
local storage in the box and fall out through the bottom of the box. This approach,
together with modifications in the treatment of rapid microphysical processes, permits
large time steps to be used with stable results. The mean size of precipitation ice is
assumed to be a function of temperature following the observational results of Ryan et al.
(1996). Mixed-phase processes are now considered at temperatures warmer than -40°C (previously -10°C), whereas ice saturation is assumed for cloudy conditions at colder
temperatures.
Changes were made in three parameters of the Ferrier microphysics scheme for the 2012
HWRF upgrades. Although these changes did not significantly alter the HWRF
forecast track and intensity of a storm, they contributed to producing better mixing ratio forecasts, allowing the creation of more realistic large-scale forecast products, in particular simulated infrared and water vapor satellite images. The three parameters changed were the maximum allowable ice concentration (NLImax), the number concentration of cloud droplets (NCW), and the snow fall speed. NLImax was increased from 2 to 20 L⁻¹ and NCW was increased from 60 to 250 cm⁻³. NCW was increased to reduce the unrealistic widespread spotty drizzle/light rain over the open ocean in the parent domain. Finally, the fall speed of snow particles in regions of warmer-
than-freezing temperature (0°C) was also increased to mimic higher order moment
schemes.
Further description of the scheme can be found in Sec. 3.1 of the November 2001 NCEP
Technical Procedures Bulletin (TPB) at
http://www.emc.ncep.noaa.gov/mmb/mmbpll/eta12tpbh and on the COMET page at
http://meted.ucar.edu/nwp/pcu2/etapcp1.htm.

3.3 Cumulus parameterization


Cumulus parameterization schemes, or convective parameterization schemes, are
responsible for the sub-gridscale effects of deep and/or shallow convective clouds. These
schemes are intended to represent vertical fluxes unresolved by gridscale microphysics
schemes such as updrafts, downdrafts and compensating motion outside the clouds. In its
early development, convective parameterization was believed to be necessary to avoid
possible numerical instability due to simulating convection at coarse resolutions. The
schemes operate only on individual vertical columns where the scheme is triggered and
provide vertical heating and moistening profiles. Some schemes additionally provide cloud and precipitation field tendencies in the column, and some, such as the one used in HWRF, provide momentum tendencies due to convective
transport of momentum. The schemes all provide the convective component of surface
rainfall.
Cumulus parameterizations are theoretically only valid for coarser grid sizes (e.g.,
greater than 10 km), where they are necessary to properly release latent heat on a realistic
time scale in the convective columns. While the assumptions about the convective eddies
being entirely sub-grid-scale break down for finer grid sizes, sometimes these schemes
have been found to be helpful in triggering convection in 5-10 km grid applications and
accurately predicting rainfall patterns. Normally, they should not be used when the model
itself can resolve the convective eddies (grid spacing less than approximately 5 km). In
the 2013 operational implementation of HWRF, the cumulus parameterization is
activated only in the parent domain and in the coarser nest (27- and 9-km horizontal grid
spacing, respectively). No convective parameterization is used in the 3-km horizontal
grid spacing inner nest.
The Simplified Arakawa-Schubert (SAS) scheme

HWRF uses the SAS cumulus parameterization also employed, with some modifications,
in the GFS (Pan and Wu 1995, Hong and Pan 1998, Pan 2003, Han and Pan 2011) and
GFDL models. It was made operational in NCEP’s global model in 1995 and in the
GFDL hurricane model in 2003. This scheme, which is based on Arakawa and Schubert
(1974), was simplified by Grell (1993) to consider only one cloud top at a specified time
and location and not the spectrum of cloud sizes, as in the computationally expensive
original scheme. Since 2011, the GFS and HWRF models use the newly upgraded SAS
scheme, which no longer considers a random distribution of cloud tops but one cloud top
value in a grid box from various entrainment ensemble averaged parameters. The scheme
was revised to make cumulus convection stronger and deeper by increasing the maximum
allowable cloud base mass flux and having convective overshooting from a single cloud
top.
In addition to the deep convection scheme, the shallow convection parameterization was
incorporated in the operational GFS and HWRF models in 2011 and 2012, respectively.
The parameter used to differentiate shallow from deep convection is the depth of the
convective cloud. When the convective thickness is greater than 150 hPa, convection is
defined as deep; otherwise it is treated as shallow. In the HWRF model, precipitation
from shallow convection is prohibited when the convection top is located below the PBL
top and the thickness of the shallow convection cloud is less than 50 hPa. These
customizations were made in order to remove widespread light precipitation in the model
domain over the open ocean areas. Note that because the shallow convection scheme
requires knowledge of the PBL height, it needs to be run in conjunction with a PBL
parameterization that provides that information. In the current code, only the GFS PBL
scheme has been tested to properly communicate the PBL height to the HWRF SAS
parameterization.
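The thresholds described above can be summarized in a short sketch. The Python fragment below is purely illustrative (the function name and inputs are hypothetical and not part of the HWRF code base); it simply encodes the 150-hPa deep/shallow threshold and the conditions under which shallow-convective precipitation is suppressed.

```python
def classify_convection(cloud_base_p, cloud_top_p, pbl_top_p):
    """Illustrative classification following the thresholds described in the text.

    Pressures are in hPa and decrease with height. Returns a tuple
    (regime, allow_precip), where regime is 'deep' or 'shallow'.
    """
    thickness = cloud_base_p - cloud_top_p      # convective cloud depth (hPa)

    if thickness > 150.0:
        return "deep", True                      # deep convection; no shallow restriction applies

    # Shallow convection: precipitation is suppressed when the cloud top lies
    # below the PBL top (higher pressure) and the cloud is thinner than 50 hPa.
    suppress = (cloud_top_p > pbl_top_p) and (thickness < 50.0)
    return "shallow", not suppress


# Example usage with made-up numbers: a 120-hPa-deep cloud topping out above the PBL.
print(classify_convection(cloud_base_p=950.0, cloud_top_p=830.0, pbl_top_p=900.0))
```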
In SAS, convection depends on the cloud work function, a quantity derived from the
temperature and moisture in each air column of the model, which is similar to the
Convective Available Potential Energy (CAPE). When the cloud work function exceeds a
certain critical threshold, which takes into account the cloud base vertical motion, the
parameterizations are triggered and the mass flux of the cloud, Mc, is determined using a
quasi-equilibrium assumption. As the large-scale rising motion becomes strong, the cloud
work function is allowed to approach zero (therefore approaching neutral stability).
The temperature and moisture profiles are adjusted towards the equilibrium cloud work
function within a specified time scale using the deduced mass flux, and can be
determined on the resolvable scale by:
∂h/∂t = E(h-h~) + D(h~-h) + Mc∂h/∂z

∂q/∂t = E(q-q~) + D(q~+l-q) + Mc∂q/∂z

where h, l and q are the moist static energy, liquid water and specific humidity on the
resolvable scale and the tilde refers to the environmental values in the entraining (E) and
detraining (D) cloud regions.
The cloud model incorporates a downdraft mechanism as well as evaporation of
precipitation. Entrainment of the updraft and detrainment of the downdraft in the sub-
cloud layers is included. Downdraft strength is based on vertical wind shear through the
cloud.
In the revised SAS scheme in HWRF, the cloud top level is determined by the parcel
method to be the level where the parcel becomes stable with respect to the environment.
Detrained water is separated into condensate and vapor, with the condensate used as a
source of prognostic cloud condensate above the level of the minimum moist static
energy. In contrast to HWRF, the GFDL hurricane model version of SAS does not
export condensate to the rest of the model.
In the current implementation of SAS, the mass fluxes induced in the updrafts and the
downdrafts are allowed to transport momentum (Pan 2003). The momentum exchange is
calculated through the mass flux formulation in a manner similar to that of heat and
moisture. The introduction of the effect of momentum mixing was made operational in
NCEP’s GFS model in May 2001 and greatly reduced the generation of spurious vortices
(Figure 3.2) in the global model (see Han and Pan 2006). It has also been shown to have a
significant positive impact on hurricane tracks in the GFDL model. The effect of the
convection-induced pressure gradient force on cumulus momentum transport is
parameterized in terms of mass flux and vertical wind shear (Han and Pan 2006). As a
result, the cumulus momentum exchange is reduced by about 55 % compared to the full
exchange in previous versions. To improve intensity forecasts, the momentum mixing
coefficient (pgcon in the WRF namelist) has been tuned in the 2013 operational HWRF
model to 0.55 and 0.20 in the 27- and 9-km grid spacing domains, respectively. In
previous implementations, the same value of pgcon was used in both domains (0.20 in
2012 and 0.55 in 2011).
Han and Pan (2011) found that the revised SAS contributed to reductions in the root-
mean-squared errors of tropical winds and yielded improved hurricane tracks (Figure
3.3). For more detailed information see:
http://www.emc.ncep.noaa.gov/GFS/doc.php#conv. For some tests at NCEP and DTC,
the HWRF has been configured to use alternate convective schemes: Betts-Miller-Janjic
(BMJ; Janjic 1994, 2000; used in the operational NCEP NAM model), Tiedtke (Tiedtke
1989, Zhang et al. 2011), Kain-Fritsch (modified from Kain and Fritsch 1993), and the
New Simplified Arakawa-Schubert (NSAS) scheme coded by Yonsei University
(YSU), also based on Han and Pan 2011. Generally speaking, these schemes have not
demonstrated superior skill to the operational HWRF SAS scheme, but may serve as a
way to create a physics diversity ensemble using WRF.

3.4 Surface layer parameterization


The surface layer schemes calculate friction velocities and exchange coefficients that
enable the calculation of surface heat, moisture and momentum fluxes by the LSM. Over
water, the surface fluxes and surface diagnostic fields are computed by the surface layer
scheme itself. These fluxes, together with radiative surface fluxes and rainfall, are used as
input to the ocean model. Over land, the surface layer schemes are capable of computing
both momentum and enthalpy fluxes. However, if a land model is invoked, only the
momentum fluxes from the surface layer scheme are retained and used. The schemes
provide no tendencies, only the stability-dependent information about the surface layer
for the land-surface and PBL schemes.

Figure 3.2. Comparison among a) verifying GFS mean sea level pressure (hPa) analysis
and 132-h GFS model forecasts with b) no cumulus momentum mixing and c) and d) with
some amount of cumulus momentum mixing. The GFS forecasts were initialized at 0000
UTC 22 Sep 2000. Note several spurious vortices west of 100°W in (b) and (d) (from Han
and Pan 2006).
Figure 3.3 Root mean square vector wind errors (m s⁻¹) at (a) 850 hPa and (b) 200 hPa
over the Tropics (20°S–20°N) from the control (solid line) and revised SAS (dashed line)
GFS model forecasts during 20 June – 10 November 2008 (from Han and Pan 2011).
Each surface layer option is normally tied to a particular boundary-layer option but, in the
future, more interchangeability may become available. The HWRF operational model
uses a modified GFDL surface layer and the GFS PBL scheme. The GFS surface layer
has been used as an alternate configuration of HWRF in some tests at NCEP.
The HWRF surface layer scheme

While the 2009 versions of the HWRF and GFDL surface parameterizations were nearly
identical, they have since diverged. Since 2012, HWRF uses a modified surface layer
parameterization over water based on Kwon et al. (2010), Powell et al. (2003) and Black
et al. (2007). The air-sea flux calculations use a bulk parameterization based on the
Monin-Obukhov similarity theory (Sirutis and Miyakoda 1990, and Kurihara and Tuleya
1974). The HWRF scheme retained the stability-dependent formulation of the GFDL
surface parameterization, with the exchange coefficients now recast to use momentum
and enthalpy roughness lengths that conform to observations. In this formulation, the
neutral drag coefficient Cd is defined as:

Cd = κ² [ln(zm / z0)]⁻² ,    (3.4.1)


where κ is the von Karman constant (= 0.4), z0 is the roughness length for momentum,
and zm is the lowest model level height. The neutral heat and humidity coefficients
(assumed equal, Ck) are expressed as
Ck = κ² [ln(zm / z0)]⁻¹ [ln(zm / zT)]⁻¹ ,    (3.4.2)

where zT is the roughness length for heat and humidity.


In the HWRF implementation of the Monin-Obukhov formulations, the Cd and Ck for
neutral conditions are prescribed at the lowest model level (~35 m). Ck has a constant
value of 1.1 × 10⁻³ based on observations from the Coupled Boundary Layers Air-Sea
Transfer (CBLAST) experiment. Cd increases with wind speed up to 30 m s⁻¹, above which
it levels off, consistent with field measurements (Powell et al. 2003, Donelan et al. 2004,
Emanuel 2003, Moon et al. 2004a,b, Makin 2005, and Black et al. 2007). These
prescribed values of Cd and Ck are valid only for neutral conditions. In HWRF, Cd and Ck
also depend on atmospheric stability, and are larger in unstable conditions, when vertical
mixing is more vigorous.
Over land, the roughness in HWRF is specified (as in the NAM model) with z0 = zT. Over
water, the HWRF momentum roughness, z0, is obtained by inverting Equation 3.4.1, and the
enthalpy roughness, zT, is obtained by inverting Equation 3.4.2 above. The resulting
formulations for z0, as in Moon et al. (2007) and Powell et al. (2003), and zT, modified
from Kwon et al. (2010) and Black et al. (2007), are

z0 = (0.0185/g) × (7.59 × 10⁻⁸ W² + 2.46 × 10⁻⁴ W)²,    W ≤ 12.5 m s⁻¹

z0 = (7.40 × 10⁻⁴ W − 0.58) × 10⁻³,    12.5 m s⁻¹ < W ≤ 30 m s⁻¹

z0 = −1.34 × 10⁻⁴ + 3.02 × 10⁻⁵ W + 1.52 × 10⁻⁶ W² − 3.57 × 10⁻⁸ W³ + 2.05 × 10⁻¹⁰ W⁴,    W > 30 m s⁻¹

zT = (0.0185/g) × (7.59 × 10⁻⁸ W² + 2.46 × 10⁻⁴ W)²,    W ≤ 7 m s⁻¹

zT = 2.38 × 10⁻³ exp(−0.53 W) + 2.5 × 10⁻⁵ exp(−0.021 W),    W > 7 m s⁻¹

where W (m s⁻¹) is the wind speed at zm and g is the gravitational acceleration.
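To illustrate how the neutral exchange coefficients and the roughness lengths are related through Equations 3.4.1 and 3.4.2, the sketch below computes Cd and Ck from given roughness lengths and then inverts the same relations to recover z0 and zT, mirroring the inversion described above. It is a hand-written illustration, not the operational code; the 35-m reference height follows the text, while the sample roughness values are placeholders.

```python
import math

KAPPA = 0.4   # von Karman constant
ZM = 35.0     # approximate lowest model level height (m), per the text


def neutral_cd(z0):
    """Neutral drag coefficient from Equation 3.4.1."""
    return KAPPA**2 / math.log(ZM / z0) ** 2


def neutral_ck(z0, zT):
    """Neutral enthalpy exchange coefficient from Equation 3.4.2."""
    return KAPPA**2 / (math.log(ZM / z0) * math.log(ZM / zT))


def z0_from_cd(cd):
    """Invert Equation 3.4.1 to recover the momentum roughness length."""
    return ZM * math.exp(-KAPPA / math.sqrt(cd))


def zT_from_ck(ck, z0):
    """Invert Equation 3.4.2 for the enthalpy roughness length, given z0."""
    return ZM * math.exp(-KAPPA**2 / (ck * math.log(ZM / z0)))


# Round-trip check with placeholder roughness values (not the operational fits):
z0, zT = 5.0e-4, 2.0e-5
cd, ck = neutral_cd(z0), neutral_ck(z0, zT)
print(cd, ck)
print(z0_from_cd(cd), zT_from_ck(ck, z0))
```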


In older versions of the GFDL hurricane model, z0 and zT were both calculated from
Charnock's relation as 0.0185 u*²/g, where u* is the friction velocity. Cd kept increasing
with wind speed in the original GFDL model, which overestimated the surface drag at
high wind speeds, leading to underestimation of the surface wind speed for a given
central pressure in strong hurricanes (Ginis et al. 2004). The surface parameterization
scheme used in GFS is also based on Sirutis and Miyakoda (1990) but modified by P.
Long in the very stable and very unstable situations. The roughness length over ocean is
updated with a Charnock formula after the surface stress has been obtained. The GFS
thermal roughness over the ocean is based on a formulation derived from the Tropical
Ocean Global Atmosphere Coupled Ocean Atmosphere Response Experiment (TOGA
COARE, Zeng et al. 1998). Interestingly, the GFS scheme retains the Charnock
formulation of roughness for momentum while the GFDL hurricane model retains the
Charnock formulation for enthalpy. Therefore there is a distinction between momentum
and enthalpy roughness among the HWRF, GFDL, and GFS surface flux schemes, with
correspondingly different momentum and enthalpy coefficients at high wind speed.
Another surface flux parameterization scheme that has been used experimentally in
HWRF is the Mellor-Yamada-Janjic (MYJ) scheme, formerly referred to as the Eta
surface layer scheme (Janjic 1996b, 2002) which is based on the similarity theory
(Kurihara and Tuleya 1974). The scheme includes parameterizations of a viscous sub-
layer. The surface fluxes are computed by an iterative method. This surface layer scheme
is generally run in conjunction with the Eta MYJ PBL scheme, and therefore is referred
to as the MYJ surface scheme. As mentioned previously, when the HWRF model is run
with the NAM options, including the MYJ scheme, hurricane intensity tends to be
reduced. Note that the use of the MYJ PBL and surface layer parameterizations in
HWRF is not supported in the current HWRF code.

Figure 3.4. Over-water exchange coefficients for a) momentum (Cd) and b) enthalpy (Ch)
at a height of 10 m above ground level (Gopalakrishnan et al. 2013). Values
for HWRF (gray crosses) are compared against observational estimates by Zhang et al.
(2008; green dashes) and Haus et al. (2010; pink circles).

3.5 Land-surface model


LSMs use atmospheric information from the surface layer scheme, radiative forcing from
the radiation scheme, and precipitation forcing from the microphysics and convective
schemes, together with internal information on the land's state variables and land-surface
properties, to provide heat and moisture fluxes over land points and sea-ice points. These
fluxes provide a lower boundary condition for the vertical transport done in the PBL
schemes (or the vertical diffusion scheme in the case where a PBL scheme is not run,
such as in large-eddy mode). Land-surface models have various degrees of sophistication
in dealing with thermal and moisture fluxes in multiple layers of the soil and also may
handle vegetation, root, and canopy effects and surface snow-cover prediction. In WRF,
the LSM provides no tendencies, but updates the land's state variables which include the
ground (skin) temperature, soil temperature profile, soil moisture profile, snow cover, and
possibly canopy properties. There is no horizontal interaction between neighboring points
in the LSM, so it can be regarded as a one-dimensional column model for each WRF land
grid-point, and many LSMs can be run in a stand-alone mode when forced by
observations or atmospheric model input. One of the simplest land models involves only
one soil layer (slab) and predicts the surface temperature only. In this formulation, all surface
fluxes (both enthalpy and momentum) are computed by the surface layer routines. HWRF
uses such a simple land model: the GFDL slab option.
The GFDL slab scheme

The GFDL slab model was developed by Tuleya (1994) based on Deardorff (1978). This
simple one-level slab model, together with the GFDL radiation package, completed the
requirement for realistic tropical cyclone behavior over land during the development of
the GFDL hurricane model (see Figure 3.5). The surface temperature, T*, is the only
predicted parameter in this system.
∂T*/∂t = (−σT*⁴ − Shfx − Levp + (S+F)) / (ρs cs d),

where Shfx is the sensible heat flux, Levp is the evaporative flux, (S+F) is the net downward
radiative flux, and ρs, cs, and d are the density, specific heat, and damping depth of the soil,
respectively.
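A schematic forward-Euler update of this slab energy balance might look like the sketch below. The time step, flux values, and soil properties are arbitrary illustrative numbers (the real model evaluates the fluxes interactively), and the function name is hypothetical.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m-2 K-4)


def slab_temperature_tendency(T_star, shfx, levp, net_radiation,
                              rho_s=1500.0, c_s=800.0, d=0.1):
    """dT*/dt (K s-1) for the single-layer slab, following the equation in the text.

    shfx, levp and net_radiation are the sensible heat flux, evaporative (latent)
    flux and net downward radiative flux (S+F), all in W m-2. The soil density,
    specific heat and damping depth are placeholder values.
    """
    return (-SIGMA * T_star**4 - shfx - levp + net_radiation) / (rho_s * c_s * d)


# Illustrative forward stepping of the skin temperature over ten minutes.
T = 300.0     # initial skin temperature (K)
dt = 60.0     # time step (s)
for _ in range(10):
    T += dt * slab_temperature_tendency(T, shfx=50.0, levp=200.0, net_radiation=400.0)
print(round(T, 3))
```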
The surface wetness is assumed to be constant during the model forecast, with initial
values based on the host model GFS analysis. Note that this simple model is able to
realistically simulate the development of a 'cool pool' of land-surface temperatures under
landfalling tropical storms, which drastically reduces the surface evaporation and leads to
the rapid decay of storms over land.
This simple slab model can be contrasted with the Noah LSM developed jointly by
NCAR and NCEP, which is a unified code for research and operational purposes, being
almost identical to the code used in the NAM Model. This is a 4-layer soil temperature
and moisture model with canopy moisture and snow cover prediction. The layer
thicknesses are 10, 30, 60 and 100 cm (adding to 2 meters) from the top down. It includes
root zone, evapotranspiration, soil drainage, and runoff, taking into account vegetation
categories, monthly vegetation fraction, and soil texture. The scheme provides sensible
and latent heat fluxes to the boundary-layer scheme. The Noah LSM additionally predicts
soil ice, and fractional snow cover effects, has an improved urban treatment, and
considers surface emissivity properties. The Noah LSM is presently being run in test
mode in HWRF.

Figure 3.5. Fluxes employed in the LSM used in the GFDL and HWRF hurricane models.
The surface land temperature is the only state variable predicted in this scheme. G
represents the flux of heat into the ground and all other terms are defined in the text.

3.6 Planetary boundary layer parameterization


The PBL parameterization is responsible for vertical sub-grid-scale fluxes due to eddy
transports in the whole atmospheric column, not just the boundary layer. Thus, when a
PBL scheme is activated, no explicit vertical diffusion is activated with the assumption
that the PBL scheme will handle this process. Horizontal and vertical mixing are
therefore treated independently. The surface fluxes are provided by the surface layer and
land-surface schemes. The PBL schemes determine the flux profiles within the well-
mixed boundary layer and the stable layer, and thus provide atmospheric tendencies of
temperature, moisture (including clouds), and horizontal momentum in the entire
atmospheric column. Most PBL schemes consider dry mixing, but can also include
saturation effects in the vertical stability that determines the mixing. Conceptually, it is
important to keep in mind that PBL parameterization may both complement and conflict
with cumulus parameterization. PBL schemes are one-dimensional, and assume that there
is a clear scale separation between sub-grid eddies and resolved eddies. This assumption
will become less clear at grid sizes below a few hundred meters, where boundary layer
eddies may start to be resolved, and in these situations the scheme should be replaced by
a fully three-dimensional local sub-grid turbulence scheme, such as the Turbulent Kinetic
Energy (TKE) diffusion scheme. HWRF uses a non-local vertical mixing scheme based
on the GFS PBL option, with several modifications to fit hurricane and environmental
conditions.
The HWRF PBL scheme

The HWRF code uses the same non-local scheme as the GFDL operational hurricane model
(Hong and Pan 1996), which is based on Troen and Mahrt (1986) and was implemented in
the GFS in 1995. Note that this scheme is similar to, but not the same as, the Yonsei
University (YSU) scheme and the Medium-Range Forecast (MRF) boundary layer
scheme.
Historically the GFS PBL scheme has been found to give reasonable tropical cyclone
tracks for the global GFS and GFDL hurricane models when packaged with the SAS
cumulus scheme. The scheme is a first-order vertical diffusion parameterization that uses
the surface bulk-Richardson approach to iteratively estimate the PBL height starting from
the ground upward. The PBL height (h) depends on the virtual temperature profile
between the surface and the PBL top, on the wind speed at the PBL top and on the critical
Richardson number (Ric), and is given by

h = Ric θvg U(h)² / [g (θv(h) − θs)],

where θvg and θv(h) are the virtual potential temperature at the surface and at the PBL top,
U(h) is the wind speed at the PBL top, and θs is the surface potential temperature. Once the PBL
height is determined, a preliminary profile of the eddy diffusivity is specified as a cubic
function of the PBL height. This value is then refined by matching it with the surface
layer fluxes. The process above determines the local component of the eddy diffusivity,
which can be expressed as

Kc(z) = κ (u*/Φm) α z (1 − z/h)² ,

where z is the height above ground, Φm is a wind profile function evaluated at the top of
the surface layer, and α is a parameter that controls the eddy diffusivity magnitude
(Zhang et al. 2012).
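The shape of this local eddy-diffusivity profile can be illustrated with a few lines of Python. The inputs below (u*, Φm, and the 1000-m PBL depth) are placeholders chosen only to show the profile shape; the default α of 0.7 follows the 2013 value discussed later in this subsection.

```python
KAPPA = 0.4  # von Karman constant


def eddy_diffusivity(z, h, u_star, phi_m, alpha=0.7):
    """Kc(z) = kappa (u*/phi_m) alpha z (1 - z/h)^2, set to zero outside the PBL."""
    if z <= 0.0 or z >= h:
        return 0.0
    return KAPPA * (u_star / phi_m) * alpha * z * (1.0 - z / h) ** 2


# Print a coarse profile for an assumed 1000-m-deep boundary layer.
for z in (50, 150, 300, 500, 700, 900):
    print(z, round(eddy_diffusivity(z, h=1000.0, u_star=0.5, phi_m=1.0), 2))
```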
Additionally, a counter-gradient flux parameterization based on the surface fluxes and on
the convective velocity scale (Hong and Pan 1996), is also included. This non-local effect
incorporates the contribution of large-scale eddies driven by surface layer conditions (see
Figure 3.6). The overall diffusive tendency of a variable C can be expressed as

∂C/∂t = ∂/∂z [ Kc ( ∂C/∂z − γc ) ],

where ∂C/∂z and γc are the local and non-local parts, respectively.
In addition, the GFS boundary layer formulation also considers dissipative heating, the
heat produced by molecular friction of air at high wind speeds (Bister and Emanuel
1998). This contribution is controlled by namelist parameter disheat.
Previous studies have shown that the class of PBL schemes used in HWRF
(GFS/MRF/YSU) often produces too deep a PBL when compared to observations in the
hurricane environment (Braun and Tao 2000, Zhang et al. 2012). Because the magnitude
of the eddy diffusivity coefficient in the HWRF PBL scheme is directly proportional to
the PBL height, a deep PBL causes stronger vertical mixing, which in turn leads to a more
diffuse and larger storm. In order to reduce this feedback in the HWRF PBL,
two modifications were made in the 2013 HWRF upgrades. One was the use of a
variable Ric instead of the constant value of 0.25 employed in previous implementations.
This modification was based on Vickers and Mahrt (2004), who performed several tests
by using different values of the Richardson number and Ric in order to investigate the
sensitivity of the PBL height. They concluded that a Ric that varies with the surface
Rossby number produced the best solution compared to all other methods of modifying the
Richardson number. The surface Rossby number (Ro) and the Ric fit to it using
observational data are given by

Ro = U10 / (f z0), and

Ric = 0.16 (10⁻⁷ Ro)^(−0.18),

where U10 is the wind speed at 10 m above ground level (AGL) and f is the Coriolis
parameter, introduced for dimensional purposes. Vickers and Mahrt (2004) stated that
there is no evidence that Ro depends on f, so the HWRF model uses a typical value of f,
10⁻⁴ s⁻¹.
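For illustration, the Rossby-number-dependent critical Richardson number can be evaluated as below. The wind speeds and roughness lengths are arbitrary sample values, and the Coriolis parameter is fixed at the 10⁻⁴ s⁻¹ value noted in the text.

```python
def critical_richardson(u10, z0, f=1.0e-4):
    """Variable critical Richardson number following the Vickers and Mahrt (2004) fit."""
    rossby = u10 / (f * z0)                      # surface Rossby number
    return 0.16 * (1.0e-7 * rossby) ** -0.18     # Ric decreases as Ro increases


# Sample (wind speed, momentum roughness length) pairs.
for u10, z0 in ((10.0, 1.0e-4), (30.0, 1.0e-3), (60.0, 3.0e-3)):
    print(u10, round(critical_richardson(u10, z0), 3))
```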
Because the Vickers and Mahrt (2004) study did not use observational data for
hurricanes, the archived datasets of the Hurricane Research Division of the NOAA
Atlantic Oceanographic and Meteorological Laboratory were used to confirm that this
Ric definition is applicable to hurricane conditions. By using the variable Ric method, the
PBL height was matched seamlessly around the hurricane and its environment.
In the 2013 HWRF implementation, the artificial decrease of momentum diffusivity in
the PBL through the use of an α parameter smaller than one was maintained, and its value
was increased to 0.7 from the 0.5 used in the 2012 HWRF. Although the variable Ric method
reduced the diffusivity in hurricane regions, analysis showed that further reduction of
diffusivity was still needed to match the observational data.
This GFS PBL scheme can be contrasted with local schemes such as the Mellor-Yamada-
Janjic (MYJ) PBL used in NAM, which is an option for experimental, non-supported,
versions of HWRF. This parameterization of turbulence in the PBL and in the free
atmosphere (Janjic 1990a,b, 1996a, 2002) represents a nonsingular implementation of the
Mellor-Yamada Level 2.5 turbulence closure model (Mellor and Yamada 1982) through
the full range of atmospheric turbulent regimes. In this implementation, an upper limit is
imposed on the master length scale. This upper limit depends on the TKE as well as the
buoyancy and shear of the driving flow. In the unstable range, the functional form of the

61
upper limit is derived from the requirement that the TKE production be nonsingular in the
case of growing turbulence. In the stable range, the upper limit is derived from the
requirement that the ratio of the variance of the vertical velocity deviation and TKE
cannot be smaller than that corresponding to the regime of vanishing turbulence. The
TKE production/dissipation differential equation is solved iteratively. The empirical
constants used in the original Mellor-Yamada scheme have been revised (Janjic 1996a,
2002). Interestingly, the MYJ PBL scheme is quite similar to the Mellor-Yamada Level
2.5 scheme used in the early operational versions of the GFDL hurricane model. Note
that the TKE in the MYJ boundary layer scheme has a direct connection to the horizontal
diffusion formulation in the NMM E-grid and NMM-B dynamic cores, but this has
been turned off in HWRF.

Figure 3.6. Time-pressure cross sections of the eddy diffusivity (m² s⁻¹) calculated with
the local (dotted) and nonlocal (solid) schemes for (a) thermal and (b) momentum diffusivity.
The GFS boundary layer uses the nonlocal formulation in which the eddy mixing is due
in part to surface conditions (Hong and Pan 1996).

3.7 Atmospheric radiation parameterization


Radiation schemes provide atmospheric heating due to radiative flux divergence and
surface downward longwave and shortwave radiation for the ground heat budget.
Longwave radiation includes infrared or thermal radiation absorbed and emitted by gases
and surfaces. Upward longwave radiative flux from the ground is determined by the
surface emissivity that in turn depends upon land-use type, as well as the ground (skin)
temperature. Shortwave radiation includes visible and surrounding wavelengths that
make up the solar spectrum. Hence, the only source is the Sun, but processes include
absorption, reflection, and scattering in the atmosphere and at surfaces. For shortwave
radiation, the upward flux is the reflection due to surface albedo. Within the atmosphere,
radiation responds to model-predicted cloud and water vapor distributions, as well as
specified carbon dioxide, ozone, and (optionally) trace gas concentrations and
particulates. All the radiation schemes in WRF currently are column (one-dimensional)
schemes, so each column is treated independently, and the fluxes correspond to those in
infinite horizontally uniform planes, which is a good approximation if the vertical
thickness of the model layers is much less than the horizontal grid length. This
assumption would become less accurate at high horizontal resolution, especially where
there is sloping topography. Atmospheric radiation codes are quite complex and
computationally intensive and are therefore often invoked at less frequent intervals than
the rest of the model physics. The HWRF radiation parameterization used in operations is
that from GFDL (see below) and is similar to the one used in the NAM. Compared to
extra-tropical phenomena, hurricanes are less dependent on radiative fluxes except when
migrating out of the tropics and/or progressing over land. Radiation-cloud interactions
may be more important than direct radiative impacts, except during extra-tropical
transition.
The Eta GFDL longwave scheme

This longwave radiation scheme follows the simplified exchange method of Fels and
Schwarzkopf (1975) and Schwarzkopf and Fels (1991), with calculation over spectral
bands associated with carbon dioxide, water vapor, and ozone. Included are Schwarzkopf
and Fels (1985) transmission coefficients for carbon dioxide, a Roberts et al. (1976)
water vapor continuum, and the effects of water vapor-carbon dioxide overlap and of a
Voigt line-shape correction. The Rodgers (1968) formulation is adopted for ozone
absorption. Clouds are randomly overlapped. More recent versions of the GFDL
longwave radiation scheme, such as the one used in the NAM model but not adopted in
HWRF, contain parameters for urban effects, as well as surface emissivities that can be
different than 1.0.
The Eta GFDL shortwave scheme

This shortwave radiation is a GFDL version of the Lacis and Hansen (1974)
parameterization. Effects of atmospheric water vapor, ozone (both from Lacis and
Hansen 1974), and carbon dioxide (Sasamori et al. 1972) are employed. Clouds are
randomly overlapped. Shortwave calculations are made using a daylight-mean cosine
solar zenith angle for the specific time and grid location averaged over the time interval
(given by the radiation call frequency). The newest version of the GFDL shortwave
radiation scheme, used for example in the NAM model but not adopted in HWRF,
contains parameters for urban effects.

3.8 Physics interactions
While the model physics parameterizations are categorized in a modular way, it should be
noted that there are many interactions between them via the model state variables (potential
temperature, moisture, wind, etc.) and their tendencies, and via the surface fluxes. The
surface physics, while not explicitly producing tendencies of atmospheric state variables, is
responsible for updating the land-state variables as well as updating fluxes for ocean
coupling. Note also that the microphysics does not output tendencies, but updates the
atmospheric state at the end of the model time-step. The radiation, cumulus parameterization,
and PBL schemes all output tendencies, but the tendencies are not added until later in the
solver, so the order of call is not important. Moreover, the physics schemes do not have to be
called at the same frequency as each other or at the basic model dynamic time step. When
lower frequencies are used, their tendencies are kept constant between calls or time
interpolated between the calling intervals. In contrast to HWRF, note that the GFDL
hurricane modeling system calls all physics packages once per time step except for radiation.
The land-surface and ocean models, excluding simple ones, also require rainfall from the
microphysics and cumulus schemes. The boundary-layer scheme is necessarily invoked after
the land-surface scheme because it requires the heat and moisture fluxes.

4.0 Design of Moving Nest
HWRF, which uses the NMM dynamic core under the WRF model software framework,
supports moving, one- or two-way interactive nests. While WRF-NMM can handle
multiple stationary domains at the same nest level, and/or multiple nest levels
(telescoping) with two-way interaction, the HWRF configuration employs a single
domain per nest level. In HWRF, the 9- and 3-km domains follow the storm, while the
27-km parent domain is stationary. When more than one tropical storm is observed, a
separate, independent run of HWRF is launched for each storm so that every storm has its
own high-resolution moving nest.
In the current implementation of the nesting algorithm, only horizontal refinement is
available; that is, there is no vertical nesting option. The nested grids refine the parent grid
at a 1:3 ratio, based on the Arakawa E-grid staggering structure. Correspondingly, the time
step ratio between the coarse and fine grids is 1:3 as
well. The mass points of the nested grids are aligned with those of the coarser grids
within which they are nested. The coincidence of grid points between the parent and
nested domains simplifies remapping and feedback procedures. The design of
constructing nesting grids also conforms to the parallel strategy within the WRF
advanced software framework (Michalakes et al. 2004) and enhances code portability of
the model for various applications. HWRF inherits time controlling capabilities of WRF-
NMM, which means that HWRF can initialize and terminate the integration of nested
grids at any time during the model run. In the operational implementation, nested grids
are present throughout the entire forecast.

4.1 Grid Structure


As described in the NMM scientific documentation (Janjic et al. 2010), the WRF-NMM
is a non-hydrostatic model formulated on a rotated latitude-longitude, Arakawa E-grid,
with a hybrid pressure-sigma vertical coordinate system. The rotated latitude-
longitude coordinate is transformed in such a way that the coordinate origin is located in
the center of the parent domain, and the x-axis and y-axis are aligned with the new
coordinate equator and the prime meridian through the domain center, respectively
(Figure 4.1). In order to deal with multi-scale forecasting, a horizontal mesh refinement
capability was developed for this system. All interpolations from the parent to the nested
domain are achieved on a rotated latitude-longitude E-grid. The nested domain can be
freely moved anywhere within the grid points of the parent domain, yet the nested
domain rotated latitude-longitude lines will always coincide with the rotated latitude-
longitude lines on the mass grid of the parent domain at fixed parent-to-nest grid-size
ratio 1:3.

Figure 4.1 Schematic of the rotated latitude-longitude grid. The blue dot is the rotated
latitude-longitude coordinate origin. The origin is the intersection of the new coordinate
equator and prime meridian, and can be located anywhere on Earth.

4.2 Terrain Treatment


Terrestrial properties are an important external forcing on the dynamics and
thermodynamics of any numerical model. The impact of terrain effects on TC track,
intensity, and structure has been recognized in many previous studies (e.g., Lin 2007).
Therefore, careful treatment of static terrestrial conditions, such as terrain and land-sea
contrast, is necessary to limit contamination and possible computational noise in the
modeled solution arising from improper adjustment from coarse- to fine-resolution
terrestrial information.
The terrain treatment in the HWRF system is tailored to the TC problem by using high-
resolution topography to take into account the detailed topographic effects of the complex
islands and landmasses. In order for the movable nests to always have access to high-
resolution topography, before the forecast starts WPS is used to interpolate topography
information from prescribed high-resolution terrain datasets to the required grid
resolution over the entire parent domain. For example, in a typical operational forecast at
27 km with two center-aligned movable nests at 9- and 3-km resolutions, terrestrial data
are generated at all three resolutions for the entire static parent domain shown in Figure
4.2. This way, when the grid moves during the forecast, it always has access to high-
resolution topographic information.

Figure 4.2 An example of model topography differences for domains at 27-km (blue) and
3-km (red) resolution. The cross section is along latitude 22°N, between
longitudes 85.28°W and 79.32°W. The largest differences are in the mountainous area of
eastern Cuba.
Topography is the only static dataset generated in high resolution. For pragmatic
considerations, all other static terrestrial information for nests is downscaled from the
coarser-resolution parent domain.
The terrain within the nest is smoothed before being used. The grids are smoothed
through a four-point weighted average with a special treatment at the four corners. Points
in the inner part of the domain are smoothed using

where thr is the high-resolution terrain in a nest. Points along the boundary use a modified
equation. For example, for the western boundary, the equation is

The smoothed topography in the corner points also follows a modified formula. For
example, the average terrain at the southwest point is given by

The high-resolution terrain conforms to the binary land-sea mask categories; that is,
only land points have terrain assigned.

4.3 Moving Nest Algorithm
The use of enhanced resolution only in the TC region is a pragmatic solution commonly
adopted in the tropical cyclone NWP community to reduce computational costs. In order
for the TC to be always contained in the highest-resolution domain, that domain has to
move to follow the storm.
The nest motion for tropical cyclones and tropical depressions is currently based on the
GFDL Vortex Tracker (Section 5). Nine fields are calculated and smoothed, and then
their extremes (fix locations) are calculated:
1-3. Minimum wind speed at 10 meters, 850 hPa and 700 hPa
4-6. Maximum vorticity at 10 meters, 850 hPa and 700 hPa
7. Minimum MSLP
8-9. Minimum geopotential height at 700 and 850 hPa
Once all nine fix locations have been calculated, the standard deviation of the fix
locations with respect to the domain center is calculated. Fix locations that are far from
the domain center are discarded. The maximum allowed distance differs by parameter
and varies with time, based on prior fix-to-center standard deviations. Once the final set
of parameters is chosen, the mean fix location is used as the storm center. The standard
deviation of the fix parameters is stored for discarding fix parameters at later timesteps.
Lastly, the nest is moved if the storm location is displaced by more than two parent
gridpoints in the Y (rotated north-south) direction or by more than one parent gridpoint in
the X (rotated east-west) direction. One should note that numerous fields are passed
between domains before and after the grid motion at every time step; in addition, the
interpolation and hydrostatic mass balancing described in the next subsection are also
applied along the leading edge of the moving nest.
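The overall logic of combining the individual parameter fixes into a nest-motion decision can be sketched as follows. The function below is a schematic stand-in, not the operational algorithm: it uses a simple fixed distance cutoff in place of the time-varying, parameter-dependent standard-deviation test described above, treats the displacement relative to the domain center, and uses hypothetical inputs expressed in degrees.

```python
import math


def nest_move_decision(fix_locations, domain_center, parent_dx_deg, cutoff_deg=1.0):
    """Schematic combination of parameter fixes into a nest-motion decision.

    fix_locations: list of (lat, lon) fixes from the tracked fields.
    domain_center: (lat, lon) of the current nest center.
    parent_dx_deg: parent grid increment in degrees.
    Returns (storm_center, move_x, move_y).
    """
    lat0, lon0 = domain_center

    # Discard fixes far from the domain center (stand-in for the std-dev test).
    kept = [(lat, lon) for lat, lon in fix_locations
            if math.hypot(lat - lat0, lon - lon0) <= cutoff_deg]
    if not kept:
        kept = fix_locations

    # The mean of the surviving fixes defines the storm center.
    storm_lat = sum(lat for lat, _ in kept) / len(kept)
    storm_lon = sum(lon for _, lon in kept) / len(kept)

    # Move if displaced by more than two parent points in Y or one in X.
    move_y = abs(storm_lat - lat0) > 2.0 * parent_dx_deg
    move_x = abs(storm_lon - lon0) > 1.0 * parent_dx_deg
    return (storm_lat, storm_lon), move_x, move_y


fixes = [(25.1, -75.2), (25.2, -75.1), (25.0, -75.3), (27.5, -70.0)]  # last fix is an outlier
print(nest_move_decision(fixes, domain_center=(25.0, -75.0), parent_dx_deg=0.18))
```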

4.4 Fine Grid Initialization


The generation of initial conditions for the HWRF parent domain is discussed in
Chapter 1. For the nests, all variables, except topography, are initialized using the
corresponding variables downscaled from the parent grid during the integration. In order
to alleviate potential problems related to singularities due to the high-resolution terrestrial
information in the nested domain, the land variables, such as the land-sea mask, soil
temperature, and vegetation type, are initialized exclusively through a nearest-neighbor
approach at the initial time step and along the leading edge during the integration.
To obtain the temperature, geopotential, and moisture fields for the nest initialization,
hydrostatic mass balance is applied. The first step is to horizontally interpolate coarser-
resolution data to the fine-resolution grid. The second step is to apply the high-resolution
terrain and the geopotential to determine the surface pressure on the nest. The pressure
values in the nest hybrid surfaces are then calculated. The final step is to compute the
geopotential, temperature and moisture fields on the nest hybrid surfaces using linear
interpolation in a logarithm of pressure vertical coordinate. The schematic procedure is
illustrated in Figure 4.3. The zonal and meridional components of the wind are obtained
by first performing a horizontal interpolation from the parent to the nest grid points using
a bi-linear algorithm over the diamond-shaped area indicated in grey in Figure 4.4. The
wind components are then linearly interpolated in the vertical from the parent hybrid
surfaces onto the nest hybrid surfaces. Note that, while the hybrid levels of the nest and
parent in sigma space coincide, the nest and the parent do not have the same levels in
pressure or height space. This is due to the differing topography, and consequently
different surface pressure between the nest and the parent.

Figure 4.3 An illustration of the vertical interpolation process and mass balance.
Hydrostatic balance is assumed during the interpolation process.

Figure 4.4 The schematic E-grid refinement - dots represent the mass grid. Big and
small dots represent coarse- and fine-resolution grid points, respectively. The black
square represents the nest domain. The diamond-shaped area on the right side is composed
of four big dots representing the bilinear interpolation control points.

4.5 Lateral Boundary Conditions


Figure 4.5 illustrates a sample E-grid structure, in which the outermost rows and columns
of the nest are termed the prescribed interface, and the third rows and columns are termed
the dynamic interface. The prescribed interface is forced to be identical to the parent
domain interpolated to the nest grid points. The dynamic interface is obtained from
internal computations within the nest. The second rows and columns are a blend of the
first and third rows/columns. Because the prescribed interface is well separated from the
dynamic interface in the E-grid structure, nested boundaries can be updated at every time
step of the parent domain exactly the same way as the parent domain boundary is updated
from the external data source. This is done using bi-linear interpolation and extrapolation,
together with the same mass adjustment procedure previously described. This approach is
simple, and yet provides an effective way of updating the interface without excessive
distortion or noise.

Figure 4.5 Lateral boundary condition buffer zone - the outermost column and row are
prescribed by external data from either a global or a regional model. The blending
zone is an average of the data prescribed by the global or regional model and those
predicted in the HWRF domain. Model integration denotes the solution predicted by
HWRF. Δψ and Δλ are the grid increments in the rotated latitude-longitude coordinate.

4.6 Feedback
The feedback from the fine-resolution domain to the coarse-resolution domain is an
important process for a hurricane forecast model. It reflects the multi-scale physical
interactions in the hurricane environment. Feedback uses the same mass adjustment
procedure previously described, except that the parent pressure is retained. In addition,
rather than horizontal interpolation, horizontal averaging is used: the nine fine-grid points
surrounding a coarser-resolution grid point are averaged before the mass adjustment
process. Furthermore, feedback is applied with a 0.5 weighting factor, replacing the
coarse-grid data with the average of the coarse- and fine-grid data.
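A minimal sketch of this feedback averaging is given below, assuming a 3:1 grid ratio: the nine nest points surrounding a parent point are averaged, and the parent value is then blended with that average using the 0.5 weighting factor. The input layout is hypothetical and the mass-adjustment step is omitted.

```python
def feedback_value(parent_value, nest_block):
    """Blend a parent-grid value with the mean of the nine surrounding nest points.

    nest_block: iterable of the nine fine-grid values surrounding the parent point.
    A 0.5 weighting factor replaces the parent value with the average of the
    parent value and the fine-grid mean.
    """
    values = list(nest_block)
    if len(values) != 9:
        raise ValueError("expected the nine fine-grid points surrounding the parent point")
    fine_mean = sum(values) / 9.0
    return 0.5 * parent_value + 0.5 * fine_mean


# Example: a parent temperature of 300 K surrounded by slightly warmer nest values.
print(feedback_value(300.0, [300.4] * 9))   # -> 300.2
```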

5.0 Use of the GFDL Vortex Tracker
5.1 Introduction
Numerical modeling has become an increasingly important component of hurricane
research and operational hurricane forecasting. Advances in modeling techniques, as
well as in fundamental understanding of the dynamics of tropical cyclones, have enabled
numerical simulations of hurricanes to become more realistic and contributed to
hurricane forecasts becoming more skillful. One critical element of assessing the
performance of a hurricane model is the evaluation of its track and intensity forecasts.
These forecasts are typically represented in the form of text data that are output either
directly from the forecast model or in a post-processing step of the modeling system
using an external vortex tracker. This document provides a description of the GFDL
vortex tracker (Marchok 2002), which operates as a standalone tracking system in a post-
processing step. The GFDL vortex tracker has been used as an operational tool by NCEP
since 1998, and it is flexible enough to operate on a variety of regional and global models
of varying resolutions. In addition, the tracker has been updated for the 2012 release so
that it can function in a mode in which it will also detect new cyclones that the model
develops during the course of a forecast, but this capability is not used operationally.

5.1.1 Purpose of the vortex tracker


A numerical model produces an abundance of digital output, with up to hundreds of
variables on dozens of vertical levels, including variables for mass, momentum, density,
moisture, and various surface and free-atmosphere fluxes. While a tropical cyclone’s
center is defined by its low-level circulation features, a comparison of synoptic plots of
various low-level parameters will often reveal a range of variability in a storm’s center
position. This variability can be particularly large for storms that are either just forming
or are undergoing extratropical transition. Figure 5.1 illustrates this variability for a case
of Tropical Storm Debby (2006) in an analysis from the NCEP GFS. At this time, Debby
was a weak, 40-kt tropical storm, and the variability in the center location fixes indicates
that the model had not yet developed a coherent vertical structure for the storm.

Figure 5.1: Mean sea level pressure (contours, mb), 850 mb relative vorticity (shaded,
s⁻¹ × 10⁵) and 850 mb winds (vectors, m s⁻¹) from the NCEP GFS analysis for Tropical
Storm Debby, valid at 06 UTC 24 August 2006. The triangle, diamond and square
symbols indicate the locations at which the GFDL vortex tracker identified the center
position fix for each of the three parameters. The notation to the left of the synoptic plot
indicates that the distance between the 850 mb vorticity center and the mslp center is 173
km.
A vortex tracker is needed in order to objectively analyze the data and provide a best
estimate of the storm’s central position and then track the storm throughout the duration
of the forecast. Depending on the complexity of the tracker, additional metrics can be
reported, including the minimum sea-level pressure, the maximum near-surface wind
speed, the radii of gale-, storm- and hurricane-force winds in each storm quadrant,
parameters that describe the thermodynamic structure or phase of the storm, and
parameters that detail the spatial distribution of the near-surface winds. This document
will focus primarily on the basic functioning of the tracker and its reporting of the track,
intensity and wind radii parameters.

5.1.2 Key issues in the design of a vortex tracker
When designing a tracking scheme, there are two fundamental issues that must be
considered. The first issue is deciding on the method used to locate a maximum or a
minimum in some field of values. There are numerous methods that can be used for this
purpose. The simplest method is to scan the field of values and pick out the maximum or
minimum at one of the model output grid points. However, this method restricts the
maximum or minimum value to being located at one of the fixed data points on the grid.
For many grids, especially those with coarser resolutions, the actual maximum or
minimum value may fall between grid points. The data can be interpolated to a finer
resolution, but interpolation is a procedure that can be both expensive and complicated to
generalize for usage with both regional and global grids over a range of resolutions. In
addition, a problem can still remain after interpolation in which the tracking scheme
needs to choose between two or more candidate points with identical values that are
located close to one another. The GFDL vortex tracker uses a scheme that employs a
Barnes analysis of the data values at each candidate grid point to provide a field of values
that have been weight-averaged based on distance from the candidate grid point. This
technique, which will be described in detail below, helps to mitigate the issues described
above.
The second issue involves finding the right balance between making the scheme sensitive
enough so that it can detect and track weaker storms, and making it overly sensitive such
that it continues tracking for too long and tracks weak remnants that no longer resemble a
cyclone, or worse, it jumps to a stronger passing storm and begins tracking that storm
instead. There are several checks that have been included in the GFDL vortex tracker,
some with thresholds that can be adjusted either in the source code or via namelists as
inputs to the executable. These will be described below.
The remainder of this document will describe in detail the design and functioning of the
GFDL vortex tracker. Section 5.2 will focus on the design of the tracker and the input
data that it needs. Section 5.3 presents a discussion of the various low-level parameters
that are tracked and how they are combined to produce a mean position fix at a given lead
time. Section 5.4 describes how the maximum wind and the various wind radii in each
storm quadrant are obtained. Section 5.5 describes diagnostics that are performed by the
tracker to analyze the thermodynamic phase of a model cyclone. Section 5.6 details
usage of the tracker for the purpose of detecting and tracking new, model-generated
storms, and Section 5.7 provides detail on the tracker output.

5.2 Design of the Tracking System


5.2.1 Input data requirements
The GFDL vortex tracker can operate in two different modes. In the basic mode, it will
perform tracking only for storms that have been numbered by a Regional Specialized
Meteorological Center (RSMC), such as the National Hurricane Center (NHC). It can
also operate in a mode in which it detects and tracks new storms that a model generates
during the course of a forecast.
5.2.1.1 Synoptic forecast data
The tracker requires input data to be in Gridded Binary (GRIB) version 1 format, on a
cylindrical equidistant, latitude-longitude (lat/lon) grid. While the dx and dy grid
increments each need to be uniform across the grid, dx does not need to be equal to dy.
The data should be ordered so that j increments from north to south and i increments from
west to east, such that point (1,1) is in the far northwestern part of the grid, and point
(imax,jmax) is in the far southeastern part of the grid. Data files that instead have data
values incrementing from south to north can be flipped prior to execution of the tracker
using an external GRIB file manipulation tool.
The data files do not need to have regular spacing for the lead time intervals. This
flexibility allows the user to obtain tracker output using output model data at more
frequent time intervals around a particular time of interest. The tracker reads in a list of
forecast lead times from a text file that the user prepares. The tracker has the ability to
process GRIB files that have the lead times identified in the Product Definition Section
(PDS) of the GRIB header as either hours or minutes. The choice for using either minutes
or hours is passed to the program via a namelist option. Regardless of which choice is
made, those lead times must be listed in the user input text file as integers in units of
minutes (the exact required format can be seen in the read statement in subroutine
read_fhours), and then the tracker can manipulate the hours and minutes as needed.

5.2.1.2 Real-time observed storm data


The tracker works by searching for a vortex initially at a location specified by a 1-line
text record that is produced by either NHC for storms in the Atlantic, eastern Pacific and
central Pacific basins, or by the Joint Typhoon Warning Center (JTWC) for storms in
other global basins. This record contains just the basic, vital information necessary to
define the observed location and intensity parameters of the storm, and it is commonly
referred to as the “TC vitals” record. An example TC vitals record is shown here for
Katrina for the observed time of 00 UTC 29 August 2005:
NHC 12L KATRINA 20050829 0000 272N 0891W 335 046 0904 1006 0649 72 037
0371 0334 0278 0334 D 0204 0185 0139 0185 72 410N 815W 0167 0167 0093 0167
The critical information needed from the TC vitals record for tracking is the Automated
Tropical Cyclone Forecast (ATCF) ID number for the storm (12L), the observed time
(20050829 0000), and the location of the storm, indicated here as “272N 0891W”, or
27.2o North, 89.1o West. For this example, the tracker would start looking for Katrina in
the 00 UTC 29 August 2005 analysis for a given model at 27.2o North, 89.1o West, and if
it finds a storm near there, it records its position, writes out a record in a specific text
format that contains critical storm forecast location and intensity forecast data, and then
makes a guess for the next position at the next forecast lead time to begin searching
again.
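As an illustration of how the position fields in a TC vitals record are interpreted, the snippet below converts the "272N 0891W" style latitude/longitude tokens into decimal degrees. It is a simplified reader for just those two tokens, not a full TC vitals parser, and the function name is hypothetical.

```python
def tcvitals_latlon(lat_token, lon_token):
    """Convert TC vitals position tokens (tenths of a degree plus hemisphere letter)
    into signed decimal degrees, e.g. '272N', '0891W' -> (27.2, -89.1)."""
    lat = int(lat_token[:-1]) / 10.0
    if lat_token[-1].upper() == "S":
        lat = -lat
    lon = int(lon_token[:-1]) / 10.0
    if lon_token[-1].upper() == "W":
        lon = -lon
    return lat, lon


print(tcvitals_latlon("272N", "0891W"))   # (27.2, -89.1) for the Katrina example above
```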

5.2.2 The search algorithm
To locate a maximum or minimum value for a given variable, we employ a single-pass
Barnes analysis (Barnes 1964, Barnes 1973) at grid points in an array centered initially
around the NHC-observed position of the storm. We refer to this NHC-observed position
as the initial guess position. For a given variable F, the Barnes analysis, B, at a given
point, g, in this array is given as:

B(g) = Σn wn Fn / Σn wn ,    (5.2.1.2.1)

where w is the weighting function defined by:

wn = exp(−dn² / re²) ,    (5.2.1.2.2)

and where dn is the distance from a data point, n, to the grid point, g, and re is the e-
folding radius. The e-folding radius is the distance at which the weighting drops off to a
value of 1/e, and this value can be adjusted. Currently, most regional and global model
grids fall into a category with output file grid spacing between about 0.1° and 1.25°, and
for those we use a value of re = 75 km. For any models with resolutions coarser than
1.25°, we use a value of re = 150 km. For model grids with a grid spacing finer than 0.1°,
we use a value of re = 60 km. The overriding idea is that we want to find a balance
whereby we include enough points in the averaging process to produce a weighted
average from the Barnes function that is representative of the surrounding region, but not
so many points that finer scale details are smoothed out to the degree of making it
difficult to differentiate the average value at one grid point from that of an adjacent point.
The Barnes analysis provides an array of Gaussian weighted-average data values
surrounding the initial guess position. The center is defined as the point at which this
function is maximized (e.g., Northern Hemisphere relative vorticity) or minimized (e.g.,
geopotential height, sea level pressure, Southern Hemisphere relative vorticity),
depending on the parameter being analyzed.
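A minimal single-pass Barnes average, consistent with the weighting function above, is sketched here. The point lists and e-folding radius are placeholders, and distances are computed with a simple flat-plane approximation rather than the great-circle distance a real tracker would use.

```python
import math


def barnes_value(grid_point, data_points, re_km=75.0):
    """Gaussian-weighted average of data values about one candidate grid point.

    grid_point: (x_km, y_km) location of the candidate point.
    data_points: list of (x_km, y_km, value) tuples.
    """
    gx, gy = grid_point
    num = den = 0.0
    for x, y, value in data_points:
        d = math.hypot(x - gx, y - gy)             # distance dn from data point to grid point
        w = math.exp(-(d * d) / (re_km * re_km))   # weight drops to 1/e at d = re
        num += w * value
        den += w
    return num / den


# Example: sea level pressure values (hPa) scattered around a candidate center point.
obs = [(0.0, 0.0, 985.0), (50.0, 0.0, 990.0), (0.0, 80.0, 994.0), (120.0, 60.0, 999.0)]
print(round(barnes_value((10.0, 10.0), obs), 2))
```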
As described above, the center location for a given parameter will often lie between grid
points, and this is especially true for coarser resolution grids. In order to produce a
position fix with enough precision such that center fixes for variables with center
locations between grid points can be properly represented, it may be necessary to perform
several iterations of the Barnes analysis. In the initial iteration, a Barnes analysis grid is
defined with grid spacing equal to that of the input data grid, and the weighted values
from the Barnes analysis are assigned to the points on the analysis grid. The difference
between the input data grid and the Barnes analysis grid is that the input data grid has
specific (i,j) locations that are fixed, while for the analysis grid we can define an array of
points, relative to the guess position in latitude-longitude space. After a position fix is
returned from the first iteration of the Barnes analysis, we can perform an additional
iteration of the Barnes analysis, this time centering the analysis grid on the position fix
from the first iteration. In this second iteration, the search area for the center location is
restricted, and the grid spacing of the Barnes analysis grid is halved in order to produce a
finer resolution position fix. We can iterate this process a number of times and run the
Barnes analysis over increasingly finer resolution analysis grids in order to more
precisely fix the center position. In the current version of the tracker, we specify a
variable (“nhalf”) to indicate that five additional iterations of the Barnes analysis should
be done for grids with spacing greater than 0.2°. For example, for a grid with original
grid spacing of 1°, halving the analysis grid spacing five times would result in a final
analysis grid spacing of approximately 3 km, which is already beyond the one-tenth of a
degree precision contained in the observational Best Track dataset. For data grids with
original spacing of less than 0.2°, such as the operational HWRF, only two additional
Barnes iterations are performed, and for grids with spacing less than 0.05°, only one
additional Barnes iteration is performed.
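The effect of this iterative grid-spacing halving can be illustrated with a few lines. The rule below mirrors the nhalf description in the text (five extra iterations above 0.2°, two for finer grids, one below 0.05°), but the functions are only back-of-the-envelope helpers, not tracker code; the behavior exactly at the 0.2° boundary is an assumption.

```python
def extra_barnes_iterations(grid_spacing_deg):
    """Number of additional grid-halving Barnes iterations, per the rules in the text."""
    if grid_spacing_deg > 0.2:
        return 5
    if grid_spacing_deg >= 0.05:
        return 2
    return 1


def final_fix_spacing(grid_spacing_deg):
    """Approximate precision (degrees) of the final position fix after the halvings."""
    return grid_spacing_deg / 2 ** extra_barnes_iterations(grid_spacing_deg)


for dx in (1.0, 0.25, 0.1, 0.03):
    print(dx, extra_barnes_iterations(dx), round(final_fix_spacing(dx), 5))
```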

5.2.3 Tracking a vortex throughout a forecast


A tracking algorithm ultimately produces a set of points that contains information on the
forecast location of the storm at discrete time intervals. A fundamental challenge is
ensuring that the points that are connected from one lead time to the next do in fact
represent points from the same storm and that there is no “contamination” introduced by
accidentally having the tracker follow a different storm. This challenge becomes greater
for model output with longer intervals between lead times. For example, it is far easier to
know with certainty that a nearby storm is the same storm that we have been tracking up
to this time if the last position fix only occurred 30 minutes ago in model time as opposed
to it having occurred 12 hours ago. This section deals with how the model handles the
tracking of a vortex from one lead time to the next and what types of quality control
checks are applied.

5.2.3.1 Tracking from one lead time to the next


If the tracker finds a storm at a given lead time, it needs to know where to begin
searching for the storm at the next lead time. There are two methods that the tracker
employs for this purpose. In the first method, a Barnes analysis is performed for the
location at which the tracker position fix was made for the current lead time. This
analysis is performed for the winds at 500, 700 and 850 mb, using a relatively large e-
folding radius of 500 km. The idea here is to create smoothed fields that represent the
mean fields at each level. The mean values from these three levels are then averaged
together to give a wind vector that can be used as a deep layer mean steering wind. A
hypothetical parcel is then advected according to the deep layer mean wind for the length
of the lead time interval in order to produce a dynamically generated guess position for
the next lead time.
The second method uses a basic linear extrapolation of the current model storm motion.
For all lead times after the initial time, this method can be employed by using the
previous and current forecast position fixes. For the initial time, there is obviously no
previous position from the current model forecast to use for an extrapolation; however,
this extrapolation method is still used at the initial time by instead using the observed
storm motion vector information that is read from the TC vitals record. This method of

using the storm motion vector is not as reliable, however, since the observed storm
motion vector may differ from the model storm motion vector.
The estimates from these two methods are averaged together to produce a position guess
around which the tracker will begin searching for the storm at the next lead time. Both of
these methods use estimates that are static in time, and therefore error is introduced in the
position guesses. Those errors obviously become larger with increasingly longer lead
time intervals. However, it is important to note that these are only position guesses, and
the tracker will allow a position fix to be made up to a certain distance from that position
guess. Experience in operations has shown the combination of these two methods to be a
reliable means of providing position guesses for successive lead times, even for model
output with lead time intervals of 12 hours. Cases that warrant extra caution with this
method include those in which the storm begins to rapidly accelerate or decelerate and
those in which the storm is rapidly recurving into the westerlies.
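A minimal Python sketch of the extrapolation guess and of the averaging of the two guesses (the names and the simple latitude-longitude arithmetic are illustrative assumptions, not the tracker's Fortran code):

def extrapolation_guess(prev_fix, curr_fix, dt_prev_hours, dt_next_hours):
    # Linearly extrapolate the current model storm motion to the next lead time.
    lat_rate = (curr_fix[0] - prev_fix[0]) / dt_prev_hours
    lon_rate = (curr_fix[1] - prev_fix[1]) / dt_prev_hours
    return (curr_fix[0] + lat_rate * dt_next_hours,
            curr_fix[1] + lon_rate * dt_next_hours)

def combined_guess(advection_fix, extrapolation_fix):
    # Average the two estimates; the tracker then searches near this guess,
    # allowing the actual position fix to fall some distance away from it.
    return ((advection_fix[0] + extrapolation_fix[0]) / 2.0,
            (advection_fix[1] + extrapolation_fix[1]) / 2.0)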

5.2.3.2 Quality control checks


Once the tracker has produced a position fix at a given lead time, a number of checks are
performed to help ensure that the system the tracker found is not only a storm, but also is
the same storm that has been tracked to this point in the forecast. As a first check, the sea
level pressures of the points surrounding the position fix are evaluated to determine if a
pressure gradient exceeding a particular threshold exists and is sloped in the correct
direction. This is a fairly easy criterion for a storm to satisfy, since the gradient
threshold need only be exceeded along some azimuthal direction rather than by the
azimuthally averaged gradient. The threshold can be set by the user in the run script by specifying its
value in the “mslpthresh” variable. In the current version of the tracker, the mslpthresh
variable is set to a value of 0.0015 mb/km, which is equivalent to 0.5 mb per 333 km.
A second check involves the wind circulation at 850 mb. The tangential component of
the wind (VT) is computed for all points within 225 km of the position fix, and the mean
VT must be cyclonic and exceed a user-specified threshold. This threshold is also set in
the run script by specifying the value of the v850thresh variable. This variable has units
of m s-1 and is set in the current version of the tracker to 1.5 m s-1.
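These two checks can be summarized with the following minimal Python sketch (the data layout and function names are hypothetical; the thresholds are the defaults quoted above):

def passes_mslp_gradient_check(center_mslp_mb, surrounding_points,
                               mslpthresh=0.0015):
    # surrounding_points: (mslp_mb, distance_km) pairs around the position fix.
    # The gradient only needs to exceed the threshold, sloping upward away
    # from the center, in some azimuthal direction.
    return any((p - center_mslp_mb) / d > mslpthresh
               for p, d in surrounding_points)

def passes_v850_circulation_check(tangential_winds_ms, v850thresh=1.5):
    # tangential_winds_ms: cyclonic-positive tangential wind at 850 mb for all
    # points within 225 km of the fix; the mean must be cyclonic and exceed
    # the threshold.
    mean_vt = sum(tangential_winds_ms) / len(tangential_winds_ms)
    return mean_vt > v850thresh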
For a third check, the distance between the position fixes for two parameters is evaluated
to ensure it does not exceed a specified distance. As will be described below in Section
5.3, the tracker finds the center location of several different low-level parameters. If the
distance between the mean sea-level pressure (mslp) and 850 mb relative vorticity
position fixes becomes too large, it could indicate either that the storm is becoming too
disorganized due to dissipation or that it is undergoing extratropical transition and the
tracker may have incorrectly “locked on” to a different storm nearby with one of
those two parameter fixes. In either case, if that distance is exceeded, the tracker will
stop tracking for this particular storm. That distance threshold is specified by the variable
“max_mslp_850” in subroutine tracker, and it is currently set at 323 km for most models,
including HWRF.
One final check is made of the model storm’s translation speed. The current and previous
position fixes are used to calculate the average speed that the model storm must have
traveled in order to reach the current position, and if that speed exceeds a certain
threshold, then the tracker assumes that it has incorrectly locked on to a different storm
nearby and tracking is stopped for this storm. That speed is specified by the
“maxspeed_tc” variable in module error_parms and is currently set to a value of 60 kt. It
should be noted here that during the evaluation of model forecasts from the Hurricane
Forecast Improvement Project (HFIP) High Resolution Hurricane (HRH) test in 2008,
this storm translation speed check was responsible for erroneously stopping a number of
forecasts. The problem arose for cases in which a very weak model storm center
reformed after only 30 minutes of model time at a location more than 100 km away.
While such behavior is reasonable for a very weak but developing storm to exhibit, this
large shifting of storm position over a very short time period resulted in a computed
translation speed that exceeded the threshold. If necessary, this problem can be
circumvented by setting the maxspeed_tc threshold to an unrealistically high value.
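A minimal Python sketch of the translation-speed check (the haversine distance helper and the function names are our own illustration, not the tracker source):

import math

def great_circle_km(lat1, lon1, lat2, lon2):
    # Haversine distance on a sphere of radius 6371 km.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2 +
         math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2.0 * 6371.0 * math.asin(math.sqrt(a))

def passes_speed_check(prev_fix, curr_fix, dt_hours, maxspeed_tc_kt=60.0):
    # The average translation speed implied by consecutive fixes must not
    # exceed the threshold; otherwise tracking is stopped for this storm.
    dist_km = great_circle_km(prev_fix[0], prev_fix[1], curr_fix[0], curr_fix[1])
    speed_kt = dist_km / dt_hours / 1.852   # km per hour converted to knots
    return speed_kt <= maxspeed_tc_kt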
It is important to point out that while these last two quality control checks will
occasionally terminate tracking for storms that are undergoing extratropical transition
(ET), the intended purpose is not to stop tracking when ET is taking place. To the
contrary, we want to continue tracking in order to provide track and intensity guidance
for as long as possible in the forecast, and furthermore the model forecast of the onset of
ET may not correspond at all to what happens with the observed storm. These last two
checks are instead meant to stop tracking if the tracker detects that it may have
erroneously begun to track a different, nearby storm.
The current version of the tracker also contains code to report on the thermodynamic
phase of the system, that is, whether the system is tropical, extratropical, etc. This code
requires input data that has been interpolated to certain levels and/or averaged, as will be
described in Section 5.5.

5.3 Parameters Used for Tracking


The GFDL vortex tracker produces position fixes for several low-level parameters. The
position fixes are then averaged together to produce the mean position fix that is reported
for that lead time. This section describes the various parameters and how the tracker
combines them in order to produce the mean position fix.

5.3.1 Description of the primary and secondary tracking variables


There are six primary parameters and three secondary parameters that are used for
tracking. All of these parameters are from the lower levels of the troposphere. The
primary parameters include relative vorticity at 10 m and at 850 and 700 mb; mslp; and
geopotential height at 850 and 700 mb. Most models, including HWRF, will output
absolute vorticity, and for those models the tracker will subtract out the Coriolis
component at each grid point. If vorticity is not included in the input GRIB data file, the
tracker will compute it using the u- and v-components of the wind that have been read in.
The Barnes analysis is performed for each of these six parameters. If the Barnes analysis
returns a location for the maximum or minimum that is within a specified distance
threshold, then that parameter’s location fix is saved for use later in computing the
average position fix. If it is not within that distance threshold, the position fix for that
parameter is discarded for that lead time. If one or more of these parameters is missing
from the input GRIB data file, the tracker simply continues tracking using the limited
subset of available parameters.
The distance thresholds are defined initially by the “err_gfs_init” and “err_reg_init”
parameters in module error_parms. Values for this initial error parameter vary according
to the resolution of the data grid, with finer resolution grids being assigned a threshold of
275 km and coarser resolution global grids being assigned a less restrictive 300 km
threshold. For lead times after the initial time, this distance threshold is defined as a
function of the standard deviation in the positions of the parameter location fixes
including up to the three previous lead times. For example, for very intense, steady-state
storms that have strong vertical coherence in their structure, the various parameter fixes
are likely to be located closely together. In these cases, the distance threshold defined by
the standard deviation of the parameter fixes will be small, as will be the tolerance for
outliers in the parameter fixes. For weak systems, or for storms that are undergoing ET,
there is less coherence to the vertical structure and often wider variance in location of the
parameter fixes. In these cases, the larger distance thresholds defined by the larger
standard deviation allow more flexibility in accepting parameter fixes that are not located
close to the guess position for a given lead time.
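The exact functional form of this threshold is set in module error_parms; purely as an illustration of the behavior described above (not the operational formula), a simple stand-in could scale the allowable distance with the spread of the recent parameter fixes:

def distance_threshold_km(recent_fix_distances_km, err_init_km=275.0,
                          floor_km=75.0):
    # recent_fix_distances_km: distances of the individual parameter fixes
    # from their mean position, accumulated over up to the three previous
    # lead times. A tight cluster yields a small threshold (low tolerance for
    # outliers); scattered fixes yield a larger, more permissive one.
    # The floor value and the use of a plain standard deviation are
    # assumptions made only for this illustration.
    if not recent_fix_distances_km:
        return err_init_km        # initial-time threshold for regional grids
    n = len(recent_fix_distances_km)
    mean = sum(recent_fix_distances_km) / n
    std = (sum((d - mean) ** 2 for d in recent_fix_distances_km) / n) ** 0.5
    return max(std, floor_km)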
After the Barnes analysis is performed for the six primary tracking parameters, tracking is
performed for three secondary wind-based parameters in order to refine the storm’s
location fix. For these secondary parameters, a search is performed for the minimum in
wind speed at the center of the storm at 10 m and at 850 and 700 mb. These are not
included as primary parameters since, in an unrestricted search in the vicinity of a storm,
it would be possible for the tracking scheme to focus in on a quiescent region outside of
the storm instead of on the calm at the center of the storm. To help ensure that the search
is focused as close to the storm center as possible, a modified guess position for the wind
minimum search is created by averaging together the original guess position for this time
and the locations of the primary parameter fixes for this lead time that are within 225 km
of the original guess position. The Barnes analysis is then called to produce location
fixes for the wind minimum at the three different vertical levels. It is important to note
that if the tracker cannot make a position fix for any of the six primary parameters, then
there will be no attempt to make a position fix using the three secondary wind-based
parameters, and tracking will terminate for that particular storm.

5.3.2 Computation of the mean position fix


Once the Barnes analysis has been completed for the primary and secondary parameters,
a mean location fix is computed for the storm. A parameter is only included in the mean
computation if its location is found within the distance threshold, as described in Section
5.3.1. The mean computation is performed in two steps. In the first step, a mean position
is computed using all available parameters found within the distance threshold. In the
second step, the distance of each parameter fix from that mean position is computed, as is
the standard deviation of the parameter fixes. The mean position fix is then recalculated
by using a Gaussian weighting that is controlled by the standard deviation of the position
fixes. The goal here is to minimize the impact of an outlier parameter fix by weighting
the mean towards the larger cluster of parameter position fixes.
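A minimal Python sketch of this two-step computation (degrees are used directly as the distance measure for brevity, and the names are ours; the operational code is Fortran):

import math

def mean_position_fix(param_fixes):
    # param_fixes: (lat, lon) fixes that already passed the distance threshold.
    n = len(param_fixes)
    # Step 1: simple arithmetic mean of all accepted parameter fixes.
    lat0 = sum(p[0] for p in param_fixes) / n
    lon0 = sum(p[1] for p in param_fixes) / n
    # Step 2: distances from that mean, their standard deviation, and a
    # Gaussian re-weighting that pulls the result toward the main cluster.
    dists = [math.hypot(p[0] - lat0, p[1] - lon0) for p in param_fixes]
    sigma = max((sum(d * d for d in dists) / n) ** 0.5, 1.0e-6)
    wts = [math.exp(-0.5 * (d / sigma) ** 2) for d in dists]
    wsum = sum(wts)
    lat = sum(w * p[0] for w, p in zip(wts, param_fixes)) / wsum
    lon = sum(w * p[1] for w, p in zip(wts, param_fixes)) / wsum
    return lat, lon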

5.4 Intensity and Wind Radii Parameters


The vortex tracker must also report on forecast data related to intensity and wind
structure. For the mslp, the value that was reported during the search for the storm center
was a smoothed value that came out of the Barnes analysis. A separate call is made to
subroutine fix_latlon_to_ij in order to return the minimum gridpoint value of mslp near
the storm center. The tracker then analyzes the near-surface wind data (10 m for HWRF
and most other models) in order to report on the value of the maximum wind speed. For
high resolution grids (spacing < 0.25°), the search for the maximum wind is restricted to
points within 200 km of the center. For coarser resolution grids with spacing up to 1.25°,
the search can extend out to 300 km from the center. The value of the radius of
maximum winds is obtained at the same time.
As large storms such as Katrina and Isabel have shown, it is important to have guidance
on the structure of the wind field in addition to the forecast maximum wind value. The
tracker provides basic reporting of the forecast near-surface wind structure
by obtaining the radii of 34-, 50- and 64-kt winds in each quadrant of the storm. The
values that are reported indicate the maximum distance at which winds of these
magnitudes were found anywhere in the quadrant and are not necessarily aligned along
any particular azimuth within a quadrant. The values are then output in the standard
ATCF text format, which will be described in Section 5.7 below.
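A minimal Python sketch of the quadrant wind-radii report (the data layout is hypothetical; the tracker works on the model's native grid):

THRESHOLDS_KT = (34.0, 50.0, 64.0)
QUADRANTS = ("NEQ", "SEQ", "SWQ", "NWQ")

def quadrant_of(bearing_deg):
    # 0-89 degrees -> NE quadrant, 90-179 -> SE, 180-269 -> SW, 270-359 -> NW.
    return QUADRANTS[int(bearing_deg % 360.0) // 90]

def wind_radii(points):
    # points: (wind_speed_kt, distance_nm, bearing_deg) relative to the center.
    # For each threshold and quadrant, keep the maximum distance at which the
    # wind meets or exceeds the threshold anywhere in that quadrant.
    radii = {t: {q: 0.0 for q in QUADRANTS} for t in THRESHOLDS_KT}
    for speed_kt, dist_nm, bearing_deg in points:
        q = quadrant_of(bearing_deg)
        for t in THRESHOLDS_KT:
            if speed_kt >= t and dist_nm > radii[t][q]:
                radii[t][q] = dist_nm
    return radii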

5.5 Thermodynamic Phase Parameters


The fundamental tracking algorithm of the tracker is designed such that it will analyze
data in order to find the central location of a cyclone and report on its intensity.
However, additional diagnostics can be performed after the tracker has located the
cyclone center at a given lead time in order to determine if a model cyclone is of a
tropical nature or not. This section describes two different methods used in the tracker
for diagnosing the thermodynamic phase of a cyclone.
The first method used by the tracker to diagnose the thermodynamic phase of cyclones is
the cyclone phase space methodology developed by Hart (2003). The tracker takes as
input the average temperature from 300 to 500 mb and the geopotential height every 50
mb from 300 to 900 mb. There are three critical parameters which are diagnosed: (1) The
storm motion-relative, left-to-right asymmetry in the lower-troposphere (900-600 mb);
(2) Warm / cold core structure in the lower troposphere (900-600 mb) as diagnosed by
assessing the vertical variation of the near-storm isobaric height gradient; and (3) Warm /
cold core structure in the upper troposphere (600-300 mb) as diagnosed by assessing the
vertical variation of the near-storm isobaric height gradient.
The second method used for diagnosing thermodynamic phase employs a more basic
algorithm, loosely based on Vitart (1997), to determine the existence of a temperature
anomaly in the 300-500 mb layer near the cyclone center. The tracker takes as input a

field containing mean temperatures in the 300-500 mb layer and runs the tracking
algorithm to locate the maximum temperature in that mean layer. It then calls a routine
to analyze the 300-500 mb mean temperature field to determine whether a closed contour
exists in the temperature field surrounding the maximum temperature. The value of the
contour interval that is checked is set by the user as an input parameter in the script, and
we have found empirically that a contour interval of 1 K provides an acceptable
threshold.
Analyses for both the cyclone phase space and for the simple check of the warm core
return values which are output in a modified ATCF format, described below in Section
5.7. It is important to note that the calculations and determinations made by these
thermodynamic diagnostics are provided as auxiliary information and will not affect how
a cyclone is tracked or how long the cyclone is tracked. In particular, the tracker will not
cease tracking a cyclone if the values returned from these thermodynamic phase
diagnostics return values which indicate the storm has either begun or completed
transition to an extratropical or subtropical cyclone. It is up to the user to interpret the
tracking and phase diagnostic results that are reported in the ATCF output.

5.6 Detecting Genesis and Tracking New Storms


As the forecasting community becomes increasingly interested in forecasts of cyclones at
longer lead times, there is also increased interest in predicting cyclone genesis. In recent
years, global models have shown the ability to develop cyclones without the aid of
synthetic bogusing techniques. The tracker algorithm has been updated to detect genesis
in numerical models and track any such new disturbances that the models develop.
Creating an algorithm for detecting new storms generated by a model presents a
somewhat more complex problem than for tracking already-existing storms. For a storm
that is already being tracked by an RSMC, an observed location is provided by that
RSMC and the tracker begins searching near that location for what is known to be a
coherent circulation in nature and is assumed to be a coherent circulation in the model.
In the case of detecting genesis, no assumptions are made about the coherence of any
circulation, and extra steps must be taken to ensure that any systems that are detected by
the tracker in the model output are not only cyclones, but tropical cyclones. It is
important to note, however, that these additional checks to determine if the system is of a
tropical nature are only done if the trkrinfo%type is set to “tcgen” in the input namelist
file. If trkrinfo%type is instead set to “midlat”, then the tracker only uses mslp for
locating the storm center, and no checks are performed to differentiate tropical from non-
tropical cyclones.
The tracker begins by searching at the forecast initial time for any RSMC-numbered
systems that may have been listed on the input TC vitals record (if provided). This is
done so that these systems are properly identified and are not subsequently detected and
identified as new cyclones by the tracker. For each RSMC-
numbered cyclone that is found, a routine named check_closed_contour is called. The
primary purpose of this routine is to determine if at least one closed contour in the mslp
field exists surrounding the cyclone. An additional important function of this routine is
to continue searching outwards from the center of the low in order to find all closed
contours surrounding the low. All grid points contained within these closed contours are
then masked out so that when the tracker searches for additional lows at the same lead
time, any points that have been masked out will not be detected again as a new low.
After finding any RSMC-numbered systems and masking out grid points surrounding
those systems, the tracker performs a two-step searching procedure over the remainder of
the model domain. First, a search is performed in order to identify any candidate
cyclones, and then a detailed tracking scan is performed in order to more accurately
determine the location and intensity of the candidate cyclones found in the first search
and to perform additional diagnostics.
In the first search to identify candidate cyclones, a looping procedure is conducted in
which the grid points are scanned to find the lowest mslp on the grid. For the grid point
that is found with the lowest mslp, a check is made to determine if there is at least one
closed mslp contour surrounding the system. If so, then this grid point is saved into an
array as a candidate low to be analyzed in the second step. The looping procedure then
continues searching for grid points with the next lowest mslp, and this procedure
continues until the lowest pressure that is found is greater than one half standard
deviation above the mean mslp on the grid.
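A minimal Python sketch of this first search step (the helper routines that test for a closed contour and that return the points to mask are stand-ins for check_closed_contour; the data structure is hypothetical):

def find_candidate_lows(mslp, has_closed_contour, points_to_mask):
    # mslp: {(i, j): sea-level pressure in mb} for the grid points to search.
    # has_closed_contour(pt): True if at least one closed mslp contour
    #   surrounds the point.
    # points_to_mask(pt): set of grid points inside those closed contours.
    vals = list(mslp.values())
    mean = sum(vals) / len(vals)
    std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    cutoff = mean + 0.5 * std
    masked, candidates = set(), []
    while True:
        remaining = [(p, pt) for pt, p in mslp.items() if pt not in masked]
        if not remaining:
            break
        p, pt = min(remaining)
        if p > cutoff:
            break       # stop once the lowest remaining pressure is too high
        if has_closed_contour(pt):
            candidates.append(pt)
            masked |= points_to_mask(pt)
        masked.add(pt)  # continue with the next lowest unmasked point
    return candidates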
In the second step, the candidate cyclones found in the first step are analyzed more
critically using the full tracking algorithm outlined above in Section 5.2 in order to more
accurately determine the location and intensity of the cyclone. The quality control checks
outlined above in Section 5.2.3.2 are employed to ensure that the system being tracked
has the fundamental characteristics of a cyclone and are used as input to determine
whether or not to continue tracking for a given system.
Some of the more critical checks for newly detected storms include the check for a closed
mslp contour as well as the check to determine if the azimuthally averaged 850 mb winds
are cyclonic and exceed a user-specified threshold. However, because incipient,
developing cyclones often have weak structures and vacillate in intensity, some leniency
is applied to these checks from one lead time to the next for the purpose of genesis
tracking. In particular, for the closed mslp contour check, the requirement is only that
the check return a positive result for at least 50% of the lead times over the past 24-h
period in order to continue tracking. For
the 850 mb circulation check, the threshold is that a positive result must be returned for at
least 75% of the lead times. The threshold is more rigorous for the 850 mb circulation
check than for the mslp check since 850 mb is above the boundary layer and the storm
circulation there is generally more inertially stable and less prone to high frequency
fluctuations in intensity than is the surface layer.
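These relaxed requirements amount to simple fractional tests over the recent lead times, as in this minimal Python sketch (names are illustrative):

def continue_genesis_tracking(contour_checks_24h, circulation_checks_24h):
    # Each argument is a list of booleans, one per lead time over the past
    # 24 h: True where the closed mslp contour check (first list) or the
    # 850-mb circulation check (second list) passed.
    def passing_fraction(history):
        return sum(history) / float(len(history)) if history else 1.0
    return (passing_fraction(contour_checks_24h) >= 0.50 and
            passing_fraction(circulation_checks_24h) >= 0.75)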
Additional diagnostics can be performed at this time in order to determine the
thermodynamic phase of the system, as described above in Section 5.5. Results from the
thermodynamic phase diagnostics are included in the output, as described below in
Section 5.7, but are not used in any algorithms for determining whether or not to continue
tracking a system.

5.7 Tracker Output
The motivation behind making this tracker operational in 1998 was to provide track and
intensity guidance from forecasts for a number of models in as short a time as possible.
One of the requirements was that the output data be in the same text ATCF format as that
used by NHC. The two primary output files from the tracker include one file in ATCF
format and another in a format just slightly modified from the ATCF format. The
advantage of using the ATCF format is that user forecasts can easily be compared with
those from some of the operational modeling centers.

5.7.1 Description of the ATCF format


The ATCF format contains information on the ocean basin, the storm number, the model
ID, the initial date, the forecast hour, and various track, intensity and wind radii guidance.
There can be up to three ATCF records that are output for each lead time. A sample
segment with some ATCF records from a GFDL hurricane model forecast for Hurricane
Emilia (2012) is shown here:

EP, 05, 2012071000, 03, GFDL, 000, 131N, 1118W, 98, 951, XX, 34, NEQ, 0080,
0072, 0057, 0078, 0, 0, 17, 0, 0, , 0, , 0, 0, , , , , 0, 0, 0, 0,
THERMO PARAMS, -9999, -9999, -9999, Y, 10, DT, -999

EP, 05, 2012071000, 03, GFDL, 000, 131N, 1118W, 98, 951, XX, 50, NEQ, 0056,
0047, 0036, 0053, 0, 0, 17, 0, 0, , 0, , 0, 0, , , , , 0, 0, 0, 0,
THERMO PARAMS, -9999, -9999, -9999, Y, 10, DT, -999

EP, 05, 2012071000, 03, GFDL, 000, 131N, 1118W, 98, 951, XX, 64, NEQ, 0040,
0028, 0017, 0037, 0, 0, 17, 0, 0, , 0, , 0, 0, , , , , 0, 0, 0, 0,
THERMO PARAMS, -9999, -9999, -9999, Y, 10, DT, -999

EP, 05, 2012071000, 03, GFDL, 006, 134N, 1129W, 80, 963, XX, 34, NEQ, 0100,
0084, 0057, 0088, 0, 0, 34, 0, 0, , 0, , 0, 0, , , , , 0, 0, 0, 0,
THERMO PARAMS, 45, 1405, 1742, Y, 10, DT, -999

EP, 05, 2012071000, 03, GFDL, 006, 134N, 1129W, 80, 963, XX, 50, NEQ, 0061,
0053, 0027, 0058, 0, 0, 34, 0, 0, , 0, , 0, 0, , , , , 0, 0, 0, 0,
THERMO PARAMS, 45, 1405, 1742, Y, 10, DT, -999

EP, 05, 2012071000, 03, GFDL, 006, 134N, 1129W, 80, 963, XX, 64, NEQ, 0045,
0034, 0008, 0038, 0, 0, 34, 0, 0, , 0, , 0, 0, , , , , 0, 0, 0, 0,
THERMO PARAMS, 45, 1405, 1742, Y, 10, DT, -999

EP, 05, 2012071000, 03, GFDL, 012, 137N, 1137W, 78, 964, XX, 34, NEQ, 0084,
0071, 0068, 0078, 0, 0, 22, 0, 0, , 0, , 0, 0, , , , , 0, 0, 0, 0,
THERMO PARAMS, 26, 1609, 1879, Y, 10, DT, -999

EP, 05, 2012071000, 03, GFDL, 012, 137N, 1137W, 78, 964, XX, 50, NEQ, 0054,
0048, 0041, 0050, 0, 0, 22, 0, 0, , 0, , 0, 0, , , , , 0, 0, 0, 0,
THERMO PARAMS, 26, 1609, 1879, Y, 10, DT, -999

EP, 05, 2012071000, 03, GFDL, 012, 137N, 1137W, 78, 964, XX, 64, NEQ, 0039,
0033, 0023, 0036, 0, 0, 22, 0, 0, , 0, , 0, 0, , , , , 0, 0, 0, 0,
THERMO PARAMS, 26, 1609, 1879, Y, 10, DT, -999

The first two columns represent the ATCF ID, here indicating that Emilia was the 5th
named storm in the eastern Pacific basin in 2012. The next column indicates the initial
time for this forecast. The ‘03’ is constant and simply indicates that this record contains
model forecast data. After the column with the model ID is a column indicating the lead
time for each forecast record. Note that in the current version of the tracker, the
frequency at which ATCF data are written out is defined by the atcffreq variable in the
namelist. That variable is specified as an integer equal to the output frequency in hours
multiplied by 100. The next two
columns indicate the latitude and longitude, respectively, in degrees that have been
multiplied by 10. The next two columns, respectively, are the maximum wind speed, in
kt, and the minimum sea-level pressure, in mb. The “XX” is a placeholder for character
strings that indicate whether the storm is a depression, tropical storm, hurricane,
subtropical storm, etc. Currently, that storm type character string is only used for the
observed storm data in the NHC Best Track data set.
The next six columns are for reporting wind radii forecast data. The first in those six
columns is an identifier that indicates whether this record contains radii for the 34-, 50- or
64-kt wind thresholds. The “NEQ” indicates that the four radii values that follow will
begin in the northeast quadrant. Each subsequent value is from the next quadrant
clockwise. The radii are listed in units of nautical miles (n mi). If the tracker has
detected winds of at least 50 kt in the 10 m wind data, then an additional record will be
output for this lead time. This record is identical to the first record, with the exception
that the wind radii threshold identifier is ‘50’ instead of ‘34’, and the radii values are
included for the 50-kt threshold. Similarly, if the tracker has detected winds of at least 64
kt at this lead time, then an additional record is output containing those 64-kt wind radii.
For any threshold that is exceeded in at least one quadrant, a value of zero is output for
each remaining quadrant in which that threshold is not exceeded.
After the four quadrant values for wind radii, there are two placeholders that are always
zero, and then a column that indicates the radius of maximum winds, in n mi. This value
is reported using the location of the maximum wind speed that the tracker returned.
After the radius of maximum winds, there is a series of commas and zeroes, followed by
a user-defined section of the ATCF record, which is used here to output the values for the
thermodynamic diagnostics. The first three values listed after the “THERMO PARAMS”
character string are the three cyclone phase space parameters, and all values shown have
been multiplied by a factor of 10. The values are listed in the following order: (1)
Parameter B (left-right thickness asymmetry); (2) Thermal wind (warm/cold core) value
for lower troposphere (900-600 mb); and (3) Thermal wind value for upper troposphere
(600-300 mb). Note that for the first lead time listed for a given model storm, the
cyclone phase space parameters will always have undefined values of -9999. The reason
for this is that the calculation of Parameter B is highly sensitive to the direction of
motion, and for the first lead time listed for a storm, it is not possible to know which
direction the model storm is heading.
After the cyclone phase space parameters is a character that indicates whether or not the
simple check for a warm core in the 300-500 mb layer was successful. The possible
values listed here are ‘Y’, ‘N’, and a ‘U’ for ‘undetermined’ if, for any reason, the warm
core check was unable to be performed. The next parameter indicates the value of the
contour interval that was used in performing the check for the warm core in the 300-500
mb layer (that value is listed multiplied by 10). The last two parameters are
currently unsupported and will always be listed as “DT, -999”.
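As an aid to reading these records (this parser is an illustration, not part of the tracker), a minimal Python sketch that recovers a few of the fields described above from one ATCF line is:

def parse_atcf_latlon(token):
    # '131N' -> 13.1, '1118W' -> -111.8 (values are stored as degrees * 10).
    value = float(token[:-1]) / 10.0
    return value if token[-1] in ("N", "E") else -value

def parse_atcf_record(line):
    f = [t.strip() for t in line.split(",")]
    return {
        "basin": f[0], "storm_number": f[1], "initial_time": f[2],
        "model": f[4], "lead_time_h": int(f[5]),
        "lat": parse_atcf_latlon(f[6]), "lon": parse_atcf_latlon(f[7]),
        "vmax_kt": int(f[8]), "min_slp_mb": int(f[9]),
        "wind_threshold_kt": int(f[11]),
        "quadrant_radii_nm": [int(r) for r in f[13:17]],
    }

rec = parse_atcf_record("EP, 05, 2012071000, 03, GFDL, 000, 131N, 1118W, 98, "
                        "951, XX, 34, NEQ, 0080, 0072, 0057, 0078")
print(rec["lat"], rec["lon"], rec["vmax_kt"])    # 13.1 -111.8 98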

5.7.2 Output file with a modified ATCF format for sub-hourly lead times
As described in Section 5.2, the tracker can process lead times that are not regular
intervals. In addition, it can process sub-hourly lead times (e.g., tracking using data
every 20 minutes). However, the standard ATCF format described in the previous
section cannot represent non-integral, sub-hourly lead times. To handle this problem, a
separate file with a format just slightly modified from the standard ATCF format is also
output. The only difference is that the lead time in the modified format contains five
digits instead of three and is represented as the lead time * 100. For example, a lead time
of 34 hours, 15 minutes would be 34.25 hours and would be represented in the modified
ATCF format as 03425.
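The conversion is simply the lead time in hours multiplied by 100 and zero-padded to five digits, as in this minimal Python sketch:

def modified_atcf_lead_time(hours, minutes=0):
    # 34 h 15 min -> 34.25 h -> '03425' in the modified ATCF format.
    return "%05d" % round((hours + minutes / 60.0) * 100)

print(modified_atcf_lead_time(34, 15))   # prints 03425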
To summarize, the modified ATCF format can be output at every lead time, including
sub-hourly, non-integral lead times. The standard ATCF format was only designed to
handle integral, hourly lead times. Therefore, if a user is processing code that has data at
sub-hourly temporal resolutions, a standard ATCF formatted record will not be output
for those sub-hourly times.

5.7.3 Output file with a modified ATCF format for use with genesis
tracking features
A modified ATCF format is required for the output from genesis tracking runs. In these
runs, there will often be a mixture of RSMC-numbered storms as well as new storms that
the model develops on its own. For the model-generated storms, a new storm-naming
convention is devised to account for the fact that these storms have no previous, set
identity as assigned by an RSMC, and the identifiers for the storms must be unique.

Included below is an example of output from a genesis tracking run for the NCEP GFS
model. Shown is the output for one model-generated storm as well as for one RSMC-
numbered storm, 99L. The first column is reserved for what will either be the ATCF
basin ID (AL, EP, WP, etc) for an RSMC-numbered storm or an identifier to indicate the
type of tracking run that is being performed (“TG” = tropical cyclogenesis). The second
column will either be the ATCF ID for an RSMC-numbered storm (e.g., 99L) or a
tracker-defined cyclone ID for this particular tracking run. This cyclone ID is specific to
this particular tracking run only, and it should not be used for any purposes of counting
storms throughout a season, since that number may be repeated in the next run of the
tracker, but for a different storm.
The third column contains the unique identifier for the storm. Using
2012080100_F150_138N_0805W_FOF from the first record below as an example, the first
element indicates the initial date/time group for this particular tracker run, the “F150”
indicates the forecast hour at which this particular storm was first detected in the model,
and the next two elements (“138N_0805W”) indicate the latitude and longitude at which
the storm was first detected. The “FOF” indicates that this storm was “Found On the
Fly” by the tracker in a genesis tracking run, as opposed to being tracked from the initial
time as an RSMC-numbered storm.
After the unique identifier in the third column, the format is the same as the standard
ATCF described above in Section 5.7.1, through and including the wind radii values.
After the wind radii values, the next two parameters listed are for the pressure and radius
(n mi) of the last closed isobar (1009 and 196 in the first record below), and that is
followed by the radius of maximum winds (n mi).
The next four values listed are for the thermodynamic diagnostics. The first three values
listed are the three cyclone phase space parameters, and all values shown have been
multiplied by a factor of 10. The values are listed in the following order: (1) Parameter
B (left-right thickness asymmetry); (2) Thermal wind (warm/cold core) value for lower
troposphere (900-600 mb); and (3) Thermal wind value for upper troposphere (600-300
mb). Refer to Hart (2003) for interpretation of the three cyclone phase space parameters.
After the cyclone phase space parameters is a character that indicates whether or not the
simple check for a warm core in the 300-500 mb layer was successful. The possible
values listed here are ‘Y’, ‘N’, and a ‘U’ for ‘undetermined’ if, for any reason, the warm
core check was unable to be performed.
After the warm core flag, the next two values (259 and 31 in record 1) indicate the
direction and translation speed of storm motion, with the speed listed in ms-1 * 10. The
final four values (112, 144, 69, 89) are, respectively, the values for the mean relative
vorticity returned from the tracker at 850 mb, the gridpoint maximum vorticity near the
cyclone center at 850 mb, the mean relative vorticity returned from the tracker at 700 mb,
and the gridpoint maximum vorticity near the cyclone center at 700 mb. All vorticity
values have been scaled by 1E6.
TG, 0048, 2012080100_F150_138N_0805W_FOF, 2012080100, 03, GFSO, 150,
138N, 805W, 18, 1008, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1009,

196, 80, -999, -9999, -9999, N, 259, 31, 112, 144, 69,
89

TG, 0048, 2012080100_F150_138N_0805W_FOF, 2012080100, 03, GFSO, 156,


134N, 813W, 17, 1008, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1010,
251, 98, 19, 106, -89, N, 252, 36, 126, 168, 67,
93

TG, 0048, 2012080100_F150_138N_0805W_FOF, 2012080100, 03, GFSO, 162,


134N, 816W, 17, 1008, XX, 34, NEQ, 0000, 0000, 0000, 0000, -999, -
999, 55, -11, 162, 77, N, 266, 17, 110, 150, 70,
91
TG, 0048, 2012080100_F150_138N_0805W_FOF, 2012080100, 03, GFSO, 168,
133N, 818W, 16, 1007, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1008,
92, 74, -27, 95, -26, N, 253, 16, 96, 118, 87, 113
TG, 0048, 2012080100_F150_138N_0805W_FOF, 2012080100, 03, GFSO, 174,
133N, 822W, 17, 1008, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1010,
378, 56, -6, 100, -102, Y, 275, 24, 99, 139, 83,
105
TG, 0048, 2012080100_F150_138N_0805W_FOF, 2012080100, 03, GFSO, 180,
136N, 826W, 20, 1008, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1009,
118, 57, -19, 123, -131, Y, 293, 29, 111, 150, 87,
113
TG, 0048, 2012080100_F150_138N_0805W_FOF, 2012080100, 03, GFSO, 192,
140N, 835W, 14, 1008, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1009,
74, 62, -25, 137, -141, N, 294, 24, 108, 139, 96, 126
TG, 0048, 2012080100_F150_138N_0805W_FOF, 2012080100, 03, GFSO, 204,
143N, 846W, 17, 1009, XX, 34, NEQ, 0000, 0000, 0000, 0000, -999, -
999, 159, -3, -41, -106, Y, 292, 30, 64, 73, 62,
68
TG, 0048, 2012080100_F150_138N_0805W_FOF, 2012080100, 03, GFSO, 216,
153N, 859W, 14, 1009, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1012,
89, 155, 30, -19, -118, Y, 293, 31, 51, 56, 50, 55
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 000,
105N, 430W, 28, 1012, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1013,
68, 92, -999, -9999, -9999, N, 279, 83, 221, 267, 207, 258
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 006,
110N, 443W, 33, 1011, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1013,
178, 81, 41, 73, 112, Y, 286, 73, 265, 402, 230,
352
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 012,
113N, 459W, 33, 1012, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1014,
122, 68, 41, 278, 200, N, 282, 78, 302, 403, 257,
358
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 018,
116N, 474W, 34, 1010, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1012,
104, 61, 49, 379, 174, N, 280, 72, 283, 390, 225,
291
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 024,
115N, 488W, 31, 1011, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1013,
107, 72, 47, 427, 21, N, 271, 70, 255, 330, 189,
239
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 030,
117N, 501W, 29, 1009, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1011,

334, 79, 7, 494, 67, N, 278, 67, 240, 323, 175,
233
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 036,
121N, 511W, 36, 1011, XX, 34, NEQ, 0083, 0000, 0000, 0000, 1013,
315, 62, 2, 471, 12, Y, 284, 62, 290, 505, 231,
400
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 042,
123N, 526W, 39, 1009, XX, 34, NEQ, 0085, 0000, 0000, 0073, 1011,
114, 70, -10, 599, 217, Y, 277, 71, 359, 640, 302,
536
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 048,
124N, 542W, 43, 1010, XX, 34, NEQ, 0094, 0000, 0000, 0072, 1012,
102, 70, -17, 620, 154, Y, 269, 78, 376, 627, 323,
543
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 054,
123N, 560W, 39, 1008, XX, 34, NEQ, 0080, 0000, 0000, 0081, 1011,
216, 53, -31, 778, 249, Y, 270, 82, 336, 523, 280,
472
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 060,
121N, 579W, 39, 1010, XX, 34, NEQ, 0075, 0000, 0000, 0067, 1013,
249, 56, -37, 810, 150, Y, 270, 84, 298, 457, 253,
398
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 066,
121N, 596W, 34, 1009, XX, 34, NEQ, 0065, 0000, 0000, 0000, 1010,
71, 65, -41, 729, 63, N, 273, 77, 264, 415, 208, 320
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 072,
122N, 611W, 34, 1010, XX, 34, NEQ, 0061, 0000, 0000, 0000, 1012,
146, 60, -34, 882, 35, N, 274, 71, 242, 376, 186,
273
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 078,
125N, 626W, 31, 1009, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1011,
228, 49, -48, 893, 12, N, 282, 74, 240, 342, 178,
262
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 084,
127N, 644W, 30, 1011, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1013,
125, 67, -23, 864, 3, N, 282, 80, 214, 289, 164,
213
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 090,
131N, 659W, 29, 1009, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1010,
66, 86, -32, 607, 86, N, 288, 73, 199, 251, 152, 204
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 096,
134N, 674W, 29, 1010, XX, 34, NEQ, 0000, 0000, 0000, 0000, -999, -
999, 108, -48, 688, 59, N, 282, 71, 194, 249, 140,
178
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 102,
137N, 692W, 31, 1009, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1010,
73, 88, -51, 423, 123, N, 282, 79, 182, 250, 142, 191
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 108,
140N, 711W, 29, 1011, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1012,
83, 85, -45, 462, 49, N, 283, 84, 159, 217, 112, 154
AL, 99L, 2012080100_F000_097N_0430W_99L, 2012080100, 03, GFSO, 114,
145N, 729W, 28, 1010, XX, 34, NEQ, 0000, 0000, 0000, 0000, 1012,
83, 149, -74, 327, 174, N, 287, 80, 143, 204, 87, 125

6.0 The idealized HWRF framework
A mass-consistent idealized tropical cyclone initialization is available within the HWRF
framework. The idealized simulation is configured for the operational HWRF triple-
domain configuration with 27-, 9-, and 3-km grid spacing. The sea-surface temperature is
constant in time and space (currently 302 K) as ocean coupling is not yet supported for
the idealized configuration in HWRF. Initial conditions are specified using an idealized
vortex superposed on a base state quiescent sounding.
To initialize the idealized vortex, the nonlinear balance equation in the pressure-based
sigma coordinate system described in Wang (1995), and reported briefly in Bao et al.
(2012) and Gopalakrishnan et al. (2011 and 2013), is solved within the rotated latitude–
longitude E-grid framework. Sundqvist (1975) first used the balance equation to
determine the wind (in terms of stream function) and the mass field (geopotential height).
Kurihara and Bender (1980) adopted the inverse balance procedure to obtain the mass
field from the wind field and then solved for the surface pressure at the lower boundary of
the sigma coordinates and the geopotential elsewhere. A variant of this procedure, discussed in Wang
(1995), is adopted in the HWRF system.

Figure 6.1. Vertical structure of the pressure-sigma coordinate used to create the
idealized vortex.
Figure 6.1 provides an overview of the vertical structure of the sigma coordinate system
used in the idealized initialization. The atmosphere is divided into M layers. The initial
base state temperature (To), along with the forcing term G that approximates the
momentum fields, is provided at the interfaces. The zonal (u) and meridional (v) wind
components, along with the temperature perturbation (T’) from the initial base state, are
computed at half levels between the interfaces. The forcing term and the pressure at the
lower boundary (σ=1) are represented by Gd and p*, respectively. The base state
temperature and moisture fields, required in the hydrostatic equation to compute the
geopotential from temperature and pressure, are prescribed in file sound.d. Wang (1995)
provides an extensive overview of the initialization procedure. We describe here only the
relevant equations as used in the code module_initialize_tropical_cyclone.F.
The initial wind field, in cylindrical polar coordinates, is prescribed at each sigma level
by:

6.1

where Vm is the maximum wind at radius rm. Both these variables are supplied in file
input.d. Parameter b is set to 1. The momentum field is a function of the u and v wind
components and is given by:

6.2

where J is the Jacobian, f is the Coriolis parameter, ζ is the vorticity, and β is the
meridional gradient of the Coriolis parameter.

6.3

The pressure at σ=1 is obtained by solving the Poisson equation, where the subscript d
denotes a variable evaluated at σ=1 and R is the gas constant. The temperature
perturbations at the sigma levels are then determined by solving the Poisson equation:

6.4

Finally, using the hydrostatic approximation, the geopotential heights are obtained from
the total temperature and moisture fields.

Even though the generation of the idealized initial conditions is based on the base state
sounding provided in file sound.d and on the vortex properties specified in file input.d, it
is still necessary to provide the model with initial and boundary conditions from the GFS.
The GFS-based initial and boundary conditions, processed through WPS, are overwritten
with the idealized initialization in the ideal_nmm_tropical_cyclone code as explained in
the HWRF Users Guide. The lateral boundary conditions used in the HWRF idealized
simulation are the same as used in real data cases. This inevitably leads to some reflection
when gravity waves emanating from the vortex reach the outer domain lateral boundaries.
In the experiments described by Bao et al. (2012) and Gopalakrishnan et al. (2011, 2013),
the simulations were performed on an f-plane centered at 12.5°. The idealized vortex
initial intensity was 20 m s-1 with a radius of maximum winds of about 90 km, embedded
either in a uniform easterly flow of 4 m s-1 or in a quiescent environment. The base state temperature
and humidity profile was based on Jordan’s Caribbean sounding (Gray et al. 1975). In
their experiments, the sea surface temperature was set to 302 K, and no land was present
in the domain.
The variables that can readily be customized for the HWRF idealized capability are the
base state sounding thermodynamic structure, the choice of f- or β-plane, the latitude of
the storm, the radius of maximum wind, and the maximum wind speed. Sea surface
temperature can be changed in the source code. Additional settings may be changed by
altering the source code but these changes are not currently part of the code supported at
the Developmental Testbed Center. Examples of possible changes are the introduction of
non-zero base state winds, a land surface, or coupling to an ocean model. Finally, all the
operational physics, as well as the supported experimental physics options in HWRF, can
be used in the idealized framework.

7.0 References
Arakawa, A. and W. H. Schubert, 1974: Interaction of a Cumulus Cloud Ensemble with
the Large-Scale Environment, Part I. J. Atmos. Sci., 31, 674-701.
Bao, J.-W., S. G. Gopalakrishnan, S. A. Michelson, F. D. Marks, and M. T. Montgomery,
2012: Impact of physics representations in the HWRFX on simulated hurricane
structure and pressure–wind relationships. Mon. Wea. Rev., 140, 3278-3299
Barnes, S.L., 1964: A technique for maximizing details in numerical weather map
analysis. J. Appl. Meteor., 3, 396-409.
Barnes, S.L., 1973: Mesoscale objective analysis using weighted time-series
observations. NOAA Tech. Memo. ERL NSSL-62, National Severe Storms
Laboratory, Norman, OK 73069, 60 pp. [NTIS COM-73-10781].
Bender, M. A. and I. Ginis, 2000: Real case simulation of hurricane-ocean interaction
using a high-resolution coupled model: Effects on hurricane intensity. Mon. Wea.
Rev., 128, 917-946.
Bender, M. A., I. Ginis, R. Tuleya, B. Thomas and T. Marchok, 2007: The operational
GFDL Coupled Hurricane-Ocean Prediction System and a summary of its
performance. Mon. Wea. Rev., 135, 3965-3989.
Bister, M. and K. A. Emanuel, 1998: Dissipative heating and hurricane intensity. Meteor.
Atmos. Phys., 65, 233-240.
Black, P. G., E. A. D’Asaro, W. M. Drennan, J. R. French, T. B. Sanford, E. J. Terrill, P.
P. Niiler, E. J. Walsh and J. Zhang, 2007: Air-Sea Exchange in Hurricanes:
Synthesis of Observations from the Coupled Boundary Layer Air-Sea Transfer
Experiment. Bull. Amer. Meteor. Soc., 88, 357-374.
Blumberg, A. F. and G. L. Mellor, 1987: A description of a three-dimensional coastal
ocean circulation model. Three-Dimensional Coastal Ocean Models. N. Heaps,
Ed., Vol. 4, Amer. Geophys. Union, 1-16.
Boyer, T. P. and S. Levitus, 1997: Objective Analysis of Temperature and Salinity for the
World Ocean on a 1/4° Grid. NOAA Atlas NESDIS 11, 62 pp.
Braun, Scott A. and W.-K. Tao, 2000: Sensitivity of high-resolution simulations of
hurricane Bob (1991) to planetary boundary layer parameterizations. Mon. Wea.
Rev., 128, 3941–3961.
Buehner, M., 2005: Ensemble-derived stationary and flow-dependent background-error
covariances: Evaluation in a quasi-operational NWP setting. Quart. J. Roy.
Meteor. Soc., 131, 1013–1043.
Deardorff, J. W., 1978: Efficient prediction of ground surface temperature and moisture,
with inclusion of a layer of vegetation. J. Geophys. Res., 83, 1889-1903.
Donelan, M. A., B. K. Haus, N. Reul, W. J. Plant, M. Stiassnie, H. C. Graber, O. B.
Brown and E. S. Saltzman, 2004: On the limiting aerodynamic roughness of the
ocean in very strong winds, Geophys. Res. Lett., 31, L18306.
Emanuel, K. A., 2003: A similarity hypothesis for air-sea exchange at extreme wind
speeds, J. Atmos. Sci., 60, 1420-1428.
Falkovich, A., I. Ginis and S. Lord, 2005: Ocean data assimilation and initialization
procedure for the Coupled GFDL/URI Hurricane Prediction System. J. Atmos.
Oceanic Technol., 22, 1918-1932.
Fels, S. B. and M. D. Schwarzkopf, 1975: The Simplified Exchange Approximation: A
New Method for Radiative Transfer Calculations, J. Atmos. Sci., 32, 1475–1488.
Ferrier, B. S., 2005: An efficient mixed-phase cloud and precipitation scheme for use in
operational NWP models. Eos, Trans. AGU, 86(18), Jt. Assem. Suppl., A42A-02.
Gamache, J. F., F. D. Marks Jr., and F. Roux, 1995: Comparison of three airborne
Doppler sampling techniques with airborne in situ wind observations in Hurricane
Gustav (1990). J. Atmos. Oceanic Technol., 12, 171–181.
Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three
dimensions. Quart. J. Roy. Meteor. Soc., 125, 723–757.
Ginis, I, A. P. Khain and E. Morozovsky, 2004: Effects of large eddies on the structure of
the marine boundary layer under strong wind conditions, J. Atmos. Sci., 61, 3049–
3063.
Gopalakrishnan, S. G., F. Marks, X. Zhang, J.-W. Bao, K.-S. Yeh, and R. Atlas, 2011:
The experimental HWRF System: a study on the influence of horizontal
resolution on the structure and intensity changes in tropical cyclones using an
idealized framework. Mon. Wea. Rev., 139, 1762–1784.
Gopalakrishnan, S. G., F. Marks Jr., J. A. Zhang, X. Zhang, J.-W. Bao, and V.
Tallapragada, 2013: Study of the impacts of vertical diffusion on the structure and
intensity of the tropical cyclones using the high-resolution HWRF system. J. Atmos.
Sci., 70, 524-541.
Gopalakrishnan, S. G., D. P. Bacon, N. N. Ahmad, Z. Boybeyi, T. J. Dunn, M. S. Hall, Y.
Jin, P. C. S. Lee, R. V. Madala, R. A. Sarma, M. D. Turner and T. Wait, 2002: An
Operational Multi-Scale atmospheric model with grid adaptivity for hurricane
forecasting, Mon. Wea. Rev., 130, 1830-1847.
Gray, W., E. Ruprecht, and R. Phelps, 1975: Relative humidity in tropical weather
systems. Mon. Wea. Rev., 103, 685–690.

Grell, G.A., 1993: Prognostic evaluation of assumptions used by cumulus
parameterizations. Mon. Wea. Rev., 121, 764-787.
Han, J. and H.-L. Pan, 2006: Sensitivity of hurricane intensity forecasts to convective
momentum transport parameterization. Mon. Wea. Rev., 134, 664-674.
Han, J. and H.-L. Pan, 2011: Revision of Convection and Vertical Diffusion Schemes in
the NCEP Global Forecast System. Wea. and Forec., 26, 520-533.
Hart, R.E., 2003: A cyclone phase space derived from thermal wind and thermal
asymmetry. Mon. Wea. Rev., 131, 585-616.
Haus, B., D. Jeong, M. A. Donelan, J. A. Zhang, and I. Savelyev, 2010: The relative rates
of air-sea heat transfer and frictional drag in very high winds. Geophys. Res.
Lett., 37, doi:10.1029/2009GL042206.
Hong, S.-Y. and H.-L. Pan, 1996: Nonlocal boundary layer vertical diffusion in a
medium-range forecast model. Mon. Wea. Rev., 124, 2322-2339.
Hong, S.-Y. and H.-L. Pan, 1998: Convective trigger function for a mass flux cumulus
parameterization scheme. Mon. Wea. Rev., 126, 2621-2639.
Janjic, Z. I., 1990a: The step-mountain coordinate: physical package. Mon. Wea. Rev.,
118, 1429-1443.
Janjic, Z. I., 1990b: The step-mountain coordinate model: further developments of the
convection, viscous sublayer and turbulence closure schemes. Mon. Wea. Rev.,
122, 927-945.
Janjic, Z. I., 1994: The step-mountain Eta coordinate model – further developments of
the convection, viscous sublayer and turbulence closure schemes. Mon. Wea.
Rev., 122(5), 927-945.
Janjic, Z. I., 1996a: The Mellor-Yamada level 2.5 scheme in the NCEP Eta model.
Preprints, 11th Conf. on Numerical Weather Prediction, Norfolk, VA, 19-23
August 1996; Amer. Meteor. Soc. Boston, MA, 333-334.
Janjic, Z. I., 1996b: The surface layer in the NCEP Eta model. Preprints, 11th Conf. on
Numerical Weather Prediction, Norfolk, VA, 19-23 August 1996; Amer. Meteor.
Soc. Boston, MA, 354-355.
Janjic, Z. I., 2000: Comments on "Development and Evaluation of a Convection Scheme
for Use in Climate Models". J. Atmos. Sci., 57, p. 3686.
Janjic, Z. I., 2002: Nonsingular Implementation of the Mellor–Yamada Level 2.5 Scheme
in the NCEP Meso model, NCEP Office Note, No. 437, 61 pp.
Janjic, Z. I., R. Gall and M. E. Pyle, 2010: Scientific Documentation for the NMM
Solver. NCAR Technical Note NO. NCAR/TN–477+STR, 1-125, 53 pp.
[Available from NCAR, P.O. Box 3000, Boulder, CO 80307].

Kain, J. S., and J. M. Fritsch, 1993: Convective parameterization for mesoscale models:
The Kain-Fritsch scheme. The representation of cumulus convection in numerical
models, K. A. Emanuel and D. J. Raymond, Eds., Amer. Meteor. Soc., 246 pp.
Kleist, D. T., D. F. Parrish, J. C. Derber, R. Treadon, R.M. Errico and R. Yang, 2009:
Introduction of the GSI into the NCEP Global Data Assimilation System. Mon.
Wea. Rev., 24, 1691-1705.
Kurihara, Y. and R. E. Tuleya, 1974: Structure of a tropical cyclone developed in a three-
dimensional numerical simulation model. J. Atmos. Sci., 31, 893–919.
Kurihara, Y., and M. A. Bender, 1980: Use of a movable nested-mesh model for tracking
a small vortex. Mon. Wea. Rev., 108, 1792–1809.
Kwon, Y. C., S. Lord, B. Lapenta, V. Tallapragada, Q. Liu and Z. Zhang, 2010:
Sensitivity of Air-Sea Exchange Coefficients (Cd and Ch) on Hurricane Intensity.
29th Conference on Hurricanes and Tropical Meteorology, 13C.1.
Lacis, A. A. and J. E. Hansen, 1974: A parameterization for the absorption of solar
radiation in the earth’s atmosphere. J. Atmos. Sci., 31, 118–133.
Lorenc, A. C., 2003: The potential of the ensemble Kalman filter for NWP—A
comparison with 4D-VAR. Quart. J. Roy. Meteor. Soc., 129, 3183–3203.
Lin, Y.-L., 2007: Mesoscale Dynamics. Cambridge University Press.
Liu, Q., S. Lord, N. Surgi, Y. Zhu, R. Wobus, Z. Toth and T. Marchok, 2006b: Hurricane
relocation in global ensemble forecast system. Preprints, 27th Conf. on
Hurricanes and Tropical Meteorology, Monterey, CA, Amer. Meteor. Soc., P5.13
Liu, Q., T. Marchok, H.-L. Pan, M. Bender and S. Lord, 2000: Improvements in
Hurricane Initialization and Forecasting at NCEP with Global and Regional
(GFDL) models. NCEP Office Note 472.
Liu, Q., N. Surgi, S. Lord, W.-S. Wu, S. Parrish, S. Gopalakrishnan, J. Waldrop and J.
Gamache, 2006a: Hurricane Initialization in HWRF Model. Preprints, 27th
Conference on Hurricanes and Tropical Meteorology, Monterey, CA.
Makin, V. K., 2005: A note on the drag of the sea surface at hurricane winds, Boundary-
Layer Meteorol., 115, 169-176.
Marchok, T. P., 2002: How the NCEP tropical cyclone tracker works. Preprints, 25th
Conf. on Hurricanes and Tropical Meteorology, San Diego, CA, 21-22.
Mellor, G. L., 1991: An equation of state for numerical models of oceans and estuaries. J.
Atmos. Oceanic Technol., 8, 609-611.
Mellor, G. L., 2004: Users guide for a three-dimensional, primitive equation, numerical
ocean model (June 2004 version). Prog. in Atmos. and Ocean. Sci, Princeton
University, 56 pp.

Mellor, G. L. and T. Yamada, 1982: Development of a turbulence closure model for
geophysical fluid problems. Rev. Geophys. Space Phys., 20, 851-875.
Michalakes, J., J. Dudhia, D. Gill, T. Henderson, J. Klemp, W. Skamarock and W. Wang,
2004: The Weather Research and Forecast Model: Software Architecture and
Performance. Eleventh ECMWF Workshop on the Use of High Performance
Computing in Meteorology, Reading, U.K., Ed. George Mozdzynski.
Moon, I.-J., T. Hara, I. Ginis, S. E. Belcher and H. Tolman, 2004a: Effect of surface
waves on air–sea momentum exchange. Part I: Effect of mature and growing seas,
J. Atmos. Sci., 61, 2321–2333.
Moon, I.-J., I. Ginis and T. Hara, 2004b: Effect of surface waves on air–sea momentum
exchange. II: Behavior of drag coefficient under tropical cyclones, J. Atmos. Sci.,
61, 2334–2348.
Moon, I., I. Ginis, T. Hara and B. Thomas, 2007: Physics-based parameterization of air-
sea momentum flux at high wind speeds and its impact on hurricane intensity
predictions. Mon. Wea. Rev., 135, 2869-2878.
Nolan, D. S., J. A. Zhang, D. P. Stern, 2009a: Evaluation of planetary boundary layer
parameterizations in tropical cyclones by comparison of in situ observations and
high-resolution simulations of Hurricane Isabel (2003). Part I: initialization,
maximum winds, and the outer-core boundary layer. Mon. Wea. Rev., 137, 3651–
3674.
Nolan, D. S., D. P. Stern, and J. A. Zhang, 2009b: Evaluation of planetary boundary layer
parameterizations in tropical cyclones by comparison of in situ observations and
high-resolution simulations of Hurricane Isabel (2003). Part II: inner-core
boundary layer and eyewall structure. Mon. Wea. Rev., 137, 3675–3698.
Pan, H.-L. and J. Wu, 1995: Implementing a Mass Flux Convection Parameterization
Package for the NMC Medium-Range Forecast Model. NMC Office Note, No.
409, 40 pp. [Available from NCEP, 5200 Auth Road, Washington, DC 20233]
Pan, H.-L, 2003: The GFS Atmospheric Model. NCEP Office Note, No. 442, 14 pp.
[Available from NCEP, 5200 Auth Road, Washington, DC 20233].
Parrish, D. F. and J. C. Derber, 1992: The National Meteorological Center’s spectral
statistical-interpolation system. Mon. Wea. Rev., 120, 1747–1763.
Phillips, N. A., 1957: A coordinate system having some special advantages for numerical
forecasting. J. Meteor., 14, 184-185.
Powell, M. D., P. J. Vickery and T. A. Reinhold, 2003: Reduced drag coefficient for high
wind speeds in tropical cyclones, Nature, 422, 279-283.
Price, J., 1981: Upper ocean response to a hurricane. J. Phys. Oceanogr., 11, 153-175.
Reynolds, R. W. and T. M. Smith, 1994: Improved global sea surface temperature
analyses using optimum interpolation. J. Climate, 7, 929-948.
Roberts, R. E., J. E. A. Selby and L. M. Biberman, 1976: Infrared continuum absorption
by atmospheric water vapor in the 8–12 μm range. Applied Optics, 1-91.
Rodgers, C. D., 1968: Some extensions and applications of the new random model for
molecular band transmission. Quart. J. Roy. Meteor. Soc., 94, 99–102.
Ryan, B. F., Wyser, K. and P. Yang, 1996: On the global variation of precipitating layer
clouds. Bull. Amer. Meteor. Soc., 77, 53-70.
Sasamori, T., J. London and D. V. Hoyt, 1972: Radiation budget of the Southern
Hemisphere. Meteor. Monogr, 35, 9–23.
Schwarzkopf, M. D. and S. Fels, 1985: Improvements to the algorithm for computing
CO2 transmissivities and cooling rates. J. Geophys. Res., 90(C10), 10,541-
10,550.
Schwarzkopf, M. D. and S. Fels, 1991: The simplified exchange method revisited: An
accurate, rapid method for computation of infrared cooling rates and fluxes.
J. Geophys. Res., 96(D5), 9075-9096.
Sirutis, J. J. and K. Miyakoda, 1990: Subgrid scale physics in 1-month forecasts. Part
I: Experiment with four parameterization packages. Mon. Wea. Rev.,
118(5), 1043-1064.
Skamarock, W. C., J. B. Klemp, J. Dudhia, D. O. Gill, D. M. Barker, M. G. Duda, X.-Y.
Huang, W. Wang and J. G. Powers, 2008: A Description of the Advanced
Research WRF Version 3. NCAR Technical Note NCAR/TN-475+STR, 1–125.
Smagorinsky, J., 1963: General circulation experiments with the primitive equations.
Part I: The basic experiment. Mon. Wea. Rev., 91, 99–164.
Sundqvist, H., 1975: Initialization for models using sigma as the vertical coordinate. J.
Appl. Meteor., 14, 153–158.
Teague, W. J., M. J. Carron and P. J. Hogan, 1990: A comparison between the
Generalized Digital Environmental Model and Levitus climatologies. J. Geophys.
Res., 95, 7167–7183.
Tiedtke, M., 1989: A comprehensive mass flux scheme for cumulus parameterization in
large-scale models. Mon. Wea. Rev., 117, 1779–1800.
Troen, I. and L. Mahrt, 1986: A simple model of the atmospheric boundary layer:
Sensitivity to surface evaporation. Bound. Layer Meteor., 37, 129-148.
Tuleya, R. E., 1994: Tropical storm development and decay: Sensitivity to surface
boundary conditions. Mon. Wea. Rev., 122, 291–304.
Vickers, D. and L. Mahrt, 2004: Evaluating formulations of the stable boundary layer
height. J. Appl. Meteor., 43, 1736–1749.

Wang, X., C. Snyder, and T. M. Hamill, 2007a: On the theoretical equivalence of
differently proposed ensemble/3D-Var hybrid analysis schemes. Mon. Wea. Rev.,
135, 222–227.
Wang, X., T. M. Hamill, J. S. Whitaker, and C. H. Bishop, 2007b: A comparison of
hybrid ensemble transform Kalman filter-OI and ensemble square-root filter
analysis schemes. Mon. Wea. Rev., 135, 1055–1076.
Wang, X., 2010: Incorporating ensemble covariance in the Gridpoint Statistical
Interpolation (GSI) variational minimization: A mathematical framework. Mon.
Wea. Rev., 138, 2990–2995.
Wang, Y., 1995: An inverse balance equation in sigma coordinates for model
initialization. Mon. Wea. Rev., 123, 482–488.
Yablonsky, R. M. and I. Ginis, 2008: Improving the ocean initialization of coupled
hurricane-ocean models using feature-based data assimilation. Mon. Wea. Rev.,
136, 2592-2607.
Yablonsky, R. M. and I. Ginis, 2009: Limitation of one-dimensional ocean models for
coupled hurricane-ocean model forecasts. Mon. Wea. Rev., 137, 4410–4419.
Yablonsky, R. M., I. Ginis, E. W. Uhlhorn and A. Falkovich, 2006: Using AXBTs to
improve the performance of coupled hurricane–ocean models. Preprints, 27th
Conf. on Hurricanes and Tropical Meteorology, Monterey, CA, Amer. Meteor.
Soc., 6C.4. [Available online at
http://ams.confex.com/ams/pdfpapers/108634.pdf.]
Yablonsky, R. M. and I. Ginis, 2013: Impact of a warm ocean eddy’s circulation on
hurricane–induced sea surface cooling with implications for hurricane intensity.
Mon. Wea. Rev., 141, 997-1021.
Yablonsky, R. M., I. Ginis, and B. Thomas, 2013: MPIPOM-TC: A new ocean modeling
system with flexible initialization for improved coupled hurricane-ocean model
forecasts. In preparation.
Zeng, X., M. Zhao and R. E. Dickinson, 1998: Intercomparison of bulk aerodynamic
algorithms for the computation of sea surface fluxes using TOGA COARE and
TAO data. J. Climate, 11, 2628–2644.
Zhang, C., Y. Wang and K. Hamilton, 2011: Improved representation of boundary
layer clouds over the Southeast Pacific in ARW-WRF using a modified Tiedtke
cumulus parameterization scheme. Mon. Wea. Rev., 139, 3489–3513.
Zhang, J. A., P. G. Black, J. R. French and W. M. Drennan, 2008: First direct
measurements of enthalpy flux in the hurricane boundary layer: The CBLAST
results. Geophys. Res. Lett., 35, L14813, doi:10.1029/2008GL034374.
Zhang, J. A., S. Gopalakrishnan, F. Marks, R. F. Rogers and V. Tallapragada, 2012: A
developmental framework for improving hurricane model physical
parameterizations using aircraft observations. Trop. Cycl. Res. and Rev., 1, 1–11.